what's wrong with entropy?

scruffy

We can begin with these three observations:

1. The existence of a bit is information.

2. The state of a bit is information.

3. The probability of transitions between states is information.

These are three distinct and separable types of information.

Information theory only accounts for one of them.

Proposal: the Second Law is misleading. It suggests changes to configurations (order vs disorder), whereas what's really happening is an increase in the number of possible paths (which is a restatement of the idea that "the universe is expanding").

As the universe expands, it takes more yes/no questions to get an answer. For example - you have a bottle of perfume in the corner of an 8x8 room, and 2 minutes after you open the bottle you want to know where one of the perfume molecules is. But during those two minutes your room has expanded, it's now 9x9. So now you need "more" than 6 bits to locate the molecule.

If you didn't know about the expansion, you'd still be asking the 8x8 question, which would give you an incorrect and meaningless answer.
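The arithmetic here can be checked in a few lines (a minimal sketch; treating the room as a grid of equally likely unit cells, which is an assumption this example makes explicit):

```python
import math

def bits_to_locate(cells: int) -> float:
    # Number of yes/no questions needed to single out one cell
    # from `cells` equally likely cells: log2(cells).
    return math.log2(cells)

print(bits_to_locate(8 * 8))  # 6.0 bits for the 8x8 room
print(bits_to_locate(9 * 9))  # ~6.34 bits once the room has expanded to 9x9
```

Asking the 8x8 question of the 9x9 room leaves roughly a third of a bit unresolved, which is the sense in which the answer becomes meaningless.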
 

 
So far, then, there are at least 6 kinds of mutually orthogonal information:

1. The existence of a thing
2. The states of the thing
3. The probabilities attached to (2)
4. The available configurations for sets of things
5. The probabilities attached to (4)
6. The occupation of states in (4)
 
And a seventh:

The relationships between (2) and (5), and between (3) and (5).
 
We can begin with this simple thought experiment.

Implement the axiom of choice. Put some marbles in a jar. Index them so they can be uniquely identified. Now reach into the jar and grab a marble.

We'll stipulate simple choice, so all marbles are equally likely.

When there are marbles present, the information provided by a choice is just log2(# marbles). So when there is only one marble present, you choose the same marble every time, with probability 1, and the information derived from this choice is 0.

HOWEVER, when there are no marbles present, you reach in and grab "nothing" every time, with probability 1. Therefore your derived information is again 0.
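Both degenerate cases can be checked directly (a sketch using the uniform-choice information log2(n), with the empty jar modeled as a single certain outcome, per the argument above):

```python
import math

def choice_information(n_marbles: int) -> float:
    """Information (in bits) from a uniform draw out of n indexed marbles.

    For n >= 1 this is log2(n). The empty jar (n == 0) is treated as one
    certain outcome ("grab nothing"), which also yields 0 bits -- exactly
    the collision described above.
    """
    if n_marbles <= 1:
        return 0.0  # one marble, or "nothing": probability 1 either way
    return math.log2(n_marbles)

print(choice_information(8))  # 3.0 bits
print(choice_information(1))  # 0.0 bits: same marble every time
print(choice_information(0))  # 0.0 bits: "nothing" every time
```

The measurement operator returns 0 bits for both the singleton jar and the empty jar, even though the underlying sets differ.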

This tells us that, as far as the measurement is concerned, the existence of a marble is the same as the non-existence of a marble. Which is obviously nonsensical.

This situation arises because information theory only tells us the information you can DERIVE using your operator, not the information inherent in the underlying set. Clearly, the set {marble} is different from the empty set {}, but information theory does not distinguish between them.

As such, it is a statement about MEASUREMENT rather than a statement about information itself. The information itself is found in the cardinality of the underlying set, with the empty set having cardinality 0.

This should tell us that any measurement method we can come up with is by definition imperfect, because it cannot distinguish the existence of something from its non-existence.

The only way around this paradox is to wave one's mathematical hands over the issue by stating that the information derivable from nothingness is equivalent to the information derivable from somethingness.
 
Why isn't the increasing amount of information consistent with an increase in disorder?

I see nothing wrong with entropy at all. It's probably the safest law of nature there is.
 
Continuing with this line of thought -

We are dealing with two fundamental mathematical concepts called "existence" and "uniqueness".

Information theory, it turns out, suffers from the exact thing it warns us against - it confuses information with KNOWLEDGE.

Consider: to measure something, you have to know what you're measuring in the first place.

If you want to know whether a bit is on or off (its "state"), it means you already know up front that the bit exists. Same for a channel, or spin up or down, or any other thing you're trying to measure.

In the example of the marble, the first and most basic piece of information is the marble "exists". There's something there, it's not nothing.

You reach into the jar and pull out a marble. It's not a cat or a dog, it's a marble. It is "unique" that way. This is the second piece of information, BEFORE any measurements are made and BEFORE any probabilities are assumed.

If you overeagerly characterize the act of reaching into the jar as a "measurement", you have to assign a probability to the outcome of pulling out a marble, before you can even state that a marble exists. You are PROVIDING information to the system, instead of extracting it.

In mathematics there is the concept of "completeness", which in this case means a complete characterization of what is being measured, within the scope of the measurement. For example, you reach into the jar and pull out "something", then you look at the thing you just pulled out and say "aha! It's a marble", because you are looking at its "attributes": it's round, it's small, it weighs about two ounces, it's a marble. These are measurements, yes - but they have nothing to do with the states of the marble, or with whether marbles are ordered or not. They have to do with "existence" and "uniqueness".

Let us then go further - when you reach into the jar and pull out "something", it's a surprise, unless you know ahead of time that something is in the jar. To describe this in terms of existing information theory, you'd have to assign probabilities to pulling out every single object you could possibly pull out, even objects you've never seen or heard of - AND you'd have to assign a probability to pulling out nothing at all. This is clearly a nonsensical situation.

Instead, what happens is that you "know" in advance that someone has put zero or more marbles into the jar. Or you are ASSUMING that someone has done that. Either way, you are providing information to the measurement.

So I suggest that information theory is incomplete, because it doesn't account for the information that YOU provide. In fact, it assumes (falsely) that all information except for the measurement is a wash.

In science we say you "discover" a marble. That's existence. Then you characterize it, which is uniqueness. Then and only then can you form a hypothesis and guesstimate the probabilities, once you've discovered how marbles interact with the jar, and with each other. If you begin from a completely naive standpoint, the probability of pulling a marble out of the jar is ZERO, because the probability spectrum is a dust.
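The "dust" point can be illustrated numerically (a hypothetical sketch: `n_candidates` stands for the size of a naive uniform prior over everything you might conceivably pull out, a quantity the argument leaves unbounded):

```python
def naive_marble_probability(n_candidates: int) -> float:
    # Under a completely naive uniform prior over n_candidates possible
    # outcomes (marble, cat, dog, ..., nothing), each single outcome,
    # including "marble", gets probability 1 / n_candidates.
    return 1.0 / n_candidates

# As the candidate space grows without bound, the prior probability
# of any particular outcome vanishes:
for n in (10, 1_000, 1_000_000):
    print(n, naive_marble_probability(n))
```

In the limit of a truly unconstrained candidate space, every individual outcome carries probability zero, which is the sense in which the naive probability spectrum is a dust.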
 
Okay, so then, I propose treating an experiment with the FULL information picture, in other words, accounting for information in an OPEN system, rather than an isolated measurement.

We can take our cue from the Lindblad master equations in quantum thermodynamics.

The full information equation becomes something like this:

H = H_ba + H_sa + (H_sb - H_bs) - (H_sp + H_bp)

where the first subscript names the party (b = bath, s = system), the second subscript a stands for anterior and p for posterior, and the terms H_bs and H_sb stand for information transferred from the bath to the system and from the system to the bath, respectively.

Conservation of information then corresponds to H = 0, which says that the information in the combined system+bath after the experiment equals the amount before the experiment, plus the net amount exchanged during it.

An important point is that the bath includes the experimenter.

Just as with energy transfer in quantum thermodynamics, this equation lets us account for changes in the probability matrix due to the experiment - which is to say, increase or decrease in the number and configuration of available states.

The equation is written from the bath's (experimenter's) point of view. You can play with the signs to get the system's viewpoint.
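As a bookkeeping sketch of the proposed balance (the numbers below are illustrative, not measured; the posterior values are chosen so that the balance closes at H = 0):

```python
# Notation from the equation above: first subscript b = bath, s = system;
# second subscript a = anterior, p = posterior.
H_ba, H_sa = 10.0, 4.0   # information before the experiment
H_sb, H_bs = 1.5, 0.5    # transferred system->bath and bath->system
H_sp, H_bp = 4.5, 10.5   # information after the experiment

# Conservation of information: the posterior total (15.0) equals the
# anterior total (14.0) plus the net exchange (1.0), so H vanishes.
H = H_ba + H_sa + (H_sb - H_bs) - (H_sp + H_bp)
print(H)  # 0.0
```

Flipping the sign of the exchange term (H_bs - H_sb instead of H_sb - H_bs) gives the same balance from the system's viewpoint.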
 