Friday, November 18, 2011

Nature hypes anti-QM crackpot paper by Pusey et al.

Quantum mechanics under attack

The anti-quantum dimwits have flooded the Physics Stack Exchange, are beginning to do the same thing on the Theoretical Physics Stack Exchange, and the same seems to hold for the journal Nature, which has published an incredibly embarrassing article called:

Quantum theorem shakes foundations
by Eugenie Samuel Reich (it's a female name even though Samuel doesn't look like one) which argues that a crackpot preprint by Pusey, Barrett, and Rudolph has finally disproved the probabilistic interpretation of quantum mechanics, after 85 years during which this politically incorrect thing was believed by physicists (and after Max Born got his Nobel prize for the discovery). The stupid people have been waiting 85 years for this moment when quantum mechanics is dethroned, and as Ms Reich tells us, November 2011 is the date of the happy revolution: the wave function is a real object after all. ;-)

It sounds like black humor but with idiots at the journal Nature, these claims are now being printed in black and white and pretend to be serious, too. Needless to say, insincere and mindless people who keep on detecting the wind of political correctness throughout their lives in order to achieve personal gain, such as Sean Carroll, immediately notice that the crank paper is getting "positive reviews". By the way, what "David Wallace" writes in his rant about the superpositions is pure garbage, too.

In order to start, let me say that this is such a remarkable claim that if it is wrong – and it is obviously wrong, as I will discuss below – you should only be able to make it once in your life, especially if it gets to Nature, as long as the system of institutionalized science is functional. It clearly isn't. You don't need to be competent at all. You may produce nothing but garbage throughout your life and you will do just fine.




The big claim is made clear in the very abstract. Some people say that the wave function is real, like positions and momenta in classical physics. Others, like the actual physicists, say that it only has a probabilistic interpretation. The authors (Pusey is a student, the others are lecturers of a sort) boldly claim that they "debunk" the claim that the wave function must be interpreted statistically. (The people who claim that the paper tries to disprove hidden variables or defend proper quantum mechanics etc. couldn't possibly have read the paper, not even the abstract.)

So what's the new "proof" that quantum mechanics has to be wrong, a proof that was celebrated by several anti-quantum idiots on Cosmic Variance under Tom Banks' article on quantum mechanics? You open the PDF of arXiv:1111.3328, skip the silly and totally irrelevant quotations at the beginning (in which even Albert Einstein is accused of being too probabilistic!), and read a few paragraphs on page 2 which begin with
We will show that this statistical view is not compatible with the predictions of quantum theory.
The "proof" that the statistical interpretation of quantum mechanics (see Max Born's 1954 Nobel prize lecture) fails is another funny game about a simple two-qubit system. Each of the two qubits may be found in \(|0\rangle\), \(|1\rangle\), or their combinations such as
\[ |\pm \rangle = \frac{|0\rangle \pm |1\rangle}{\sqrt{2}} \] Fine, so the numbers \(0,1\) or the signs \({+},{-}\) represent the basis of the first qubit's Hilbert space; the same applies to the second qubit.
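If you want to follow along numerically, here is a minimal sketch in Python/numpy (the variable names are mine, not the authors') defining the two single-qubit bases and confirming that \(|0\rangle\) and \(|{+}\rangle\) are normalized but not orthogonal:

```python
import numpy as np

# Computational basis of one qubit
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The rotated basis |+>, |->
ketp = (ket0 + ket1) / np.sqrt(2)
ketm = (ket0 - ket1) / np.sqrt(2)

# <0|+> = 1/sqrt(2), so |0> and |+> are not mutually exclusive outcomes
print(np.dot(ket0, ketp))  # 0.7071...
```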

For each of these two qubits, the initial state is prepared either as \(|0\rangle\) or \(|{+}\rangle\). They're not explicit about these matters but some classical random generator makes the choice and the probability of each of these two states is at least \(q\). They use the bizarre language of a "state" \(\lambda\) that is different from a state vector, something that isn't known under these terms in physics (a commenter in the fast comments reminds me that \(\lambda\) is indeed meant to be a non-quantum "ontological state" of hidden variables, so the very usage of \(\lambda\) means that they violate the basic rules of quantum mechanics: the text makes it clear that they believe they are allowed to do so), and they claim that it is "compatible" with either \(|0\rangle\) or \(|{+}\rangle\), whatever the adjective "compatible" is exactly supposed to mean.

None of these concepts and adjectives exist in physics; the most general interpretation compatible with quantum mechanics would be that \(\lambda\) is a mixed state combining pure states \(|0\rangle\) and \(|{+}\rangle\) with coefficients (probabilities) that may be time-dependent but exceed \(q\).

Fine, so far, aside from some fog and misconceptions peppered in between the lines, it is just a trivial repetition of trivial things about bases in two-state systems. What's the paradox? They prepare the pair of particles in the mixed state \(\lambda\) which is given by
\[ \begin{align} \lambda &= c_1 |00\rangle\langle 00 | + c_2 |0{+}\rangle\langle 0{+} | +\\ &+\, c_3 |{+}0\rangle\langle {+}0 | + c_4 |{+}{+}\rangle\langle {+}{+} | \end{align} \] Its trace \(c_1+c_2+c_3+c_4\) should be one; note that the trace is just this simple sum despite the fact that \(|0\rangle\) and \(|{+}\rangle\) are not orthogonal because the trace is a linear, not a bilinear, object. They don't write the exact inequality that should be satisfied by the coefficients \(c_i\), probably because they haven't mastered the calculus of density matrices, so the whole treatment is a bit ambiguous from the beginning and the precise meaning of the variable \(q\) is clearly ill-defined, but the qualitative assumption is that none of the coefficients is too small.
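For concreteness, here is the same density matrix in the numpy sketch above, for one illustrative choice of the coefficients (all \(c_i = 1/4\): my choice, not the authors'); the trace comes out as the simple sum of the \(c_i\) even though the four projected-upon states are not mutually orthogonal:

```python
# Continuing the snippet above: the four preparation states
# |00>, |0+>, |+0>, |++> of the two-qubit system
preps = [np.kron(a, b) for a in (ket0, ketp) for b in (ket0, ketp)]

c = np.array([0.25, 0.25, 0.25, 0.25])  # illustrative weights, each >= q

lam = sum(ci * np.outer(psi, psi) for ci, psi in zip(c, preps))

print(np.trace(lam))  # 1.0 = c1 + c2 + c3 + c4
```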

What they do is to measure the initial state of the particle by a gadget projecting the state \(|\psi\rangle\) or \(\lambda\) into a new orthonormal basis of the two-qubit system,
\[ \begin{align} |\xi_1\rangle &= \frac{1}{\sqrt{2}} (|0\rangle \otimes |1\rangle + |1\rangle \otimes |0\rangle),\\ |\xi_2\rangle &= \frac{1}{\sqrt{2}} (|0\rangle \otimes |{-}\rangle + |1\rangle \otimes |{+}\rangle),\\ |\xi_3\rangle &= \frac{1}{\sqrt{2}} (|{+}\rangle \otimes |1\rangle + |{-}\rangle \otimes |0\rangle),\\ |\xi_4\rangle &= \frac{1}{\sqrt{2}} (|{+}\rangle \otimes |{-}\rangle + |{-}\rangle \otimes |{+}\rangle). \end{align} \] I originally made the mistake of saying that they're not even orthogonal; but they are, if you properly include the mixed terms. At any rate, it's spectacularly clear what the probabilities are that the initial state is measured in any of these four states. They're
\[ P_i = {\rm Tr}(\lambda |\xi_i\rangle\langle \xi_i|) \] If \(\lambda\) were pure, i.e. equal to \(|\psi\rangle\langle\psi|\), it would be just \(|\langle \psi|\xi_i\rangle|^2\). It's just some totally straightforward linear algebra to calculate the probabilities and a competent quantum physicist has done such things many times. Whatever quantum mechanics predicts is right, and there obviously can't be any ambiguity or inconsistency because all these expressions are well-defined for well-defined \(c_i\). Anyone who doubts, after all those decades of checks of the very same kind, that quantum mechanics properly describes a 2-qubit system must be insane.
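Continuing the numpy sketch, a few lines verify both the orthonormality of the \(|\xi_i\rangle\) basis and the fact that the four Born-rule probabilities are unambiguous and add up to one:

```python
# The orthonormal measurement basis from the paper
xi = [
    (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
    (np.kron(ket0, ketm) + np.kron(ket1, ketp)) / np.sqrt(2),
    (np.kron(ketp, ket1) + np.kron(ketm, ket0)) / np.sqrt(2),
    (np.kron(ketp, ketm) + np.kron(ketm, ketp)) / np.sqrt(2),
]

# <xi_i|xi_j> = delta_ij, once the mixed terms are properly included
gram = np.array([[np.dot(u, v) for v in xi] for u in xi])
print(np.allclose(gram, np.eye(4)))  # True

# P_i = Tr(lambda |xi_i><xi_i|) = <xi_i|lambda|xi_i> for real vectors
P = np.array([v @ lam @ v for v in xi])
print(P, P.sum())  # well-defined probabilities summing to 1.0
```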

So what's their claimed "paradox"? You may see it just three paragraphs later. Because of various orthogonalities,
  • \(|\xi_1\rangle\) can't be detected if the initial state is \(|0\rangle \otimes |0\rangle\),
  • \(|\xi_2\rangle\) can't be detected if the initial state is \(|0\rangle \otimes |{+}\rangle\),
  • \(|\xi_3\rangle\) can't be detected if the initial state is \(|{+}\rangle \otimes |0\rangle\),
  • \(|\xi_4\rangle\) can't be detected if the initial state is \(|{+}\rangle \otimes |{+}\rangle\) (all four vanishing overlaps are checked in the sketch below).
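Continuing the numpy sketch once more, the four orthogonalities above take one loop to confirm:

```python
# <xi_i|psi_i> = 0 for the four diagonal pairs listed above
for v, psi in zip(xi, preps):
    print(abs(np.dot(v, psi)))  # all four are 0.0 up to rounding
```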
Fine, out of 4 possible results, you could exclude 1 initial state. So one may say that we haven't learned much: almost nothing. However, here a miracle takes place and a breathtaking conclusion obtained by the authors follows:
This leads immediately to the desired contradiction. At least \(q^2\) of the time, the measuring device is uncertain which of the four possible preparation methods was used, and on these occasions it runs the risk of giving an outcome that quantum theory predicts should occur with probability \(0\). Importantly, we have needed to say nothing about the value of \(q\) per se to arrive at this contradiction.
Wow. So the "apparatus feels uncertain what to say". Poor guy. However, no emotions of the poor apparatus are needed for the laws of physics to operate. The laws of physics don't feel uncertain and whatever the initial state or density matrix \(\lambda\) is, they end up with one of the four \(|\xi_i\rangle\) results and the probabilities are easily calculable. It's not an "active intelligence" of the apparatus that decides about the probabilities; it's the laws of Nature. A better observer may be more knowledgeable about the initial state and he can make better predictions. If there's no better observer who knows the initial state, no one can calculate the right odds, but that also means that no one can possibly derive any paradox. Ignorance makes it harder to say any sharp things, e.g. to prove a paradox, than full knowledge does.

At any rate, one's (or a particle's or an apparatus') ignorance about some potential for an event doesn't mean that the event can't happen. One may be ignorant about having cancer but cancer may still kill him. One may be ignorant about having no money in his checking account but the police may still visit him because of a bouncing cheque he wrote yesterday. ;-)

What's their problem? If we detected e.g. \(|\xi_1\rangle\), we know with certainty that the initial state wasn't \(|0\rangle \otimes |0\rangle\) because these two vectors are orthogonal. However, that doesn't mean that we can say that the initial state was one of \(|0\rangle \otimes |{+}\rangle\), \(|{+}\rangle \otimes |0\rangle\), \(|{+}\rangle \otimes |{+}\rangle\), because the four preparation states are not orthogonal to each other, i.e. they are not mutually exclusive possibilities.

I hope that you're as puzzled as I am about the origin of their psychological problem. Do the following sentences offer a hint?
This argument shows that no physical state of the system can be compatible with both of the quantum states \(|0\rangle\) and \(|{+}\rangle\). If the same can be shown for any pair of quantum states \(|\psi_0\rangle\) and \(|\psi_1\rangle\), then the quantum state can be inferred uniquely from \(\lambda\). In this case, the quantum state is a physical property of the system, and the statistical view is false.
So they derived that the quantum state – they mean which of the four vectors \(|00\rangle\) ... \(|{+}{+}\rangle\) – can be uniquely inferred from \(\lambda\). They clearly mean from the measurement. Except, as I have already said, it is simply not true. If you make a measurement, you may only eliminate 1 of the 4 choices and 3 other choices remain. And when you list the remaining choices that are still possible, you simply cannot take them from the list \(|00\rangle\) ... \(|{+}{+}\rangle\). Instead, you must list a basis of the 3-dimensional space orthogonal to the choice that was excluded. For example, if \(|00\rangle\) has been excluded, you may say that \(|01\rangle\), \(|10\rangle\), \(|11\rangle\) remain possibilities.

So you clearly can't get the pure state by a single measurement. Moreover, and this is repeating what we said at the end of the last paragraph, if you determine that a 2-state system isn't in the state \(|0\rangle\), it does not mean that it is in the state \(|{+}\rangle\). These two vectors are not mutually exclusive because they're not orthogonal. Instead, in a two-dimensional system, strictly proving that a particle isn't in the state \(|0\rangle\) is equivalent to proving that it is in the orthogonal state \(|1\rangle\). However, \(|1\rangle\) is something other than \(|{+}\rangle\), so being in \(|1\rangle\) is not equivalent to being in \(|{+}\rangle\). Instead, being in \(|1\rangle\) means having a 50% probability of being found either in \(|{+}\rangle\) or in \(|{-}\rangle\).
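Explicitly, inverting the definition of \(|\pm\rangle\) above:
\[ |1\rangle = \frac{|{+}\rangle - |{-}\rangle}{\sqrt{2}}, \qquad |\langle {+}|1\rangle|^2 = |\langle {-}|1\rangle|^2 = \frac{1}{2}. \]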

Statistically, by making lots of measurements of various kinds, you may determine the four coefficients \(c_1,c_2,c_3,c_4\) almost certainly – from the percentages of measurements in which you got the four outcomes \(|\xi_i\rangle\), supplemented by the frequencies in one more basis (the \(|\xi_i\rangle\) percentages alone happen to leave one combination of the \(c_i\) undetermined). Or if you made many measurements of different properties of the states, instead of just the projections onto the \(|\xi_i\rangle\) basis, you could determine the whole \(\lambda\) even if it were arbitrary (but identical in each repetition of the experiment). This is just a frequentist measurement of the probabilities. The possibility that you may measure a distribution (or the density matrix) by many measurements doesn't mean that the distribution (or the density matrix) isn't a probabilistic object. Quite on the contrary, it surely is.
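A minimal Monte Carlo sketch of this frequentist logic (my own illustration, continuing the numpy snippets above; it measures in the computational basis \(|00\rangle,\dots,|11\rangle\), whose outcome frequencies happen to pin down the four \(c_i\) uniquely for this family of preparations):

```python
rng = np.random.default_rng(0)
c_true = np.array([0.1, 0.2, 0.3, 0.4])   # hidden preparation weights

# Computational two-qubit basis |00>, |01>, |10>, |11>
comp = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]

# Linear map from weights to outcome probabilities: P_i = sum_j M_ij c_j
M = np.array([[np.dot(b, psi) ** 2 for psi in preps] for b in comp])

# Simulate many runs: pick a preparation, then a Born-rule outcome
N = 100_000
counts = np.zeros(4)
for _ in range(N):
    psi = preps[rng.choice(4, p=c_true)]
    pr = np.array([np.dot(b, psi) ** 2 for b in comp])
    counts[rng.choice(4, p=pr)] += 1

c_est = np.linalg.solve(M, counts / N)
print(c_est)  # converges to c_true as N grows
```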

Whichever way you choose to read the text, it makes no sense whatsoever. How they suddenly jump to the conclusion that there is a problem with the probabilistic meaning of the wave function remains completely mysterious. Most likely, they think that they may isolate one of the four vectors \(|00\rangle\) ... \(|{+}{+}\rangle\) from the measurement (producing one of the four vectors \(|\xi_i\rangle\)) because they incorrectly assume that "not being \(|0\rangle\)" is the same thing as "being \(|{+}\rangle\)". It's incorrect because these two vectors are not orthogonal and therefore not mutually exclusive.

There obviously can't be any paradox in any of these thought – or real – experiments. My main point is that it is totally unnecessary to read and evaluate some detailed games with the ket vectors and bra vectors because they have nothing to do with the big claim they are trying to make. Their big claim is that quantum mechanics leads to self-contradictory predictions of the probabilities for the 4-state quantum system. This is obviously bullshit because the probabilities are given by totally indisputable, unambiguous formulae and the probabilities also add to one whenever we measure which of the mutually exclusive (orthogonal) alternatives was taken.

The text goes on and repeats the same meaningless "proof" for more than two qubits. They also talk about "experiments" to test something but it is not clear what exactly they want to test. Do they really have doubts that experiments will confirm the quantum mechanical predictions for a trivial 2-qubit system, even though experiments have been doing the very same thing in every other case for 85 years? At any rate, the very usage of the word "experiment" surely makes the article more attractive for the obsessively stupid people, the kind of human crap at the bottom of society that e.g. visits the Shmoit-like crackpot blogs.

It's very hard to pin down where their psychological problem lies. They just generate lots of irrelevant formalism and buzzwords that have nothing to do with the "problem". They have just never learned quantum mechanics properly. When you don't know how something works, it's very likely that you will make lots of incredibly stupid claims that it doesn't work.

And that's the memo.



One special comment about the remarks by Scott Aaronson, a mad engineer. He clearly hasn't read or understood the abstract of the Pusey et al. paper, or he doesn't understand quantum mechanics, because he claims that it's not an anti-quantum-mechanics paper. Instead, he presents it as an important battle deciding whether the "ontic camp" or the "epistemic camp" was right. And he declares one of them the winner.

What Mr Aaronson is missing is that, as you can check by reading the first paragraph of this paper, both the "ontic camp" and the "epistemic camp" are composed of advocates of hidden variables. But hidden variables, contradicting basic principles of quantum mechanics, have been shown inconsistent with observations (even without assuming that quantum mechanics is right) for many decades, so every single "ontist" and every single "epistemist" is an anti-quantum-mechanical crank, and so, apparently, is Mr Aaronson.

Quantum mechanics, as long as it is compatible with the observations, even the very general ones, doesn't allow one to add any hidden variables or make any other drastic modification of this kind. It is of course very easy to derive contradictions in theories with hidden variables because they simply contaminate the pure behavior of quantum mechanics in a dramatic way. However, the paper by Pusey et al. is clearly written by authors who don't understand how genuine quantum mechanics – which is neither ontic nor epistemic but it is definitely probabilistic – works. Their wording makes it explicit that they believe that some kind of hidden variables have to exist.

In particular, quantum mechanics doesn't allow you to assume that the state of the system may be described by an "ontological state" (which would be de facto a classical description). It cannot be described by anything other than pure vectors or density matrices. However, they assume that the system may be prepared in an "ontological state" because they exclude the correct theory, namely quantum mechanics without any hidden variables, a priori: they think it is so "obviously legitimate" that they don't even mention that this is what they do.

The meaningless paper by Pusey et al. is clearly making a bizarre combination of wrong assumptions about the real world – the detailed mixture is completely uninteresting for those who are more interested in physics than psychiatry – and then they probably derive, just in their heads, some confusion resulting from the invalid assumptions, confusion that a physicist would be able to see through within seconds. At any rate, the paper doesn't derive and can't derive any result about quantum mechanics because it violates its rules throughout the text; rest assured that quantum mechanics is as right, as probabilistic, and as clearly avoiding classical ideas about the "real state of a system" as it was before this paper was written down.


snail feedback (10) :


reader Gustav said...

Indeed, the paper is simply drivel, as it has to be. Hardy already demonstrated in his original 2001 paper, "Quantum Theory from Five Reasonable Axioms", that quantum theory can be derived in its entirety from the theory of probability. This was further improved by Schack in 2002, who demonstrated that four axioms were sufficient (Hardy's latest revision is dated 2008). What this means is that no amount of intellectual gymnastics rooted in quantum mechanics can disprove its probabilistic interpretation. What's more, as far back as 1966 Nelson showed that the Schroedinger equation can be derived from... Brownian motion, sic! (Physical Review, vol. 150, no. 4, pp. 1079-1085) Others demonstrated later that the Feynman formulation can similarly be derived from Brownian motion... so there is probability all over the good old QM. Hardy's work basically generalizes these older results.


reader Arun said...

The question being posed is not whether Quantum Theory is probabilistic or not. Of course Quantum Theory is probabilistic.

The question is - is the wavefunction simply a computational device to arrive at probabilities, or is there some reality to it? If it is purely computational, there is no problem with "spooky action at a distance", there is no problem with wave function "collapse" - it is all just ordinary probability theory. However, if the wave function has physical reality, then all these become mysterious.

Now, the difference between being a computational device and being physical reality matters only if there is an experimental difference. That is what this paper is trying to find - an experimental difference. Do they succeed in doing so? I don't know.

But has anyone on this blog even understood the problem they're trying to address? I don't think so. In particular, Motl gets so fond of his own rhetoric that his mind seems to close.

Tom Banks wrote: "I am not a historian of science but my cursory reading of the evidence suggests that Einstein understood completely that there were no paradoxes in QM if the wave function was thought of merely as a device for computing probability. He objected to the contention of some in the Copenhagen crowd that the wave function was real and satisfied a deterministic equation and tried to show that that interpretation violated the principles of causality. It does, but the statistical treatment is the right one. Einstein was wrong only in insisting that God doesn't play dice."

This paper is an attempt to establish a thought-experimental result that the wave function is real in the sense that Tom Banks wrote above.


reader Sandro Magi said...

Hidden variable theories are not "anti-QM", only local hidden variable theories are. The de Broglie-Bohm interpretation is a trivial counter-example to your assertion which has been known for over 50 years, so you're either ignorant, dishonest and intentionally deceiving people, or simply an idiot.

I actually know you're aware of de Broglie-Bohm, because I've posted about it in your comments before, and you replied, so you must be either dishonest or an idiot. Which is it?

As for the paper itself, it's simply another nail in the coffin for local hidden variables akin to Bell's theorem. It's not earth-shattering at this point, but neither is it trivial or hogwash as you seem to think.


reader Luboš Motl said...

Dear Sandro, what you write is simply not true.

There is no working non-local hidden-variable theory that would agree with the observations (including trivial things such as observations of QFT - multi-particle states with spin, particle creation, and annihilation: each of these things is totally incompatible with the Bohmian pseudoscientific approach) and large classes of non-local hidden-variable models have been falsified as rigorously as local hidden variables, see e.g. Zeilinger et al. 2007.

But even before this detailed work whose conclusion was pretty much obvious to everyone who has a clue (which obviously doesn't include you), it's been obvious since 1905 that non-locality flagrantly violates one of the well-established pillars of modern physics, special theory of relativity, so non-local theories can't be the right description of natural phenomena.

Both local and non-local hidden-variable theories are dead.


reader Axel Boldt said...

Your criticism is misguided, basically because you misunderstand the point of the paper, specifically their usage of λ.

Their goal is to refute a particular statistical interpretation of QM, namely the following: (1) every system has a physically complete (but maybe hidden) list of properties λ; (2) the outcome of every measurement on the system is deterministically determined by λ; and (3) the system's quantum state (wavefunction) encodes a mere probability distribution of the λ's.

When they say "the quantum state is compatible with λ" they mean: the probability distribution encoded by the quantum state assigns a non-zero probability to λ.

They then show that this statistical interpretation leads to consequences that are at variance with standard QM predictions. Further, they state that these deviations should be detectable by experiment. They clearly believe that such deviations would not be detected and that the above statistical interpretation is therefore false.

The paper is in no way "anti-QM" and your claim that their derivation somehow assumes mutual exclusivity of two non-orthogonal states is beside the point and false.


reader Luboš Motl said...

Dear Axel, you're exactly as deluded and confused about the complete basics of quantum physics as the authors of the paper.

The fact that the outcomes of experiments can't be deterministically determined has been known since 1925-26 when Max Born's statistical interpretation of QM was fully appreciated and when Heisenberg understood the uncertainty principle.

Even when one wants rigorous proofs, the determinism has been excluded since the first moment when people used QM to analyze the double-slit experiment or any other "equally strong" experiment. If there existed at least in principle a way to predict whether the electron would be observed in the left slit or right slit when shined upon, then it would already be guaranteed that it can't contribute to an interference pattern when it's not shined upon. But it does create it, so determinism and a form of realism is automatically eliminated. This has been explained e.g. by Feynman in the 1964 Messenger Lectures but it had surely been known since the 1920s.

You're just rotating in circles and you never learn anything new.

"When they say "the quantum state is compatible with λ" they mean: the probability distribution encoded by the quantum state assigns a non-zero probability to λ."

This is a completely inconsistent sentence. At the beginning, you said that "λ" was a set of variables including hidden variables, so "λ" is clearly not an object that exists in quantum mechanics. So quantum mechanics can't and doesn't assign probabilities to it. Quantum mechanics only assigns probabilities to observed ("visible") quantities, to the results of measurements.

Your comment, much like the paper, is inconsistently using classical and quantum rules to predict phenomena, but there isn't any coherent theory that would mix them. The only coherent frameworks in physics we know are classical physics and quantum physics. No one has ever defined any rules "in between", so one can't discuss this non-existent possibility at all.

What actually happens is that you, and all the other deluded people including the authors of this nonsensical paper, are still assuming that the world is classical and using classical thinking to try to explain the results of quantum measurements even though this has been known to be impossible for 85 years. Of course it doesn't make any sense, and they may even see that their reasoning is internally inconsistent. They never learn anything from their failures so they will never learn the actual rules of quantum mechanics.

Cheers
LM


reader Axel Boldt said...

You still don't seem to understand the simple logical structure of the paper.

They assume, for the sake of argument, a statistical interpretation of QM that postulates that every system has a complete description λ and that wavefunctions encode probability distributions over these λs. These probability distributions are not the standard QM distributions over values of observables that you confuse them with, nor does this "encoding" need to follow any known QM rules. λ is not an entity of standard QM (and certainly not a mixed state as you claim in your terribly convoluted and confused blog post); it is an object of the postulated interpretation, and their goal is to show that such a λ cannot exist. Or rather: if it existed, then a certain experiment would show outcomes at variance with standard QM predictions.


reader Luboš Motl said...

Axel, your texts are completely incoherent both when it comes to the terminology and the logic. I can't imagine any interpretation that would make your comment anything other than gibberish.

First, you describe some hypothetical crackpot assumptions of their paper and you call it "a statistical interpretation". That's totally wrong because "statistical interpretation" is exactly what you think it's not: it's the assumption that the state vector is purely a tool to calculate probabilities that can be verified statistically, by multiple repetitions of the same situations. See the 1954 Nobel prize lecture by Born which is called "The Statistical Interpretation of Quantum Mechanics". That's exactly what he got his Nobel prize for, that's exactly how quantum mechanics works, and it can't work otherwise which has been known from the early days of QM.

You are clearly ignorant about these basics of QM that have been around for 50-100 years. But you're incoherent when it comes to the presentation of this would-be new hidden-variable pseudoscience as well. You just can't call it an "interpretation of quantum mechanics" because you're clearly trying to construct a totally different, and de facto classical, theory. This language is just like calling creationism "an interpretation of Darwin's theory of evolution". It's just nonsense. It's the very point of quantum mechanics that it's not - it can't be interpreted as - classical physics of any sort.

But even if I ignore your totally absurd way of using the terms "statistical interpretation" and "interpretation" itself, your logic just doesn't add up even with this reversal of the whole terminology. The paper and your logic assume things that flagrantly contradict the basic postulates of quantum mechanics, so they can't possibly find any contradiction in quantum mechanics or any interpretations of it.

You're clearly as deluded a crackpot as the authors of this article itself.


reader boson said...

what the fuck is 'compatible' ... Lumo, u r totally correct.... these authors are being too foggy and confusing the readers with the beginning paragraphs


reader Luboš Motl said...

Exactly, the word "compatible" is a vague ill-defined adjective that stores much of their "magic".

If you search these comments, someone mentions that it suggests that the word means, in their usage, that the probability of something is nonzero according to a given quantum state.

Except that they don't calculate this probability correctly according to the right quantum rules but rather some proto-classical intuitive rules that are never clearly stated.

Moreover, they often pretend that things' being "compatible" means, in some sense, that the probability is not only nonzero (which is almost always the case for any proposition) but "close to 100%" so that it implies something.

It's just vague emotional rubbish.