Three myths about quantum computing — Part 1: Exponential size state space

This is the first post in the series on myths about quantum computing. Here I discuss the first misconception—that the exponential size of a quantum computer's state space is the main feature distinguishing it from its classical cousins—and explain why this should not be used as an argument that quantum computers are likely to be more powerful than classical ones.

Consider a system S with states labelled by n-bit strings. Assume that S is initialized in the all-zeroes state “00…0” and undergoes some kind of physical evolution. Let us consider three scenarios.

Deterministic classical world

If the evolution is deterministic, then one can specify the current state of the system by giving the corresponding label. Thus, one needs n bits of information to describe the state in this case.

Quantum world

If S is a quantum system, it can be in a superposition of all possible states. To describe this superposition, one has to assign a complex number to each state. Thus, in total one has to specify $2^n$ complex numbers (say, to some finite precision), which is exponentially more information than in the deterministic case!
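To make the exponential count concrete, here is a minimal sketch (the parameter n = 10 and the choice of Hadamard gates are illustrative assumptions, not from the text): starting from the all-zeroes basis state and applying a Hadamard gate to every qubit produces a state vector with $2^n$ complex amplitudes.

```python
import numpy as np

# Illustrative example: n qubits, so the state vector has 2**n complex entries.
n = 10

# Start in |00...0>: a length-2**n vector with a single 1 at index 0.
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0

# Single-qubit Hadamard gate.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Apply H to every qubit: the full operator is the n-fold tensor product.
U = np.array([[1.0]], dtype=complex)
for _ in range(n):
    U = np.kron(U, H)
state = U @ state

print(len(state))  # 1024 amplitudes for just 10 qubits
```

Note that even this toy simulation manipulates a $2^n \times 2^n$ matrix, which is exactly the exponential blow-up the myth is built on.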

If you have never heard of this before (or remember your excitement when you heard it for the first time), it is definitely mind-boggling. In fact, this is often mentioned as the main obstacle to efficiently simulating quantum computers, and also as a hand-waving argument to convince the wider public that quantum computers are likely to be exponentially more powerful than classical ones.

Probabilistic classical world

One should not get too excited about the above observation, but instead think more carefully about whether this comparison of deterministic and quantum computers is fair. Since the output of a quantum computer is intrinsically probabilistic, one should be looking at probabilistic evolution in the classical case, not deterministic evolution.

If the transitions between states of the system S are described by probabilities, then the resulting state is a probability distribution over $\{0,1\}^n$. To describe such a distribution, one needs to specify $2^n$ real numbers—one for each n-bit string. This is also exponentially more information than in the deterministic case!
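The same exponential blow-up can be seen in a purely classical simulation. Below is a sketch (the lazy random walk on n-bit strings is my own illustrative choice of probabilistic evolution, not something specified in the text): the distribution is a vector of $2^n$ non-negative reals, updated by a stochastic matrix.

```python
import numpy as np

# Illustrative probabilistic evolution: a lazy random walk on n-bit strings.
# Each step: with probability 1/2 stay put, otherwise flip a uniformly
# random bit. The distribution over {0,1}^n needs 2**n real numbers.
n = 8
N = 2**n

# Start deterministically in "00...0".
dist = np.zeros(N)
dist[0] = 1.0

# Column-stochastic transition matrix T: T[y, x] = Pr[x -> y].
T = 0.5 * np.eye(N)
for x in range(N):
    for i in range(n):
        T[x ^ (1 << i), x] += 0.5 / n

# Run the walk for a while; the state stays a valid probability vector.
for _ in range(100):
    dist = T @ dist

print(dist.shape)  # (256,): exponentially many probabilities for n = 8
```

The structure is identical to the quantum case: a vector of exponential length evolved by matrix multiplication, just with non-negative reals in place of complex amplitudes.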

Thus, with this comparison the only difference in the amount of information needed to describe classical and quantum states is a factor of two (since a probability can be specified by a single real number, whereas two real numbers are needed to specify one complex quantum amplitude).
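The factor-of-two gap can be checked with a back-of-the-envelope count of real parameters (n = 20 is an arbitrary illustrative choice):

```python
# Real parameters needed to describe the state of a system with n-bit
# labels in each of the three scenarios discussed above.
n = 20

deterministic = n            # n bits: a single label
probabilistic = 2**n         # one real probability per n-bit string
quantum = 2 * 2**n           # one complex amplitude (two reals) per string

print(quantum // probabilistic)  # quantum/probabilistic gap: a factor of 2
```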

Conclusion

The fact that an exponential amount of information is needed to describe the state space of a particular computational model does not immediately imply that it is exponentially more powerful. To be more specific, let us consider the classes of decision problems that can be solved in polynomial time by the three computational models discussed above: P (deterministic), BPP (probabilistic, with bounded error), and BQP (quantum, with bounded error).

The following inclusions are trivial:

$\mathrm{P} \subseteq \mathrm{BPP} \subseteq \mathrm{BQP}$

However, as of now, none of these inclusions is known to be strict.

Thus, one should not use the exponential size of the state space as an argument that quantum computers are likely to be hard for deterministic classical computers to simulate, since the same argument could equally be made in favor of probabilistic classical computers.

To me it seems that the main difference between probabilistic and quantum computers is the underlying number system (non-negative real probabilities versus complex amplitudes). It is a major open problem to show whether this subtle difference can be used to separate the corresponding two complexity classes, BPP and BQP.