In broad terms, the concept of entropy is used to quantify the amount of randomness or uncertainty in the state of a system. There is not one but a whole family of entropies, and some of them play key roles in different areas of quantum information theory. Since it would be impossible to cover them all in a single post, here we will focus on those most relevant to the design and implementation of quantum technologies [1].
Arguably, the most ubiquitous quantum entropy is the von Neumann entropy (in fact, some people simply refer to it as “quantum entropy”). It quantifies the amount of classical uncertainty about the quantum state of a system. If a system A is in a pure state, meaning that it has a well-defined value for some property (e.g., it is a horizontally polarized photon), then there is no uncertainty in its quantum mechanical description, and so its von Neumann entropy S(A) is zero. In any other case we have S(A) > 0, with the maximum reached when A is in a uniform mixture of all the states representing well-defined values of some property (e.g., a photon in a uniform mixture of vertically and horizontally polarized states). It is quite natural to see the von Neumann entropy as a generalization of the Shannon entropy from classical information theory, with pure states playing the role of deterministic random variables [2]. However, this intuition fails spectacularly when entanglement enters the picture:
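To make this concrete, here is a minimal sketch (plain numpy; the helper name von_neumann_entropy is just an illustration, not a standard API) that computes S(ρ) = −Tr(ρ log₂ ρ) from the eigenvalues of a density matrix, for a pure qubit and for a maximally mixed one:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 * log(0) = 0
    return float(-np.sum(evals * np.log2(evals)))

pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])             # |H><H|: a horizontally polarized photon
mixed = np.eye(2) / 2                     # uniform mixture of |H> and |V>

print(von_neumann_entropy(pure))          # 0.0  (no uncertainty)
print(von_neumann_entropy(mixed))         # 1.0  (maximal entropy: one full bit)
```

Using the base-2 logarithm means the entropy comes out in bits, the usual convention in quantum information theory.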
Given two quantum systems A and B in some joint state ρ_AB, one can study, just like in the classical setting, the entropy S(A|B) of A given knowledge of (or, conditioned on) B. It turns out that, when there is entanglement between A and B, this quantity can become negative [3], something that has no classical analog. This negativity is, however, not just a mathematical curiosity, as it can be given a clear operational interpretation. Consider two physicists, Alice and Bob, who share N pairs of entangled qubits, each pair in some joint state ρ_AB.
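For concreteness, the conditional entropy can be computed as S(A|B) = S(AB) − S(B). A small sketch along the same lines as above (again just numpy, and purely illustrative) shows that a maximally entangled pair of qubits gives S(A|B) = −1:

```python
import numpy as np

def von_neumann_entropy(rho):
    # Same helper as in the previous sketch
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Maximally entangled Bell state |Phi+> = (|00> + |11>) / sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_AB = np.outer(psi, psi)

# Bob's reduced state: trace out A (reshape to indices (a, b, a', b'))
rho_B = np.einsum('abac->bc', rho_AB.reshape(2, 2, 2, 2))

S_AB = von_neumann_entropy(rho_AB)   # 0.0: the joint state is pure
S_B = von_neumann_entropy(rho_B)     # 1.0: Bob's share alone is maximally mixed
print(S_AB - S_B)                    # S(A|B) = S(AB) - S(B) = -1.0
```

The joint state is perfectly known (zero entropy), yet each share on its own is maximally uncertain; this is exactly the regime where the classical intuition breaks down.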
Alice would like to send qubits to Bob (they are in different labs) so that he receives all her shares of the N pairs. OK, that’s easy: Alice could just send her N qubits to Bob. Remarkably, she can achieve the same goal by sending only N × S(A|B) qubits (in the limit of large N), which can be significantly fewer than N. So, what happens when S(A|B) is negative? Well, that just means that Alice can achieve her goal without any quantum communication at all and, moreover, that she gains the ability to communicate for free in the future. Amazing, isn’t it?
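As a back-of-the-envelope illustration (the N × S(A|B) rate is an asymptotic result, so the numbers below are only indicative for finite N):

```python
# Merging cost at the asymptotic rate N * S(A|B), for illustration only
N = 1000
S_cond = -1.0                          # S(A|B) for maximally entangled pairs (see above)
cost = N * S_cond                      # -1000: a negative communication "cost"
qubits_to_send = max(0, int(cost))     # 0: no quantum communication needed
ebits_gained = max(0, -int(cost))      # 1000: entanglement left over for future use
print(qubits_to_send, ebits_gained)    # 0 1000
```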
Let’s now talk about the quantum min-entropy, a quantum entropy that is central to the business of generating random numbers. Consider that we have a source of quantum particles P1, P2, … and that we measure them to produce the binary sequence X = X1X2…. If the particles are prepared with a definite value for some property (say, they are electrons with a well-defined value of their spin in the z-direction) and we measure a complementary property (e.g., the spin of those electrons in the x-direction), then quantum theory predicts that the outcomes are uniformly distributed and, more importantly, independent of any other measurement.
From a cryptographic point of view, this means that those outcomes are completely unpredictable for any eavesdropper bound by the laws of quantum mechanics. In real devices, however, due to noise and imperfections, it is impossible to prepare systems with perfectly defined properties and to perfectly measure complementary properties. The conditional quantum min-entropy Hmin(X|E) of X given the eavesdropper’s system E precisely quantifies the number of completely unpredictable bits that can be extracted from those outcomes. But, you may ask yourself, “how do we know the state of the eavesdropper’s quantum memory E? I am sure she wouldn’t just willingly reveal it”. It turns out that we don’t need to know or assume anything about her system: from a complete description of our particles and measurements, we get a description of the worst-case state for the adversary. That’s just the beauty of quantum mechanics.
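Computing the full conditional min-entropy Hmin(X|E) requires modelling the eavesdropper’s side information, which goes beyond a few lines of code; but as a sketch, here is the unconditional classical min-entropy Hmin(X) = −log₂ maxₓ p(x) of the measurement outcomes, for an ideal preparation and for a hypothetical, slightly noisy one (the tilt eps below is an arbitrary illustrative value):

```python
import numpy as np

def min_entropy(probs):
    """Classical min-entropy: H_min(X) = -log2(max_x p(x))."""
    return float(-np.log2(np.max(probs)))

plus = np.array([1.0, 1.0]) / np.sqrt(2)    # |+>: spin-x "up"
minus = np.array([1.0, -1.0]) / np.sqrt(2)  # |->: spin-x "down"

# Ideal case: prepare |0> (definite spin-z), measure spin-x
psi = np.array([1.0, 0.0])
p_ideal = [abs(plus @ psi) ** 2, abs(minus @ psi) ** 2]
print(min_entropy(p_ideal))                 # 1.0: one fully unpredictable bit

# Noisy case: the prepared state is slightly tilted towards |+>
eps = 0.05                                  # illustrative imperfection
psi_noisy = np.cos(np.pi / 4 - eps) * plus + np.sin(np.pi / 4 - eps) * minus
p_noisy = [abs(plus @ psi_noisy) ** 2, abs(minus @ psi_noisy) ** 2]
print(min_entropy(p_noisy))                 # ~0.86: noise reduces the extractable randomness
```

In the ideal case every outcome is worth one full random bit; with noise, a randomness extractor has to compress the raw sequence in order to distil bits that are actually uniform.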
If one could perfectly measure a property of a quantum system that had been prepared with a perfectly definite value for that property, then there wouldn’t be any entropy (neither in our description of the system’s state nor in the outcomes of the measurement). In real implementations, however, this is impossible (or, in other words, noise is unavoidable) and, therefore, some amount of entropy, however small, is always present.
There are different concrete instantiations of the abstract notion of entropy in quantum information theory, many of them with a clearly defined operational interpretation. In this post, we have briefly reviewed two of them: the von Neumann entropy and the quantum min-entropy. Entropy is, undoubtedly, one of the central concepts in quantum information theory. For the interested reader, we recommend the following books on the subject:
[1] See our recent post about quantum technologies.
[2] Remarkably, it was Shannon who based his definition of entropy on the one by von Neumann.
[3] People in the quantum info community are still debating whether to refer to this situation as knowing “more than everything” or “less than nothing”.