# Physics and Algorithmic Complexity

Algorithmic Information Theory (AIT) is a field that has found numerous applications in computer science [6,7], but that is not very well-known among physicists. While standard information theory assigns information measures (such as entropy) to probability distributions, AIT is concerned with the information content (and other properties) of concrete outcomes or configurations, such as finite binary strings.

The most well-known notion of AIT is Kolmogorov complexity: given some finite binary string x, its Kolmogorov complexity K(x) is the length of the shortest program for a universal computer that produces x as output. For example, the string

1010…10    (n repetitions of “10”)

has Kolmogorov complexity of (at most) about log(n): we can simply program the computer to “output ‘10’ n times”. All we have to do is to encode the number n (which takes at most about log n bits), together with some fixed set of instructions. On the other hand, a string of length n with no discernible pattern, such as the outcome of n fair coin flips, typically has Kolmogorov complexity of about n: the shortest program simply lists all the digits and tells the computer to print them. In general, the Kolmogorov complexity of a string is the length of its shortest possible compression.
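Although K(x) itself is uncomputable, any off-the-shelf compressor yields an upper bound on it, which makes the contrast between the two examples easy to observe. The following sketch uses Python's zlib purely as an illustration; the compressed length is only a crude stand-in for “the shortest program”:

```python
import os
import zlib

# K(x) is uncomputable, but every compressor gives an upper bound:
# a string that compresses well has low Kolmogorov complexity.
n = 10_000
periodic = b"10" * n           # highly regular: K is about log(n)
random_s = os.urandom(2 * n)   # incompressible with high probability

len_periodic = len(zlib.compress(periodic, 9))
len_random = len(zlib.compress(random_s, 9))

print(len_periodic)  # tiny: far below the 2n bytes of input
print(len_random)    # about 2n bytes: no pattern to exploit
```

The exact numbers depend on the compressor, but the gap between the two compressed lengths mirrors the gap between log(n) and n above.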

Kolmogorov complexity has found numerous applications in physics, in particular in the context of thermodynamics and, more recently, also in quantum information theory. It is therefore a natural idea to try and extend its definition to the quantum case. Based on the definition by Berthiaume, van Dam, and Laplante, we have proven [1-4] that “quantum Kolmogorov complexity” has a number of important properties that resemble its classical counterpart. In particular, its definition depends only up to an additive constant on the choice of universal quantum Turing machine (which relies on some unexpected subtleties), and, for ergodic quantum information sources, its value is closely related to the von Neumann entropy.

However, the current research in our group is concerned with the classical version of AIT, and in particular with the notion of algorithmic probability. In a nutshell, the algorithmic probability P(x) is the probability that a randomly chosen input for a universal computer yields an output that starts with the binary string x. The main significance of algorithmic probability comes from its use in Solomonoff’s theory of inductive inference; an excellent introduction can be found on LessWrong.com.
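The definition can be made concrete with a small Monte Carlo sketch. The toy machine below is of course not universal (the actual definition requires a universal computer), and its program encoding is invented for this example; it merely shows the idea of feeding uniformly random programs to a machine and counting how often the output starts with x:

```python
import random

def toy_machine(bits):
    """A toy (non-universal!) computer: the first 4 program bits encode a
    repeat count k, the next 2 bits a pattern length m in 1..4, and the
    following m bits the pattern; the output is the pattern repeated k times."""
    k = int(bits[0:4], 2)
    m = int(bits[4:6], 2) + 1
    pattern = bits[6:6 + m]
    return pattern * k

def estimate_P(x, samples=50_000, seed=0):
    """Monte Carlo estimate of P(x): the fraction of uniformly random
    10-bit programs whose output starts with the string x."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        program = "".join(rng.choice("01") for _ in range(10))
        if toy_machine(program).startswith(x):
            hits += 1
    return hits / samples

p_reg = estimate_P("101010")  # regular string: noticeable probability
p_rnd = estimate_P("110100")  # no short period: (near) zero on this machine
print(p_reg, p_rnd)
```

On a genuinely universal machine every string receives nonzero probability, but the qualitative picture is the same: simple, regular strings are far more likely outputs of random programs than patternless ones.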

In some sense, Solomonoff induction can be understood as a “gold standard” for any method of predicting the future, given the past: it says that an agent who observes one bit after the other should predict to see either y=0 or y=1 as the next bit with conditional algorithmic probability

P(y | x1,…,xn) = P(x1…xn y) / P(x1…xn)
if x1,…,xn are the bits that the agent has seen previously. In any probabilistic, computable environment, this prediction is guaranteed to converge to the correct one for large n. For example, suppose that the agent observes the outcomes of a repeated coin toss with probability 0.7 for “0” and 0.3 for “1”, but the agent does not know this. In the long run, the algorithmic probability P as above will converge to 0.7 for y=0 and 0.3 for y=1 (with unit probability) – so if the agent simply assigns algorithmic probability P to her future observations, she can be sure to make the best possible guess asymptotically. This remains true if there is an arbitrarily complicated form of memory or “postprocessing” involved, as long as the resulting distribution is in principle computable.
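Solomonoff’s predictor itself cannot be run on a computer, but its convergence behaviour in this coin-toss setting can be mimicked by a simple computable stand-in, Laplace’s rule of succession, which likewise converges to the true bias:

```python
import random

rng = random.Random(42)

# The unknown environment: "1" with probability 0.3, "0" with probability 0.7.
true_p1 = 0.3
ones = 0
n = 0
for _ in range(100_000):
    ones += 1 if rng.random() < true_p1 else 0
    n += 1

# Laplace's rule of succession: predict "1" next with (#ones + 1) / (n + 2).
# Like Solomonoff induction (in this i.i.d. setting), it converges to 0.3.
prediction = (ones + 1) / (n + 2)
print(prediction)
```

Solomonoff induction achieves this kind of convergence not just for i.i.d. coin flips but for every computable environment at once, which is exactly what no simple computable rule can do.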

Now, the key point is that the laws of physics as we know them satisfy these two conditions: they are probabilistic and correspond, in principle, to computable distributions. Hence Solomonoff induction will in principle predict observations at least as well as our best physical theories. This statement in itself does not have any direct application, since it is extremely hard to actually implement Solomonoff induction on a computer (algorithmic probability is not even computable). Nevertheless, it suggests a quite bold idea: what if physics “is nothing but” induction? Can we have a fully information-theoretic approach to physics in which the fundamental starting point is not a notion of “external world”, but of “observation”?

This is one of the main ideas behind our project “Emergent objective reality – from observers to physics via Solomonoff induction”, which is currently funded by the Foundational Questions Institute (FQXi). A project description is available online.

A somewhat more detailed, but preliminary, description (based on the original FQXi application) was presented at the Workshop on Participatory Realism in Stellenbosch, South Africa, in June 2017.

This is an ambitious project with several interdisciplinary aspects (despite its mathematical rigor), and we will give more details here once the first publication is finished. I am glad to work on the philosophical aspects of this with Dr. Michael Cuffaro, who is funded for this project by FQXi as a postdoc at the Rotman Institute of Philosophy at the University of Western Ontario. I am also grateful to the Rotman Institute for supplementing the funding via a Catalyst Grant.

[1] M. Müller, Quantum Kolmogorov complexity and the quantum Turing machine, PhD thesis, TU Berlin, 2007. arXiv:0712.4377

[2] M. Müller, Strongly universal quantum Turing machines and invariance of Kolmogorov complexity, IEEE Trans. Inf. Th. 54(2), 763-780 (2008). arXiv:quant-ph/0605030

[3] F. Benatti, T. Krüger, M. Müller, Ra. Siegmund-Schultze, and A. Szkoła, Entropy and quantum Kolmogorov complexity: a quantum Brudno’s theorem, Commun. Math. Phys. 265(2), 437-461 (2008). arXiv:quant-ph/0506080

[4] M. Müller, On the quantum Kolmogorov complexity of classical strings, Int. J. Quant. Inf. 7(4), 701-711 (2009). arXiv:0707.2924

[5] M. Müller, Stationary Algorithmic Probability, Theoretical Computer Science 411, 113-130 (2010). arXiv:cs/0608095