The purpose of physics is the description and interpretation of physical reality based on observation. To this end, mathematics has been a fundamental tool for formalizing this reality through models, which in turn have allowed predictions to be made that have subsequently been verified experimentally. This creates an astonishing connection between reality and abstract logic, one that suggests a deep relationship beyond their conceptual definitions. In fact, the ability of mathematics to accurately describe physical processes can lead us to think that reality is nothing more than a manifestation of a mathematical world.

But perhaps it is necessary to define in greater detail what we mean by this. Usually, when we refer to mathematics we think of concepts such as theorems or equations. However, we can take another view of mathematics as an information processing system, in which the above concepts can be interpreted as a compact expression of the behavior of the system, as shown by algorithmic information theory [1].

In this way, **physical laws determine how the information that describes the system is processed**, establishing a space-time dynamic. As a consequence, **a parallelism is established between the physical system and the computational system, which, from an abstract point of view, are equivalent**. This equivalence is somewhat astonishing, since in principle we assume that the two systems belong to totally different fields of knowledge.

But beyond this fact, we can ask what consequences can be drawn from this equivalence. Computability theory [2] and information theory [3] [1] provide criteria for determining the computational reversibility and complexity of a system [4]. In particular:

- In a reversible computing system (RCS), the amount of information remains constant throughout the dynamics of the system.
- In a non-reversible computing system (NRCS), the amount of information never increases throughout the dynamics of the system.
- The complexity of a system corresponds to its most compact expression, called the Kolmogorov complexity, and is an absolute measure.
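The contrast between the first two cases can be sketched with two elementary logic gates. This is an illustrative Python example, not taken from the original text: the CNOT gate is a bijection on its inputs, so it preserves information, while AND maps several inputs onto one output and therefore discards it.

```python
from itertools import product

# Reversible gate: CNOT maps (a, b) -> (a, a XOR b). It is a bijection,
# so the input can always be recovered and no information is discarded.
def cnot(a, b):
    return (a, a ^ b)

# Irreversible gate: AND maps two bits to one bit. Several inputs
# collapse onto the same output, so information is explicitly discarded.
def and_gate(a, b):
    return a & b

inputs = list(product([0, 1], repeat=2))
cnot_outputs = {cnot(a, b) for a, b in inputs}
and_outputs = {and_gate(a, b) for a, b in inputs}

print(len(inputs), len(cnot_outputs))  # 4 4 -> bijection, reversible
print(len(inputs), len(and_outputs))   # 4 2 -> many-to-one, irreversible
```

Counting distinct outputs makes the point directly: the reversible gate has as many outputs as inputs, the irreversible one does not.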

It is important to note that in an NRCS information is not lost, but explicitly discarded. This means that there is no fundamental reason why such information could not be maintained, since the complexity of an RCS remains constant. In practice, the implementation of computing systems is non-reversible in order to optimize resources, a consequence of the technological limitations of their implementation. In fact, the energy currently needed for this is much higher than the bound established by Landauer's principle [5].
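As an illustration of that bound, the Landauer limit at room temperature can be computed directly. This is a hedged sketch; the Boltzmann constant and the choice of 300 K are standard values assumed for the example, not figures from the original text.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # assumed room temperature, K

# Landauer's principle: erasing one bit of information dissipates
# at least k_B * T * ln(2) of energy as heat.
E_landauer = k_B * T * math.log(2)

print(f"{E_landauer:.3e} J per erased bit")  # ~2.87e-21 J
```

Real logic gates today dissipate orders of magnitude more energy per operation than this limit, which is the gap the paragraph above refers to.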

If we focus on the analysis of **reversible physical systems**, such as quantum mechanics, relativity, Newtonian mechanics or electromagnetism, **we can observe invariant physical magnitudes that are a consequence of computational reversibility**. These are determined by unitary mathematical processes, which means that every process has an inverse process [6]. But the difficulties in understanding reality from the point of view of mathematical logic seem to arise immediately, with thermodynamics and quantum measurement being paradigmatic examples.
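A minimal sketch of this unitarity, using the Hadamard gate as an example (an illustrative choice, not taken from the text): because the gate is unitary, applying it twice returns the original state, so the evolution can always be undone and no information is lost.

```python
import math

# The Hadamard gate, a 2x2 unitary acting on a qubit state vector.
# H is its own inverse (H @ H = I), so the evolution is reversible.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(U, psi):
    # Matrix-vector product for a 2-dimensional state.
    return [sum(U[i][j] * psi[j] for j in range(2)) for i in range(2)]

psi0 = [1.0, 0.0]      # basis state |0>
psi1 = apply(H, psi0)  # superposition (|0> + |1>) / sqrt(2)
psi2 = apply(H, psi1)  # back to |0>: the process has an inverse

print(psi1)
print([round(x, 10) for x in psi2])  # [1.0, 0.0]
```

The same holds for any unitary operator: its conjugate transpose undoes it, which is the mathematical form of the reversibility discussed above.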

In the case of quantum measurement, the state of the system before the measurement is a superposition of states, and when the measurement is made the state collapses into one of the possible states of the system [7]. This means that the quantum measurement scenario corresponds to that of a non-reversible computational system, in which the information in the system decreases when the superposition of states disappears, making the system non-reversible as a consequence of this loss of information.
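This loss can be quantified with the Shannon entropy of the measurement-outcome distribution. The following is an illustrative sketch, with an equal two-state superposition assumed as the example:

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)), in bits; terms with p == 0 contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before measurement: an equal superposition yields outcome
# probabilities (1/2, 1/2) -> one bit of uncertainty.
before = shannon_entropy([0.5, 0.5])

# After measurement: the state has collapsed to a single outcome,
# probabilities (1, 0) -> zero bits; the superposition is gone.
after = shannon_entropy([1.0, 0.0])

print(before, after)  # 1.0 0.0
```

The drop from one bit to zero is the informational signature of the collapse described above.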

This implies that **physical reality systematically loses information, which poses two fundamental contradictions. The first is the fact that quantum mechanics is a reversible theory** and that observable reality is based on it. **The second is that this loss of information contradicts the systematic increase of classical entropy**, which in turn poses a deeper contradiction, since in classical reality information increases spontaneously as a consequence of the increase of entropy.

The solution to the first contradiction is relatively simple if we eliminate the anthropic vision of reality. In general, the process of quantum measurement introduces the concept of observer, which creates a certain degree of subjectivity that is very important to clarify, as it can lead to misinterpretations. In this process there are two clearly separated layers of reality, the quantum layer and the classical layer, which have already been addressed in previous posts. **The realization of quantum measurement involves two quantum systems**: one that we define as the system to be measured, and another that corresponds to the measurement system, which can be considered a quantum observer; both have a quantum nature. **As a result of this interaction, classical information emerges**, and it is here that the classical observer is located, who can be identified, for example, with a physicist in a laboratory.

Now consider that the measurement is structured in two blocks: one, the quantum system under observation; the other, the measurement system, which includes both the quantum observer and the classical observer. In this case the interpretation is that the quantum system under measurement is an open quantum system that loses quantum information in the measurement process, and that, as a result, a smaller amount of classical information emerges. In short, this scenario offers a negative balance of information.

In the quantum layer, by contrast, two quantum systems interact and, it can be said, mutually observe each other according to unitary operators, so the combined system is closed and the exchange of information has a null balance. As a result of this interaction, the classical layer emerges. There then seems to be a positive balance of information, since classical information emerges from the process. But what really happens is that the emerging information, which constitutes the classical layer, is simply a simplified view of the quantum layer. For this reason we can say that the classical layer is an emergent reality.

So, it can be said that the **quantum layer is formed by subsystems that interact with each other in a unitary way**, constituting a closed system in which the information and, therefore, the complexity of the system is invariant. As a consequence of these interactions, **the classical layer emerges as an irreducible reality of the quantum layer**.

As for the contradiction produced by the increase in entropy, the reasons justifying this behavior seem more subtle. However, a first clue may lie in the fact that this increase occurs only in the classical layer. It must also be considered that, according to algorithmic information theory, the complexity of a system, and therefore the amount of information that describes it, comprises both the processed information and the information necessary to describe the processor itself.

**A physical scenario that can illustrate this situation is the case of the Big Bang** [8], in which it is considered that the entropy of the system at its beginning was small or even null. This is so because the microwave background radiation shows a fairly homogeneous pattern, so the amount of information needed for its description, and therefore its entropy, is small. But if we build a computational model of this scenario, it is evident that the complexity of the system has since increased formidably, which is logically incompatible with a closed model of constant complexity. This indicates that the model is incomplete not only in its information, but also in its description of the processes that govern it. But what physical evidence do we have that this is so?
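A crude way to see the link between homogeneity and low complexity is to use compressed size as an upper bound on Kolmogorov complexity. This is an illustrative sketch, with zlib and the byte patterns chosen as assumptions of the example: a homogeneous pattern, like the early universe's near-uniform radiation, needs far less information to describe than a disordered one.

```python
import random
import zlib

# Kolmogorov complexity is uncomputable, but compressed size gives a
# rough upper bound on the information needed to describe a pattern.
random.seed(0)

homogeneous = bytes([42] * 4096)                          # uniform pattern
disordered = bytes(random.randrange(256) for _ in range(4096))  # no structure

print(len(zlib.compress(homogeneous)))  # tiny: a short description suffices
print(len(zlib.compress(disordered)))   # near 4096: incompressible
```

Both inputs have the same size, but their shortest descriptions differ by two orders of magnitude, which is the sense in which the homogeneous early universe carried little information.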

Perhaps the clearest example of this is cosmic expansion [9]: the space-time metric changes with time, so that spatial distances grow. **To explain this behavior, the existence of dark energy has been postulated as the engine of this process [10]**, which acknowledges in physical form the gaps revealed by mathematical logic. One aspect that is not usually given attention is the interaction between the vacuum and photons, which produces a loss of energy in photons as space-time expands. This loss implies a decrease of information that must necessarily be transferred to space-time.

This situation causes the vacuum, which in the context of classical physics is nothing more than an abstract metric, to become a fundamental physical entity of enormous complexity. Aspects that contribute to this conception of the vacuum are the entanglement of quantum particles [11], decoherence, and zero-point energy [12].

From all of the above, **a hypothesis can be made as to what the structure of reality is from a computational point of view**, as shown in the following figure. If we assume that the quantum layer is a unitary and closed structure, its complexity will remain constant. But its functionality and complexity remain hidden from observation, and it can only be modeled through an inductive process based on experimentation, which has led to the definition of physical models that allow us to describe classical reality. As a consequence, the quantum layer exposes a reality that constitutes the classical layer: a partial view of the underlying reality that, according to theoretical and experimental results, is extremely reduced, and that makes classical reality an irreducible reality.

**The fundamental question that can be raised in this model is whether the complexity of the classical layer is constant or whether it can vary over time**, since it is only bound by the laws of the underlying layer and is a partial and irreducible view of that functional layer. For the classical layer to be invariant, it would have to be closed, and therefore its computational description would have to be closed, which is not the case, since it is subject to the quantum layer. Consequently, the complexity of the classical layer may change over time.

Consequently, **the question arises as to whether there is any mechanism in the quantum layer that justifies the fluctuation of the complexity of the classical layer**. Obviously, one of the causes is quantum decoherence, which makes information observable in the classical layer. Similarly, cosmic expansion produces an increase in complexity, as space-time grows. On the contrary, attractive forces tend to reduce complexity, gravity being the most prominent factor.

From the observation of classical reality we can answer that **currently its entropy tends to grow**, as a consequence of the fact that decoherence and expansion are the predominant causes. However, **one can imagine recession scenarios, such as a Big Crunch, in which entropy would decrease**. Therefore, the entropy trend may be a consequence of the dynamic state of the system.

In summary, it can be said that the amount of information in the quantum layer remains constant, as a consequence of its unitary nature. In contrast, the amount of information in the classical layer is determined by the amount of information that emerges from the quantum layer. The challenge, therefore, is to determine precisely the mechanisms that drive the dynamics of this process. Additionally, it is possible to analyze specific scenarios, which generally correspond to the field of thermodynamics. Other interesting scenarios may be quantum in nature, such as Hugh Everett's Many-Worlds Interpretation (MWI).

#### Bibliography

[1] P. Grünwald and P. Vitányi, "Shannon Information and Kolmogorov Complexity," arXiv:cs/0410002 [cs.IT], 2004.

[2] M. Sipser, Introduction to the Theory of Computation, Course Technology, 2012.

[3] C. E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, 1948.

[4] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2011.

[5] R. Landauer, "Irreversibility and Heat Generation in the Computing Process," IBM J. Res. Dev., vol. 5, pp. 183-191, 1961.

[6] J. J. Sakurai and J. Napolitano, Modern Quantum Mechanics, Cambridge University Press, 2017.

[7] G. Auletta, Foundations and Interpretation of Quantum Mechanics, World Scientific, 2001.

[8] A. H. Guth, The Inflationary Universe, Perseus, 1997.

[9] A. Liddle, An Introduction to Modern Cosmology, Wiley, 2003.

[10] P. J. E. Peebles and B. Ratra, "The Cosmological Constant and Dark Energy," arXiv:astro-ph/0207347, 2003.

[11] A. Aspect, P. Grangier and G. Roger, "Experimental Tests of Realistic Local Theories via Bell's Theorem," Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.

[12] H. B. G. Casimir and D. Polder, "The Influence of Retardation on the London-van der Waals Forces," Phys. Rev., vol. 73, no. 4, pp. 360-372, 1948.