The perception of time

In the post “What is the nature of time?” the essence of time has been analyzed from the point of view of physics. Several conclusions have been drawn from it, which can be summarized in the following points:

  • Time is an observable that emerges at the classical level from quantum reality.
  • Time is determined by the sequence of events that drives the dynamics of classical reality.
  • Time is not reversible, but is a unidirectional process determined by the sequence of events (arrow of time), in which entropy grows in the direction of the sequence of events. 
  • Quantum reality has a reversible nature, so the entropy of the system is constant and therefore its description is an invariant.
  • The space-time synchronization of events requires an intimate connection of space-time at the level of quantum reality, which is deduced from the theory of relativity and quantum entanglement.

Therefore, a sequence of events can be established which allows describing the dynamics of a classical system (CS) in the following way:

CS = {…, S_{i-2}, S_{i-1}, S_i, S_{i+1}, S_{i+2}, …}, where S_i is the state of the system at instant i.

This perspective has as a consequence that, from a perceptual point of view, the past can be defined as the sequence {…, S_{-2}, S_{-1}}, the future as the sequence {S_{+1}, S_{+2}, …} and the present as the state S_0.

At this point it is important to emphasize that these states are perfectly distinguishable from a sequential conception (time) since the amount of information of each state, determined by its entropy, verifies that:

  H(S_i) < H(S_{i+1}) [1].
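
A minimal numerical sketch of this entropy growth, assuming a toy model (free diffusion of many particles on a one-dimensional lattice, with the coarse-grained Shannon entropy of the occupation histogram standing in for H), could look like this:

    import numpy as np

    # Toy model: N random walkers on a 1-D lattice, initially concentrated in one
    # cell. The coarse-grained Shannon entropy of the occupation histogram grows
    # from one state S_i of the sequence to the next, H(S_i) < H(S_{i+1}).
    rng = np.random.default_rng(0)
    n_walkers, n_cells, n_steps = 100_000, 101, 5
    positions = np.full(n_walkers, n_cells // 2)          # state S_0

    def entropy(pos):
        counts = np.bincount(pos, minlength=n_cells)
        p = counts / counts.sum()
        p = p[p > 0]
        return max(0.0, float(-(p * np.log2(p)).sum()))

    for i in range(n_steps):
        print(f"H(S_{i}) = {entropy(positions):.3f} bits")
        positions = np.clip(positions + rng.choice([-1, 1], n_walkers), 0, n_cells - 1)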

Therefore, it seems necessary to analyze how this sequence of states can be interpreted by an observer, since the process of perception has been a very prominent factor in the development of philosophical theories on the nature of time.

Without going into the foundation of these theories, since we have exhaustive references on the subject [2], we will focus on how the sequence of events produced by the dynamics of a system can be interpreted from the point of view of the mechanisms of perception [3] and from the perspective currently offered by the knowledge on Artificial Intelligence (AI) [4].

Nevertheless, let us make a brief note on what physical time means. According to the theory of relativity, space-time can be pictured as if the vacuum were filled with a network of clocks and measuring rods forming a reference system, in such a way that its geometry depends on gravitational effects and on the relative velocity of the observer’s own reference system. And it is at this point that we can go a step further in the interpretation of time, if we consider the observer as a perceptive entity and establish a relationship between physics and perception.

The physical structure of space-time

What we are going to discuss next is whether the sequence of states {…, S_{-2}, S_{-1}, S_0, S_{+1}, S_{+2}, …} is a physical reality or, on the contrary, a purely mathematical construction, such that the concepts of past, present and future are exclusively a consequence of the perception of this sequence of states. This means that the only physical reality would be the state of the system S_0, and that the sequences {…, S_{-2}, S_{-1}} and {S_{+1}, S_{+2}, …} would be an abstraction or fiction created by the mathematical model.

The contrast between these two views has an immediate consequence. In the first case, in which the sequence of states has physical reality, the physical system would be formed by the whole set of states {…, S_{-2}, S_{-1}, S_0, S_{+1}, S_{+2}, …}, which would imply a physical behavior different from that of the observed universe, and this reinforces the strictly mathematical nature of the sequence of states.

In the second hypothesis there would only be a physical reality determined by the state of the system S_0, in such a way that physical time would be an emergent property, a consequence of the entropy difference between states, which differentiates them and makes them observable.

This conception must be consistent with the theory of relativity. This is possible if we consider that one of the consequences of its postulates is the causality of the system, so that the sequence of events is the same in all reference systems, even though the space-time geometry, and therefore the emergent space-time magnitudes, are different in each of them.

At this point one could posit the invariance of the sequence of events, together with covariance, as fundamental postulates of the theory of relativity. But that is another subject.

Past, present and future

From this physical conception of space-time, the question that arises is how this physical reality determines or conditions an observer’s perception of time.

Thus, the post “The predictive brain” indirectly discussed the ability of neural tissue to process time, which allows higher living beings to interact with the environment. This requires not only establishing space-time models, but also making space-time predictions [5]. Thus, time perception requires discriminating time intervals of the order of milliseconds in order to coordinate in real time the stimuli produced by the sensory organs and the actions that activate the motor organs. The performance of these functions is distributed in the brain and involves multiple neural structures, such as the basal ganglia, cerebellum, hippocampus and cerebral cortex [6] [7].

To this we must add that the brain is capable of establishing long-term timelines, as shown by the perception of time in humans [8], in such a way that it allows establishing a narrative of the sequence of events, which is influenced by the subjective interest of those events.

This indicates that when we speak generically of “time” we should establish the context to which we refer. Thus, when we speak of physical time we would be referring to relativistic time, as the time that elapses between two events and that we measure by means of what we define as a clock.

But when we refer to the perception of time, a perceptual entity, human or artificial, interprets the past as something physically real, based on the memory provided by classical reality. But such reality does not exist once the sequence of events has elapsed, since physically only the state S_0 exists, so that the states S_i, i < 0, are only a fiction of the mathematical model. In fact, the very foundation of the mathematical model shows, through chaos theory [9], that it is not possible to reconstruct the states S_i, i < 0, from S_0. In the same way it is not possible to define the future states, although here an additional element appears, determined by the increase of the entropy of the system.

With this, we are hypothesizing that the classical universe is S ≡ S_0, and that the states S_i, i ≠ 0, have no physical reality (another matter is the quantum universe, which is reversible, so all its states have the same entropy, although at the moment it is nothing more than a set of mathematical models). Colloquially, this would mean that the classical universe does not have a repository of S_i states. In other words, the classical universe would have no memory of itself.

Thus, it is S that supports the memory mechanisms and this is what makes it possible to make a virtual reconstruction of the past, giving support to our memories, as well as to areas of knowledge such as history, archeology or geology. In the same way, state S provides the information to make a virtual construction of what we define as the future, although this issue will be argued later. Without going into details, we know that in previous states we have had some experiences that we store in our memory and in our photo albums.

Therefore, according to this hypothesis it can be concluded that the concepts of past and future do not correspond to a physical reality, since the sequences of states {…, S_{-2}, S_{-1}} and {S_{+1}, S_{+2}, …} are only a mathematical artifact. This means that past and future are virtual constructs, materialized on the basis of the present state S through the mechanisms of perception and memory. The question that arises, and that we will try to answer, is how the mechanisms of perception construct these concepts.

Mechanisms of perception

Natural processes are determined by the dynamics of the system in such a way that, according to the proposed model, there is only what we define as the present state S. Consequently, if the past and the future have no physical reality, it is worth asking whether plants or inanimate beings are aware of the passage of time.

It is obvious that for humans the answer is yes, otherwise we would not be talking about it. And the reason for this is the information about the past contained in the state S. But this requires the existence of information processing mechanisms that make it possible to virtually construct the past. Similarly, these mechanisms may allow the construction of predictions about future states that constitute the perception of the future [10].

For this, the cognitive function of the brain requires the coordination of neural activity at different levels, from neurons, neural circuits, to large-scale neural networks [7]. As an example of this, the post “The predictive brain” highlights the need to coordinate the stimuli perceived by the sensory organs with the motor organs, in order to be able to interact with the environment. Not only that, but it is essential for the neural tissue to perform predictive processing functions [5], thus overcoming the limitations caused by the response times of neurons.

As already indicated, the perception of time involves several neural structures, which allow the measurement of time at different scales. Thus, the cerebellum allows establishing a time base on the scale of tens of milliseconds [11], analogous to a spatiotemporal metric. Since the dynamics of events is something physical that modifies the state of the system S, the measurement of these changes by the brain requires a physical mechanism that memorizes these changes, analogous to a delay line, which seems to be supported by the cerebellum.

However, this estimation of time cannot be considered at the psychological level as a high-level perceptual functionality, since it is only effective within very short temporal windows, necessary for the performance of functions of an automatic or unconscious nature. For this reason, one could say that time as a physical entity is not perceived by the brain at the conscious level. Thus, what we generally define as time perception is a relationship between events that constitute a story or narrative. This involves processes of attention, memory and consciousness supported in a complex way, involving structures from the basal ganglia to the cerebral cortex, with links between temporal and non-temporal perception mechanisms [12] [13].

Given the complexity of the brain and of the mechanisms of perception, attention, memory and self-awareness, it is not possible, at least for the time being, to understand in detail how humans construct temporal stories. Fortunately, we now have AI models that allow us to understand how this might be possible and how stories and narratives can be constructed from the sequential perception of daily life events. A paradigmatic example of this are the “Large Language Models” (LLMs), which, based on natural language processing (NLP) techniques and neural networks, are capable of understanding, summarizing, generating and predicting new content, and which raise the debate on whether human cognitive capabilities could emerge in these generic models if they were provided with sufficient processing resources and training data [14].

Without delving into this debate, today anyone can verify through this type of applications (ChatGPT, BARD, Claude, etc.) how a completely consistent story can be constructed, both in its content and in its temporal plot, from the human experiences reflected in written texts with which these models have been trained.

Taking these models as a reference provides solid evidence on perception in general and on the perception of time in particular. It should also be noted that these models show how new properties emerge in their behavior as their complexity grows [15]. This gives a clue as to how new perceptual capabilities, or even concepts such as self-awareness, may emerge, although this last point is purely speculative. If this were eventually the case, it would raise the problem discussed in the post “Consciousness from the AI point of view” concerning how to know that an entity is self-aware.

But returning to the subject at hand, what is really important from the point of view of the perception of the passage of time is how the timeline of stories or narratives is a virtual construction that transcends physical time. Thus, the chronological line of events does not refer to a measure of physical time, but is a structure in which a hierarchy or order is established in the course of events.

Virtual perception of time

It can therefore be concluded that the brain only needs to measure physical time in the very short term, in order to be able to interact with the physical environment. From that point on, all that is needed is to establish a chronological order without a precise reference to physical time. Thus we can refer to an hour, day, month, year, or to another event, as a way of ordering events, but always within a purely virtual context. This helps explain how the passage of time is perceived: virtual time stretches or shrinks according to the amount of information or the relevance of events, something that is evident in playful or stressful situations [16].

Conclusions

The first conclusion that results from the above analysis is the existence of two conceptions of time. One is the one related to physical time that corresponds to the sequence of states of a physical system and the other is the one corresponding to the stimuli produced by this sequence of states on a perceptual intelligence.

Both concepts are elusive when it comes to understanding them. We are able to measure physical time with great precision. However, the theory of relativity shows space-time as an emergent reality that depends on the reference system, and the synchronization of clocks and the establishment of a space-measuring structure may seem somewhat contrived, oriented simply to the understanding of space-time from the point of view of physics. On the other hand, the comprehension of cognitive processes still has many unknowns, although new developments in AI allow us to intuit its foundation, which sheds some light on the concept of psychological time.

The interpretation of time as the sequence of events or states occurring within a reference system is consistent with the theory of relativity and also allows for a simple justification of the psychological perception of time as a narrative.

The hypothesis that the past and the future have no physical reality and that, therefore, the universe keeps no record of the sequence of states, supports the idea that these concepts are an emergent reality at the cognitive level, so that the conception of time at the perceptual level would be based on the information contained in the current state of the system, exclusively. 

From the point of view of physics this hypothesis does not contradict any physical law. Moreover, it can be considered fundamental in the theory of relativity, since it assures a causal behavior that would settle the question of temporal irreversibility and the impossibility of traveling either to the past or to the future. In addition, the invariance of the time sequence supports the concept of causality, which is fundamental for the emergent system to be logically consistent.

References

[1] F. Schwabl, Statistical Mechanics, pp. 491-494, Springer, 2006.
[2] N. Emery, N. Markosian and M. Sullivan, “Time,” The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.). [Online]. Available: https://plato.stanford.edu/archives/win2020/entries/time/.
[3] E. R. Kandel, J. H. Schwartz, S. A. Siegelbaum and A. J. Hudspeth, Principles of Neural Science, McGraw-Hill, 2013.
[4] F. Emmert-Streib, Z. Yang, S. Tripathi and M. Dehmer, “An Introductory Review of Deep Learning for Prediction Models With Big Data,” Front. Artif. Intell., 2020.
[5] W. Wiese and T. Metzinger, “Vanilla PP for Philosophers: A Primer on Predictive Processing,” in Philosophy and Predictive Processing, T. Metzinger and W. Wiese, Eds., pp. 1-18, 2017.
[6] J. Hawkins and S. Ahmad, “Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex,” Frontiers in Neural Circuits, vol. 10, no. 23, 2016.
[7] S. Rao, A. Mayer and D. Harrington, “The evolution of brain activation during temporal processing,” Nature Neuroscience, vol. 4, pp. 317-323, 2001.
[8] V. Evans, Language and Time: A Cognitive Linguistics Approach, Cambridge University Press, 2013.
[9] R. Bishop, “Chaos,” The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), 2017. [Online]. Available: https://plato.stanford.edu/archives/spr2017/entries/chaos/. [Accessed: 7 Sep. 2023].
[10] A. Nayebi, R. Rajalingham, M. Jazayeri and G. R. Yang, “Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes,” arXiv:2305.11772v2, 2023.
[11] R. B. Ivry, R. M. Spencer, H. N. Zelaznik and J. Diedrichsen, “The Cerebellum and Event Timing,” Annals of the New York Academy of Sciences, vol. 978, 2002.
[12] W. J. Matthews and W. H. Meck, “Temporal cognition: Connecting subjective time to perception, attention, and memory,” Psychol. Bull., vol. 142, no. 8, pp. 865-907, 2016.
[13] A. Kok, Functions of the Brain: A Conceptual Approach to Cognitive Neuroscience, Routledge, 2020.
[14] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean and W. Fedus, “Emergent Abilities of Large Language Models,” Transactions on Machine Learning Research, 2022. Available: https://openreview.net/forum?id=yzkSU5zdwD.
[15] T. Webb, K. J. Holyoak and H. Lu, “Emergent Analogical Reasoning in Large Language Models,” Nature Human Behaviour, vol. 7, pp. 1526-1541, 2023.
[16] P. U. Tse, J. Intriligator, J. Rivest and P. Cavanagh, “Attention and the subjective expansion of time,” Perception & Psychophysics, vol. 66, pp. 1171-1189, 2004.

Teleportation: Fact and Fiction

When we talk about teleportation, we quickly remember science fiction stories in which both people and artifacts are teleported over great distances instantaneously, overcoming the limitations of relativistic physical laws.

Considering that the theoretical possibility of teleporting quantum information was first proposed in the scientific literature by Bennett et al. [1] (1993), and that it was later demonstrated experimentally by Bouwmeester et al. (1997) [2] and Boschi et al. (1998) [3], we can ask what is actually true in this picture.

For this reason, the aim of this post is to expose the basics of quantum teleportation, analyze its possible practical applications and clarify what is true in the scenarios proposed by science fiction.

Fundamentals of quantum teleportation

Before delving into the fundamentals, it should be clarified that quantum teleportation consists of converting the quantum state of a system into an exact replica of the unknown quantum state of another system with which it is quantum entangled. Therefore, teleportation in no way means the transfer of matter or energy. And, as we will see below, teleportation does not imply the violation of the no-cloning theorem [4] [5] either.

Thus, the model proposed by Bennett et al. [1] is the one shown in the figure below, which is constituted by a set of quantum logic gates that process the states of three qubits, named A, B and ancillary. Qubit A corresponds to the system whose state is to be teleported, while qubit B is the system onto which the quantum state of system A is transferred. The ancillary qubit is needed to perform the transfer.

Once the three qubits are processed by the logic gates located up to the point indicated by ③, they are quantum entangled [6] [7] [8] [9], in such a way that when a measurement is performed on qubit A and the ancillary qubit ④, their joint state collapses into one of the possible states (|00〉, |01〉, |10〉, |11〉).

From this information, qubit B is processed by a quantum gate U, whose functionality depends on the state obtained from the measurement performed on qubit A and the ancillary qubit, according to the following criterion, where I, X, Z are Pauli gates:

  • |00〉 → U = I.
  • |01〉 → U = X.
  • |10〉 → U = Z.
  • |11〉 → U = XZ.

As a consequence, the state of qubit B corresponds to the original state of qubit A, which in turn is modified as a consequence of the measurement process. This means that once the measurement of qubit A and the ancillary qubit is performed, their state collapses, in accordance with the no-cloning theorem [4] [5], which establishes the impossibility of creating copies of a quantum state.
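
The protocol can be checked with a plain state-vector calculation. The following sketch assumes the standard circuit (a Bell pair shared by the ancillary qubit and qubit B, a CNOT and Hadamard applied to qubit A and the ancillary qubit, measurement, and the Pauli correction listed above); the qubit ordering and the amplitudes to be teleported are illustrative assumptions, not taken from the figure:

    import numpy as np

    rng = np.random.default_rng(7)

    # Single-qubit gates used in the protocol.
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    zero = np.array([1, 0], dtype=complex)
    one = np.array([0, 1], dtype=complex)

    def gate_on(g, pos, n=3):
        """Embed a single-qubit gate g on qubit `pos` of an n-qubit register."""
        out = np.array([[1]], dtype=complex)
        for k in range(n):
            out = np.kron(out, g if k == pos else I)
        return out

    def cnot(ctrl, tgt, n=3):
        """CNOT on an n-qubit register (qubit 0 is the leftmost tensor factor)."""
        dim = 2 ** n
        U = np.zeros((dim, dim), dtype=complex)
        for i in range(dim):
            bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
            if bits[ctrl]:
                bits[tgt] ^= 1
            U[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1
        return U

    # State to teleport: |psi>_A = a|0> + b|1> (arbitrary, normalized).
    a, b = 0.6, 0.8j
    psi = np.kron(a * zero + b * one, np.kron(zero, zero))   # order: A, ancillary, B

    psi = cnot(1, 2) @ (gate_on(H, 1) @ psi)   # Bell pair between ancillary and B
    psi = gate_on(H, 0) @ (cnot(0, 1) @ psi)   # process A with the ancillary qubit

    # Measure qubits A and ancillary; each outcome has probability 1/4.
    probs = [float(np.sum(np.abs(psi[4 * mA + 2 * mN: 4 * mA + 2 * mN + 2]) ** 2))
             for mA in (0, 1) for mN in (0, 1)]
    outcome = rng.choice(4, p=probs)
    mA, mN = outcome >> 1, outcome & 1

    # Collapse: qubit B keeps the two amplitudes compatible with the outcome.
    qB = psi[4 * mA + 2 * mN: 4 * mA + 2 * mN + 2]
    qB = qB / np.linalg.norm(qB)

    # Classical correction on B according to the measured bits (I, X, Z, XZ).
    if mN: qB = X @ qB
    if mA: qB = Z @ qB

    print(f"measured |{mA}{mN}>, state of B:", np.round(qB, 3))   # equals (a, b)

Whatever the measured outcome, the sketch prints the original amplitudes on qubit B, while qubit A no longer holds them, which is consistent with the no-cloning theorem.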

From a practical point of view, once the three qubits are entangled, qubit B can be moved to another spatial position, a displacement constrained by the laws of general relativity, so the velocity of qubit B cannot exceed the speed of light. In addition, the measurement result of qubit A and the ancillary qubit must be transferred to the location of qubit B by means of a classical information channel, so the information transfer speed cannot exceed the speed of light either. The result is that teleportation makes it possible to transfer the state of a quantum particle to another remotely located quantum particle, but this transfer is bound by the laws of general relativity, so it cannot exceed the speed of light.

It is very important to note that in reality the only thing transferred between qubit A and qubit B is the information describing the wave function, since the particles that physically support the qubits are not teleported. This raises a fundamental question concerning the meaning of teleportation at the level of classical reality, which we will analyze in the context of complex systems consisting of multiple qubits.

But a fundamental aspect in determining the nature of information is the fact that teleportation is based on the transfer of information, which is another indication that information is the support of reality, as we concluded in the post “Reality as an Information Process”.

Quantum teleportation of macroscopic objects

Analogous to the teleportation scenario proposed by Bennett et al [1], it is possible to teleport the quantum state of a complex system consisting of N quantum particles. As shown in the figure below, teleportation from system A to system B requires the use of N ancillary qubits.

This is because the number of combinations of the coefficients a_i of the wave function |ψ_C〉 and their signs is of the order of 2^(2N). When the measurement of the qubits of system A and the ancillary qubits is performed, 2N classical bits are obtained, which encode the 2^(2N) configurations of the unitary transform U. With this information, the coefficients of the wave function |ψ_C〉 can be rearranged, transforming the wave function of system B into |ψ〉. For example, teleporting a system of N = 10 qubits already requires transmitting 20 classical bits, which select one of 2^20 ≈ 10^6 possible correction operations.

Consequently, from the theoretical point of view, the teleportation of complex quantum systems consisting of a large number of particles is possible. However, its practical realization faces the difficulty of maintaining the quantum entanglement of all particles, as a consequence of quantum decoherence [10]. This causes the quantum particles to no longer be entangled as a consequence of the interaction with the environment, which causes the transferred quantum information to contain errors.

Since the decoherence effect grows exponentially with the number of particles forming the quantum system, it is evident that the teleportation of N-particle systems is in practice a huge challenge, since the complete system is composed of 3N particles. The difficulty is even greater if one considers that, in the preparation of the teleportation scenario, system A, system B and the ancillary qubits will be in the same location, but system B will subsequently have to move to another space-time location for the teleportation to make any practical sense. This places system B under physical conditions that make decoherence much more likely and produce a higher error rate in the transferred quantum state with respect to the original quantum state of system A.

But suppose that these limitations are overcome in such a way that it is possible in practice to teleport macroscopic objects, even objects of a biological nature. The question arises: what properties of the teleported object are transferred to the receiving object?

In principle, it can be assumed that the receiving object has the same properties as the original object from the point of view of classical reality, since after the teleportation is completed the receiving object has the same wave function as the teleported object.

In the case of inanimate objects it can be assumed that the classical properties of the receiving object are the same as those of the original object, since its wave function is exactly the same. This must be so since the observables of the object are determined by the wave function. This means that the receiving object will not be distinguishable from the original object, so for all intents and purposes it must be considered the same object. But from this conclusion the question again arises as to what is the nature of reality, since the process of teleportation is based on the transfer of information between the original object and the receiving object. Therefore, it seems obvious that information is a fundamental part of reality.

Another issue is the teleportation of biological objects. In this case the same argument could be used as for inanimate objects. However, it must be considered that in the framework of classical reality decoherence plays a fundamental role, since classical reality emerges as a consequence of the interaction of quantum systems, which observe one another and produce the collapse of their wave functions, from which the states of classical reality emerge.

This makes the entanglement of biological systems required by teleportation incompatible with what is defined as life, since this process would inhibit decoherence and therefore the emergence of classical reality. This issue has already been discussed in the posts “Reality as an irreducible layered structure” and “A macroscopic view of Schrödinger’s cat”, in which it is made clear that a living being is a set of independent quantum systems, not entangled among themselves. Therefore, entangling all these systems would require the inhibition of all biological activity, something that would certainly have a profound effect on what is defined as a living being.

Moreover, if teleportation is to be used to move an object to another location, system B must be relocated to that location before the measurements on system A and the ancillary system are made, a displacement governed by the laws of general relativity. Additionally, once the measurement has been performed, the information must be transferred to the location of system B, which is also limited by general relativity. In short, the teleportation process has no practical advantage over a classical transport process, especially considering that it is also susceptible to quantum errors.

Consequently, the practical applications of quantum teleportation are limited to the implementation of quantum networks and quantum computing systems, the structure of which can be found in the specialized literature [11] [12].

A bit of theory

The functionality of quantum systems is based on tensor calculus and quantum computation [13]. In particular, in order to illustrate the mathematical foundation underpinning quantum teleportation, the figure below shows the functionality of the Hadamard and CNOT logic gates needed to implement quantum teleportation.

Additionally, the following figure shows the functionality of the Pauli gates, necessary to perform the transformation of the wave function of qubit B, once the measurement is performed on the A and auxiliary qubits.
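
As a complement to the figures, the gates involved can be written as explicit matrices. The following sketch (illustrative, with the control qubit of the CNOT taken as the first qubit) also shows how a Hadamard followed by a CNOT turns |00〉 into the Bell pair shared by the ancillary qubit and qubit B:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])                    # Pauli X (bit flip)
    Z = np.array([[1, 0], [0, -1]])                   # Pauli Z (phase flip)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])                   # control = first qubit

    ket00 = np.array([1, 0, 0, 0])
    bell = CNOT @ np.kron(H, I) @ ket00               # (|00> + |11>)/sqrt(2)
    print(np.round(bell, 3))                          # [0.707 0. 0. 0.707]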

Conclusion

As discussed, quantum teleportation allows the transfer of quantum information between two remote locations by means of particle entanglement. This makes it possible to implement quantum communication and computing systems.

Although for the moment its experimental realization is limited to a very small number of particles, from a theoretical point of view it can be applied to macroscopic objects, which raises the possibility of applying it to transport objects of classical reality, even objects of a biological nature.

However, as has been analyzed, the application of teleportation to macroscopic objects poses a difficulty as a consequence of quantum decoherence, which implies the appearance of errors in the transferred quantum information.

On the other hand, quantum teleportation does not involve overcoming the limitations imposed by the theory of relativity, so the fictitious idea of using quantum teleportation as a means of transferring macroscopic objects at a distance instantaneously is not an option. But in addition, it must be considered that quantum entanglement of biological objects may not be compatible with what is defined as life.

[1] C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres and W. K. Wootters, “Teleporting an Unknown Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels,” Phys. Rev. Lett., vol. 70, pp. 1895-1899, 1993.
[2] D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter and A. Zeilinger, “Experimental quantum teleportation,” arXiv:1901.11004v1 [quant-ph], 1997.
[3] D. Boschi, S. Branca, F. De Martini, L. Hardy and S. Popescu, “Experimental Realization of Teleporting an Unknown Pure Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels,” Physical Review Letters, vol. 80, no. 6, pp. 1121-1125, 1998.
[4] W. K. Wootters and W. H. Zurek, “A Single Quantum Cannot be Cloned,” Nature, vol. 299, pp. 802-803, 1982.
[5] D. Dieks, “Communication by EPR devices,” Physics Letters A, vol. 92, no. 6, pp. 271-272, 1982.
[6] E. Schrödinger, “Probability Relations between Separated Systems,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 32, no. 3, pp. 446-452, 1936.
[7] A. Einstein, B. Podolsky and N. Rosen, “Can Quantum-Mechanical Description of Physical Reality be Considered Complete?,” Physical Review, vol. 47, pp. 777-780, 1935.
[8] J. S. Bell, “On the Einstein Podolsky Rosen Paradox,” Physics, vol. 1, no. 3, pp. 195-200, 1964.
[9] A. Aspect, P. Grangier and G. Roger, “Experimental Tests of Realistic Local Theories via Bell’s Theorem,” Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.
[10] H. D. Zeh, “On the Interpretation of Measurement in Quantum Theory,” Found. Phys., vol. 1, no. 1, pp. 69-76, 1970.
[11] T. Liu, “The Applications and Challenges of Quantum Teleportation,” Journal of Physics: Conference Series, vol. 1634, no. 1, 2020.
[12] Z.-H. Yan, J.-L. Qin, Z.-Z. Qin, X.-L. Su, X.-J. Jia, C.-D. Xie and K.-C. Peng, “Generation of non-classical states of light and their application in deterministic quantum teleportation,” Fundamental Research, vol. 1, no. 1, pp. 43-49, 2021.
[13] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2011.

Entropy: A relativistic invariant

Since the establishment of the principles of relativity, the problem has arisen of determining the transformations that allow thermodynamic parameters, such as entropy, temperature, pressure or heat transfer, to be expressed in the relativistic context, in a way analogous to what was achieved in the context of mechanics for quantities such as space-time coordinates and momentum.

The first efforts in this field were made by Planck [1] and Einstein [2], arriving at the expressions:

        S’ = S,  T’ = T/γ,  p’ = p,  γ = (1 – (v/c)²)^(-1/2)

Where S, T, p are the entropy, temperature and pressure of the inertial thermodynamic system at rest I, and S’, T’, p’ are the entropy, temperature and pressure observed from the inertial system I’ in motion, with velocity v.

But in the 1960s this conception of relativistic thermodynamics was revised, and two different points of view were put forward. On the one hand, Ott [3] and Arzeliès [4] proposed that the observed temperature of a body in motion must be T’ = Tγ. Subsequently, Landsberg [5] proposed that the observed temperature must be T’ = T.
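
A small numerical sketch of the three competing predictions, for an arbitrary rest temperature and a few illustrative velocities, makes the disagreement explicit:

    import math

    C = 299_792_458.0                 # speed of light, m/s

    def gamma(v):
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    T = 300.0                         # rest-frame temperature, K (illustrative)
    for frac in (0.1, 0.5, 0.9):      # v as a fraction of c
        g = gamma(frac * C)
        print(f"v = {frac:.1f}c  Planck/Einstein T' = {T/g:6.1f} K   "
              f"Ott/Arzeliès T' = {T*g:6.1f} K   Landsberg T' = {T:6.1f} K")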

All these cases are based on purely thermodynamic arguments of energy transfer by heat and work, such that ∆E = ∆Q + ∆W. However, van Kampen [6] and later Israel [7] analyze the problem from a relativistic point of view, such that G = Q + W, where G is the increment of the energy-momentum four-vector, Q and W are the four-component vectors corresponding to the irreversible and reversible parts of the thermodynamic process, and ∆Q, the time component of Q, plays the role of heat.

Thus, the van Kampen-Israel model can be considered the basis for the development of thermodynamics in a relativistic context, offering the advantage that it does not require the concepts of heat and energy, the laws of thermodynamics being expressed in terms of the relativistic concept of momentum-energy.

In spite of this, none of the models provides a theoretical justification that allows one to determine conclusively the relation between the temperature of the thermodynamic system at rest and the one observed from the system in motion, so the controversy raised by the different models remains unresolved today.

To complicate the situation further, the experimental determination of the observed temperature poses a challenge of enormous difficulty. The problem is that the observer must move within the thermal bath located in the inertial system at rest. To find out the temperature observed from the moving reference system, Landsberg proposed a thought experiment intended to determine the relativistic transformation of the temperature experimentally. As a result of this proposal, he recognized that the measurement scenario may be unfeasible in practice.

In recent years, algorithms and computational capabilities have made it possible to propose numerical solutions aimed at resolving the controversy over relativistic transformations for a thermodynamic system. As a result, it is concluded that any of the temperature relations proposed by the different models can be true, depending on the thermodynamic assumptions used in the simulation [8] [9], so the resolution of the problem remains open.

The relativistic thermodynamic scenario

In order to highlight the difficulty inherent in measuring, from the inertial system I’, the temperature of a thermodynamic body at rest in the inertial system I, it is necessary to analyze the measurement scenario.

Thus, as Landsberg and Johns [10] make clear, the determination of the temperature transformation must be made by means of a thermometer attached to the observer by means of a brief interaction with the black body under measurement. To ensure that there is no energy loss, the observer must move within the thermodynamic system under measurement, as shown in the figure below.

This scenario, which may seem bizarre and may not be realizable in practice, clearly shows the essence of relativistic thermodynamics. But it should not be confused with temperature measurement in a cosmological scenario, in which the observer does not move inside the object under measurement.

Thus, in the case of the measurement of the surface temperature of a star, the observer does not move within the thermodynamic context of the star, so that the temperature may be determined using Wien’s law, which relates the wavelength of the emission maximum of a black body and its temperature T, such that T = b/λ_max, where b is a constant (b ≅ 2.9×10⁻³ m·K).
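
As a quick illustration of Wien’s law (with a peak wavelength close to the solar one, used here purely as an example):

    # Wien's displacement law: T = b / lambda_max.
    b = 2.898e-3                 # Wien's constant, m·K
    lambda_max = 502e-9          # observed peak wavelength, m (roughly the solar case)
    T = b / lambda_max
    print(f"T ≈ {T:.0f} K")      # ≈ 5770 K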

In this case the measured wavelength λ_max must be corrected for several factors (see https://en.wikipedia.org/wiki/Redshift), such as:

  • The redshift or blueshift produced by the Doppler effect, a consequence of the relative velocity of the reference systems of the star and the observer.
  • The redshift produced by the expansion of the universe, which is a function of the spatial scale factor at the times of emission and observation of the photon.
  • The redshift produced by the gravitational effect of the mass of the star.

As an example, the following figure shows the concept of the redshift produced by the expansion of the universe.

Entropy is a relativistic invariant

Although the problem concerning the determination of the observed temperature in the relativistic systems I and I’ remains open, it follows from the models proposed by Planck, Ott and Landsberg that entropy is a relativistic invariant.

This conclusion follows from the fact that the number of microstates in the two systems is identical, so that according to the expression of the entropy S = k ln(Ω), where k is the Boltzmann constant and Ω is the number of microstates, it follows that S = S’, since Ω = Ω’. 
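
A minimal sketch of this counting argument, assuming a toy system of N independent two-state spins (so that Ω = 2^N in any inertial frame):

    import math

    k = 1.380649e-23             # Boltzmann constant, J/K
    N = 100                      # number of two-state spins (illustrative)
    omega = 2 ** N               # number of microstates, the same in I and I'
    S = k * math.log(omega)      # S = k ln(Ω)
    print(f"S = S' = {S:.3e} J/K")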

The invariance of entropy in the relativistic context is of great significance, since it means that the amount of information needed to describe any scenario of reality that emerges from quantum reality is an invariant, independently of the observer.

In the post “An interpretation of the collapse of the wave function” it was concluded that a quantum system is reversible and therefore its entropy is constant and, consequently, the amount of information needed for its description is an invariant. That post also highlights the entropy increase of classical systems, which is deduced from the “Pauli Master Equation” [11], such that dS/dt > 0. This means that the information needed to describe the system grows systematically.

The conclusion drawn from the analysis of relativistic thermodynamics is that the entropy of a classical system is the same regardless of the observer and, therefore, the information needed to describe the system is also independent of the observer.

Obviously, the entropy increment of a classical system, and how this increment of information emerges from quantum reality, remain a mystery. However, the fact that the amount of information needed to describe a system is an invariant independent of the observer suggests that information is a fundamental physical entity at this level of reality.

On the other hand, the description of a system at any level of reality requires information in a magnitude that according to Algorithmic Information Theory is the entropy of the system. Therefore, reality and information are two entities intimately united from a logical point of view.

In short, from both the physical and the logical point of view, information is a fundamental entity. However, the axiomatic structure that configures the functionality from which the natural laws emerge, which determines how information is processed, remains a mystery.

[1] M. Planck, “Zur Dynamik bewegter Systeme,” Ann. Phys., vol. 26, 1908.
[2] A. Einstein, “Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen,” Jahrb. Radioakt. Elektron., vol. 4, pp. 411-462, 1907.
[3] H. Ott, “Lorentz-Transformation der Wärme und der Temperatur,” Zeitschrift für Physik, vol. 175, pp. 70-104, 1963.
[4] H. Arzeliès, “Transformation relativiste de la température et de quelques autres grandeurs thermodynamiques,” Nuovo Cimento, vol. 35, no. 3, pp. 792-804, 1965.
[5] P. Landsberg, “Special relativistic thermodynamics,” Proc. Phys. Soc., vol. 89, pp. 1007-1016, 1966.
[6] N. G. van Kampen, “Relativistic Thermodynamics of Moving Systems,” Phys. Rev., vol. 173, pp. 295-301, 1968.
[7] W. Israel, “Nonstationary Irreversible Thermodynamics: A Causal Relativistic Theory,” Ann. Phys., vol. 106, pp. 310-331, 1976.
[8] D. Cubero, J. Casado-Pascual, J. Dunkel, P. Talkner and P. Hänggi, “Thermal Equilibrium and Statistical Thermometers in Special Relativity,” Phys. Rev. Lett., vol. 99, 170601, 2007.
[9] M. Requardt, “Thermodynamics meets Special Relativity – or what is real in Physics?,” arXiv:0801.2639, 2008.
[10] P. T. Landsberg and K. A. Johns, “The Problem of Moving Thermometers,” Proc. R. Soc. Lond., vol. 306, pp. 477-486, 1968.
[11] F. Schwabl, Statistical Mechanics, pp. 491-494, Springer, 2006.

What is the nature of time?

Undoubtedly, the concept of time is one of the greatest mysteries of nature. Its nature has always been a subject of debate both from the point of view of philosophy and from that of physics. But this has taken on special relevance as a consequence of the development of the theory of relativity, which has marked a turning point in the perception of space-time.

Throughout history, different philosophical theories have been put forward on the nature of time [1], although it is from the twentieth century onwards that the greatest development has taken place, mainly due to advances in physics. Thus, it is worth mentioning the argument against the reality of time put forward by McTaggart [2], according to which time does not exist and the perception of a temporal order is simply an appearance, an argument that has had a great influence on philosophical thought.

However, McTaggart’s argument is based on the ordering of events, as we perceive them. From this idea, several philosophical theories have been developed, such as A-theory, B-theory, C-theory and D-theory [3]. However, this philosophical development is based on abstract reasoning, without relying on the knowledge provided by physical models, which raises questions of an ontological nature. 

Thus, both relativity theory and quantum theory show that the emergent reality is an observable reality, which means that in the case of space-time both spatial and temporal coordinates are observable parameters, emerging from an underlying reality. In the case of time this raises the question: Does the fact that something is past, present or future imply that it is something real? Consequently, how does the reality shown by physics connect with the philosophical thesis?

If we focus on an analysis based on physical knowledge, there are two fundamental aspects in the conception of time. The first and most obvious is the perception of the passage of time, on which the idea of past, present and future is based, which Arthur Eddington defined as the arrow of time [4], which highlights its irreversibility. The second aspect is what Carlo Rovelli [5] defines as “loss of unity” and refers to space-time relativity, which makes the concept of past, present and future an arbitrary concept, based on the perception of physical events.

But, in addition to using physical criteria in the analysis of the nature of time, it seems necessary to analyze it from the point of view of information theory  [6], which allows an abstract approach to overcome the limitations derived from the secrets locked in the underlying reality. This is possible since any element of reality must have an abstract representation, i.e. by information, otherwise it cannot be perceived by any means, be it sensory organ or measuring device, so it will not be an element of reality.

The topology of time

From the Newtonian point of view, the dynamics of classical systems develops in the context of a four-dimensional space-time, with three spatial dimensions (x, y, z) and one temporal dimension (t), so that the state of the system can be expressed as a function f(q, p, t) of the generalized coordinates q and the generalized momenta p, where q and p are tuples (ordered lists of coordinates and momenta) that determine the state of each of the elements that compose the system.

Thus, for a system of point particles, the state of each particle is determined by the coordinates of its position q = (x, y, z) and of its momentum p = (mẋ, mẏ, mż). This representation is very convenient, since it allows the analysis of systems by means of continuous functions of time. However, this view can lead to a wrong interpretation, since treating time as a mathematical variable leads to conceiving it as a reversible variable. This becomes clear if the dynamics of the system is represented as a sequence of states, which according to quantum theory has a discrete nature [7] and can be expressed for a classical system (CS) as:

        CS = {…, S_{i-2}(q_{i-2}, p_{i-2}), S_{i-1}(q_{i-1}, p_{i-1}), S_i(q_i, p_i), S_{i+1}(q_{i+1}, p_{i+1}), S_{i+2}(q_{i+2}, p_{i+2}), …}

According to this representation, we define the past as the sequence {…, S_{i-2}(q_{i-2}, p_{i-2}), S_{i-1}(q_{i-1}, p_{i-1})}, the future as the sequence {S_{i+1}(q_{i+1}, p_{i+1}), S_{i+2}(q_{i+2}, p_{i+2}), …} and the present as the state S_i(q_i, p_i). The question that arises is: do the sequences {…, S_{i-3}(q_{i-3}, p_{i-3}), S_{i-2}(q_{i-2}, p_{i-2}), S_{i-1}(q_{i-1}, p_{i-1})} and {S_{i+1}(q_{i+1}, p_{i+1}), S_{i+2}(q_{i+2}, p_{i+2}), S_{i+3}(q_{i+3}, p_{i+3}), …} have real existence? Or, on the contrary, are they the product of the perception of the emergent reality?
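
To fix ideas, such a discrete sequence of states can be generated explicitly. The following sketch assumes a deliberately simple system (a unit-mass particle in free fall, advanced in small time steps); the system and the numbers are illustrative only:

    # A classical system written as a discrete sequence of states S_i(q_i, p_i).
    dt, g = 0.1, 9.8
    q, p = 0.0, 0.0                 # state S_0(q_0, p_0)
    CS = [(q, p)]
    for i in range(5):
        p = p - g * dt              # momentum update (constant force -g)
        q = q + p * dt              # position update
        CS.append((q, p))           # state S_{i+1}(q_{i+1}, p_{i+1})

    for i, (qi, pi) in enumerate(CS):
        print(f"S_{i}: q = {qi:+.3f}, p = {pi:+.3f}")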

In the case of a quantum system, its state is represented by its wave function Ψ(q), which is the tensor product of the wave functions of the subsystems that compose it:

        Ψ(q,t) = Ψ(q_1,t) ⊗ Ψ(q_2,t) ⊗ … ⊗ Ψ(q_i,t) ⊗ … ⊗ Ψ(q_n,t)

Thus, the dynamics of the system can be expressed as a discrete sequence of states:

        QS = {…, Ψ_{i-2}(q_{i-2}), Ψ_{i-1}(q_{i-1}), Ψ_i(q_i), Ψ_{i+1}(q_{i+1}), Ψ_{i+2}(q_{i+2}), …}

As in the case of the classical system, Ψ_i(q_i) would represent the present state, while {…, Ψ_{i-2}(q_{i-2}), Ψ_{i-1}(q_{i-1})} represents the past and {Ψ_{i+1}(q_{i+1}), Ψ_{i+2}(q_{i+2}), …} the future, although, as will be discussed later, this interpretation is questionable.

However, it is essential to emphasize that, from the point of view of information theory, the sequences of the classical system CS and the quantum system QS have a characteristic that makes their nature, and therefore their interpretation, different. Thus, quantum systems have a reversible nature, since their dynamics is determined by unitary transformations [8], so that all the states of the sequence contain the same amount of information. In other words, their entropy remains constant throughout the sequence:

        H(Ψ_i(q_i)) = H(Ψ_{i+1}(q_{i+1})).
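
This invariance can be checked numerically. The sketch below assumes an arbitrary (randomly generated) density matrix and a random unitary standing in for one step of the dynamics; the von Neumann entropy before and after the step coincides:

    import numpy as np

    rng = np.random.default_rng(0)

    def von_neumann_entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log2(w)).sum())

    # Random 4x4 density matrix (positive semidefinite, unit trace).
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    rho = A @ A.conj().T
    rho = rho / np.trace(rho).real

    # Random unitary (QR decomposition), standing in for one step of the dynamics.
    Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
    rho_next = Q @ rho @ Q.conj().T

    print(von_neumann_entropy(rho), von_neumann_entropy(rho_next))   # equal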

In contrast, classical systems are irreversible [9], so the amount of information of the sequence states grows systematically, such that:

        H(S_i(q_i, p_i)) < H(S_{i+1}(q_{i+1}, p_{i+1})).

Concerning the entropy increase of classical systems, the post “An interpretation of the collapse of the wave function” has dealt with the nature of entropy growth from the “Pauli’s Master Equation” [10], which demonstrates that quantum reality is a source of emergent information towards classical reality. However, this demonstration is abstract in nature and provides no clues as to how this occurs physically, so it remains a mystery. Obviously, the entropy growth of classical systems assumes that there must be a source of information and, as has been justified, this source is quantum reality.

This makes the states of the classical system sequence distinguishable, establishing a directional order. On the contrary, the states of the quantum system are not distinguishable, since they all contain the same information because quantum theory has a reversible nature. And here we must make a crucial point, linked to the process of observation of quantum states, which may lead us to think that this interpretation is not correct. Thus, the classical states emerge as a consequence of the interaction of the quantum components of the system, which may lead to the conclusion that the quantum states are distinguishable, but the truth is that the states that are distinguishable are the emerging classical states.

According to this reasoning the following logical conclusion can be drawn: time is a property that emerges from quantum reality as a consequence of the fact that the classical states of the system are distinguishable, establishing in addition what has been called the arrow of time, in such a way that the sequence of states has a distinguishing characteristic, namely the entropy of the system.

This also makes it possible to hypothesize that time only has an observable existence at the classical level, while at the quantum level the dynamics of the system would not be subject to the concept of time and would therefore be determined by other mechanisms. In principle this may seem contradictory, since the time variable appears explicitly in the formulation of quantum mechanics. In reality this would be nothing more than a mathematical contraption that allows a quantum model to be expressed at the boundary separating the quantum system from the classical system, and thus to describe classical reality from the quantum mathematical model. In this sense it should be considered that the quantum model is nothing more than a mathematical model of the emerging reality that arises from an underlying nature, which for the moment is unknown and which new models, such as string theory, try to interpret.

An argument that can support this idea is also found in the theory of loop quantum gravity (LQG) [11], which is defined as a background-independent theory, meaning that it is not embedded in a space-time structure, and which posits that space and time emerge at distances of about 10 times the Planck length [12].

The arrow of time

When analyzing the sequences of states CS and QS we have alluded to the past, present and future, which would be an emergent concept determined by the evolution of the entropy of the system. This seems clear in classical reality. But as reasoned, the sequence of quantum states is indistinguishable, so it would not be possible to establish the concept of past, present and future.

A fundamental aspect that must be overcome is the influence of the Newtonian view of the interpretation of time. Thus, in the fundamental equation of dynamics:

        F = m d²x/dt²

the time variable appears squared, which indicates that the equation does not distinguish t from -t; that is, it reads the same backward or forward in time, so that the dynamics of the system is reversible. This in its day led to Laplace’s causal determinism, which remained in force until the development of statistical mechanics and Boltzmann’s interpretation of the concept of entropy. To this we must add that throughout the twentieth century scientific development has led, without any doubt, to the conclusion that physics cannot be completely deterministic, in either classical or quantum physics [13].
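
This t → -t symmetry can be illustrated with a frictionless system: integrate forward, reverse the momentum and integrate again, and the initial state is recovered. The sketch assumes a one-dimensional harmonic oscillator (m = k = 1) and a leapfrog integrator, both illustrative choices:

    def leapfrog(q, p, dt, n):
        """Advance a 1-D oscillator (force F = -q) by n steps of size dt."""
        for _ in range(n):
            p -= 0.5 * dt * q
            q += dt * p
            p -= 0.5 * dt * q
        return q, p

    q0, p0 = 1.0, 0.0
    q1, p1 = leapfrog(q0, p0, dt=0.01, n=1000)    # forward evolution
    q2, p2 = leapfrog(q1, -p1, dt=0.01, n=1000)   # reversed momentum = time reversed
    print(q2, -p2)                                # ≈ (1.0, 0.0), the initial state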

Therefore, it can be said that the development of calculus and the use of the continuous variable time (t) in the determination of dynamical processes has been fundamental and very fruitful for the development of physics. However, it must be concluded that this can be considered a mathematical contraption that does not reflect the true nature of time. Thus, when a trajectory is represented on coordinate axes, the sensation is created that time can be reversed at will, which would be justified by the reversibility of the processes.

However, classical processes are always subject to thermodynamic constraints, which make them irreversible. This means that for an isolated system the state evolves in such a way that its entropy, and therefore the amount of information describing the system, grows steadily, so that a future state cannot be reverted to a past state. Consequently, if the state of the system is represented as a function of time, it could be thought that the time variable could be reverted as if a cursor were moved along the time axis, but this does not seem to have physical reality, since the growth of entropy is not compatible with this operation.

To further examine the idea of moving in time as if it were an axis or a cursor, we can consider the evolution of a reversible system, which can reach a certain state S_i, continue to evolve and, after a certain time, reach the state S_i again. But this does not mean that time has been reversed; rather, time always evolves in the direction of the dynamics of the system, and the only thing that happens is that the state of the system can return to a past state in a reversible way. However, in classical systems this is only a hypothetical proposal, since reversible systems are ideal systems free of thermodynamic behavior, such as gravitational, electromagnetic and frictionless mechanical systems. That is to say, ideal models that do not interact with an underlying reality.

In short, the state of a system is a sequence determined by an index that grows systematically. Therefore, the idea of a time axis, although it allows us to visualize and treat systems intuitively, is something we should discard, since it leads us to a misconception of the nature of time. Time is not a free variable, but the perception of a sequence of states.

Returning to the concepts of past, present and future, it can be stated that, according to information theory, the present is supported by the state S_i(q_i, p_i), and therefore is part of classical reality. For the sequence of past states {…, S_{i-3}(q_{i-3}, p_{i-3}), S_{i-2}(q_{i-2}, p_{i-2}), S_{i-1}(q_{i-1}, p_{i-1})} to be a classical reality, these states would have to continue to exist physically, something impossible since it would require an increase of information in the system that is not in accordance with the increase of its entropy; the concept of the past is therefore purely perceptual. Moreover, if this were possible the system would be reversible.

In the case of the future sequence of states {S_{i+1}(q_{i+1}, p_{i+1}), S_{i+2}(q_{i+2}, p_{i+2}), …}, it cannot be a classical reality either, since it occurs with a degree of uncertainty that makes it unpredictable. Even supposing this were possible, the states of the present would have to contain an additional amount of information in order to hold accurate forecasts of the future, which would increase their entropy, in disagreement with observable reality. Therefore, the concept of the future is not a classical reality either, but a purely perceptual concept. In short, it can be concluded that the only classical reality is the state of the present.

The relativistic context

Consequently, classical physics offers a vision of reality as a continuous sequence of states, while quantum physics modifies it, establishing that the dynamics of systems is a discrete sequence of states, the classical view being no more than an appearance at the macroscopic level. The theory of relativity [14] modifies the classical view further, such that the description of a system is a sequence of events. If to this we add the quantum view, the description of the system is a discrete sequence of events.

But in addition, the theory of relativity offers a perspective in which the perception of time depends on the reference system and therefore on the observer. Thus, as the following figure shows, clocks in motion run slower than stationary clocks, so that we can no longer speak of a single time sequence, but of one that depends on the observer.

However, this does not modify the hypothesis put forward, which is to consider time as the perception of a sequence of states or events. This reinforces the idea that time emerges from an underlying reality and that its perception varies according to how it is observed. Thus, each observer has an independent view of time, determined by a sequence of events.

In addition to the relative perception of time, the theory of relativity has deeper implications, since it establishes a link between space and time, such that the relativistic interval

        ds² = c²dt² – dx² – dy² – dz² = c²dt² – (dx² + dy² + dz²)

is invariant and therefore takes the same value in any reference frame.
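
A quick numerical check of this invariance, assuming units with c = 1 and a boost along the x axis with arbitrary illustrative values:

    import numpy as np

    def boost_x(event, v):
        """Lorentz boost along x with velocity v (c = 1); event = (t, x, y, z)."""
        t, x, y, z = event
        g = 1.0 / np.sqrt(1.0 - v * v)
        return np.array([g * (t - v * x), g * (x - v * t), y, z])

    def interval2(event):
        t, x, y, z = event
        return t * t - (x * x + y * y + z * z)

    d = np.array([2.0, 1.0, 0.5, -0.3])        # (dt, dx, dy, dz) in frame I
    d_prime = boost_x(d, v=0.8)                # the same separation seen from I'
    print(interval2(d), interval2(d_prime))    # both 2.66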

As a consequence, both the perception of time and that of space depend on the observer and, as the following figure shows, events that are simultaneous in one reference frame are observed as events occurring at different instants of time in another reference frame, in which they are therefore not simultaneous, giving rise to the concept of relativity of simultaneity.

In spite of this behavior, the view of time as the perception of a sequence of events is not modified, since although the sequences of events in each reference system are correlated, in each reference system there is a sequence of events that will be interpreted as the flow of time corresponding to each observer.

The above arguments are valid for inertial reference frames, i.e. free of acceleration. However, the theory of general relativity [15], based on the principles of covariance and equivalence, establishes the metric of the deformation of space-time in the presence of matter-energy and how this deformation acts as a gravitational field. These principles are defined as:

  • The Covariance Principle states that the laws of physics must take the same form in all reference frames.
  • The Equivalence Principle states that a system subjected to a gravitational field is indistinguishable from a non-inertial reference frame (subjected to acceleration).

It should be noted that, although the equivalence principle was fundamental in the development of general relativity, it is not a fundamental ingredient, and is not verified in the presence of electromagnetic fields. 

It follows from the theory of general relativity that matter-energy curves space-time and, by the equivalence principle, that acceleration produces equivalent effects, paradigmatic examples being the gravitational redshift of photons escaping from a gravitational field, or gravitational lensing. For this reason, it is essential to analyze the concept of time perception from this perspective as well.

Thus, the following figure shows a round trip to Andromeda by a spacecraft propelled with acceleration a = g. It shows the elapsed time t in the Earth reference frame and the proper time T in the spacecraft reference frame, such that the time course in the spacecraft is slower than on Earth by a factor determined by the Lorentz factor γ. Whether the difference in the time course is produced by the velocity of the spacecraft in an inertial system or by its acceleration does not modify the reasoning used throughout this essay, since the time course is determined exclusively, in each of the reference systems, by the sequence of events observed in each of them independently.
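For reference, the standard constant-proper-acceleration ("relativistic rocket") relations connect the proper time T on board with the coordinate time t on Earth. The following sketch uses an arbitrary on-board proper time as an assumption and does not attempt to reproduce the exact figures of the example; it only shows how quickly the two time courses diverge:

    # Minimal sketch: standard relativistic-rocket relations for constant proper
    # acceleration g. The chosen proper time is an arbitrary assumption.
    import math

    c = 299_792_458.0          # m/s
    g = 9.81                   # m/s^2, proper acceleration of the ship
    YEAR = 365.25 * 24 * 3600.0

    def earth_time_and_gamma(T_proper):
        # t = (c/g)*sinh(g*T/c) and gamma = cosh(g*T/c), so that instantaneously dt = gamma*dT
        x = g * T_proper / c
        return (c / g) * math.sinh(x), math.cosh(x)

    T = 5.0 * YEAR                       # proper time on board (assumption)
    t, gamma = earth_time_and_gamma(T)
    print(f"T = {T / YEAR:.2f} years on board -> t = {t / YEAR:.2f} years on Earth, gamma = {gamma:.1f}")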

Therefore, it can be concluded that the perception of time is produced by the sequence of events occurring in the observing reference system. To avoid possible anthropic interpretations, an entity endowed with the ability to detect events and to develop artificial intelligence (AI) algorithms can be proposed as an observer. As a consequence, it can be concluded that the entity will develop a concept of time based on the sequence of events. Evidently, the developed concept will not be reversible, since this sequence is organized by an index.

However, if the event detection mechanisms were not sufficiently accurate, the entity could deduce that the dynamics of the process is cyclic and therefore reversible. Even so, the sequence of events is ordered and will therefore be interpreted as flowing in a single direction.

Thus, identical entities located in different reference systems will perceive a different sequence of events of the dynamics, determined by the laws of relativity. But the underlying reality sets a mark on each of the events that is defined as physical time, and to which the observing entities are inexorably subject in their real time clocks. Therefore, the question that remains to be answered is what the nature of this behavior is.

Physical time

So far, the term perception has been used to sidestep this issue. But it is clear that, although real-time clocks run at different rates in different reference systems, all clocks remain perfectly synchronized. For this to be possible, a total connection of the universe in its underlying reality is necessary, since the clocks located in the different reference systems run in step with one another, regardless of their location, even though they run at different rates.

Thus, in the example of the trip to Andromeda, when the ship returns to Earth, the elapsed time of the trip in the Earth's reference system is t = 153.72 years, while on the ship's clock it is T = 16.92 years, but both clocks remain synchronized through the factor γ, determined by the acceleration g, so that they run according to the expression dt = γdT. The question arises: what indications are there that the underlying reality of the universe is a fully connected structure?

There are several physical clues arising from relativistic and quantum physics, such as space-time in the photon reference frame and quantum particle entanglement. Thus, in the case of the photon γ→∞, so that any interval of time and space in the direction of motion in the reference frame of the observer tends to zero in the reference frame of the photon. If we further consider that the state of the photon is a superposition of states in any direction, the universe for a photon is a singular point without space-time dimensions. This suggests that space-time arises from an underlying reality from which time emerges as a completely cosmologically synchronized reality.

In the context of quantum physics, particle entanglement provides another clue to the interconnections in the structure on which classical reality is based. Thus, the measurement of two entangled particles implies the exchange of quantum information between them independently of their position in space and instantaneously, as deduced from the superposition of quantum states and which Schrödinger posed as a thought experiment in “Schrödinger’s cat” [16]. This behavior seems to contradict the impossibility of transferring information faster than the speed of light, which raised a controversy known as the EPR paradox [17], which has been resolved theoretically and experimentally [18],  [19].

Therefore, at the classical scale information cannot travel faster than the speed of light. However, at the quantum scale reality behaves as if there were no space-time constraints. This indicates that space and time are realities that emerge at the classical scale but do not have a quantum reality, whereas space-time at the classical scale emerges from a quantum reality, which is unknown so far.

But perhaps the argument that most clearly supports the global interconnectedness of space-time is the Covariance Principle, which explicitly recognizes this interconnectedness by stating that the laws of physics must take the same form in all reference frames.

Finally, the question that arises is the underlying nature of space-time. In the current state of development of physics, the Standard Model of particles is available, which describes the quantum interactions between particles in the context of space-time. In this theoretical scheme, space-time is identified with the vacuum, which in quantum field theory is the quantum state with the lowest possible energy; however, this model does not seem to allow a theoretical analysis of how space-time emerges. Perhaps the development of a model of fields that gives physical meaning to the vacuum and that integrates the Standard Model of particles will make it possible in the future to investigate how space-time reality emerges from it.

[1] N. Emery, N. Markosian and M. Sullivan, "Time," The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2020/entries/time/>. [Online].
[2] J. M. E. McTaggart, "The Unreality of Time," Mind, vol. 17, no. 68, pp. 457-474, 1908. http://www.jstor.org/stable/2248314
[3] S. Baron, K. Miller and J. Tallant, Out of Time. A Philosophical Study of Timelessness, Oxford University Press, 2022.
[4] A. S. Eddington, The Nature of the Physical World, Cambridge University Press, 1948.
[5] C. Rovelli, The Order of Time, Riverhead Books, 2018.
[6] C. E. Shannon, "A Mathematical Theory of Communication," The Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, 1948.
[7] P. Ball, Designing the Molecular World, Princeton University Press, 1994.
[8] L. E. Ballentine, Quantum Mechanics. A Modern Development, Chapter 3, World Scientific Publishing Co., 2000.
[9] A. Ben-Naim, A Farewell to Entropy: Statistical Thermodynamics Based on Information, World Scientific Publishing Company, 2008.
[10] F. Schwabl, Statistical Mechanics, pp. 491-494, Springer, 2006.
[11] A. Ashtekar and E. Bianchi, "A Short Review of Loop Quantum Gravity," arXiv:2104.04394v1 [gr-qc], 2021.
[12] L. Smolin, "The case for background independence," arXiv:hep-th/0507235v1, 2005. [Online].
[13] I. Reznikoff, "A class of deductive theories that cannot be deterministic: classical and quantum physics are not deterministic," arXiv:1203.2945v3, 2013. [Online].
[14] A. Einstein, "On the Electrodynamics of Moving Bodies," 1905.
[15] T. P. Cheng, Relativity, Gravitation and Cosmology, Oxford University Press, 2010.
[16] E. Schrödinger, "The Present Situation in Quantum Mechanics" (trans. John Trimmer), Naturwissenschaften, vol. 23, pp. 844-849, 1935.
[17] A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?," Physical Review, vol. 47, pp. 777-780, 1935.
[18] J. S. Bell, "On the Einstein Podolsky Rosen Paradox," Physics, vol. 1, no. 3, pp. 195-200, 1964.
[19] A. Aspect, P. Grangier and G. Roger, "Experimental Tests of Realistic Local Theories via Bell's Theorem," Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.

The predictive brain

While significant progress has been made in the field of neuroscience, and in particular in the neural circuits that support perception and motor activity, the understanding of neural structures, of how they encode information and of the mechanisms that support learning is still under investigation.

Digital audio and image processing techniques and advances in artificial intelligence (AI) are a source of inspiration for understanding these mechanisms. However, it seems clear that these ideas are not directly applicable to brain functionality.

Thus, for example, the processing of an image is static, since digital sensors provide complete images of the scene. In contrast, the information encoded by the retina is not homogeneous, with large differences in resolution between the fovea and the surrounding areas, so that the image composition is necessarily spatially segmented.

But these differences are much more pronounced if we consider that this information is dynamic in time. In the case of digital video processing, it is possible to establish a correlation of the images that make up a sequence. A correlation that in the case of the visual system is much more complex, due to the spatial segmentation of the images and how this information is obtained using the saccadic movements of the eyes.

The information generated by the retina is processed by the primary visual cortex (V1) which has a well-defined map of spatial information and also performs simple feature recognition functions. This information progresses to the secondary visual cortex (V2) which is responsible for composing the spatial information generated by saccadic eye movement.

This structure has been the dominant theoretical framework, in what has been termed the hierarchical feedforward model [1]. However, certain neurons in the V1 and V2 regions have been found to show a surprising response: they seem to know what is going to happen in the immediate future, activating as if they could perceive new visual information before it has been produced by the retina [2]. This behavior is the basis of what is defined as Predictive Processing (PP) [3], which is gaining influence in cognitive neuroscience, although it has been criticized for lacking sufficient empirical support.

For this reason, the aim of this post is to analyze this behavior from the point of view of signal processing techniques and control systems, which show that the nervous system would not be able to interact with the surrounding reality unless it performs PP functions.

A brief review of control systems

The design of a control system is based on a mature technique [4], although the advances in digital signal processing produced in the last decades allow the implementation of highly sophisticated systems. We will not go into details about these techniques and will only focus on the aspects necessary to justify the possible PP performed by the brain.

Thus, a closed-loop control system is composed of three fundamental blocks:

  • Feedback: This block determines the state of the target under control.
  • Control: Determines the actions to be taken based on the reference and the information on the state of the target.
  • Process: Translates the actions determined by the control to the physical world of the target.

The functionality of a control system is illustrated by the example in the figure. In this case the reference is the position of the ball and the objective is for the robot to hit the ball accurately.

The robot sensors must determine in real-time the relative position of the ball and all the parameters that define the robot structure (feedback). From these, the control must determine the robot motion parameters necessary to reach the target, generating the control commands that activate the robot’s servomechanisms.

The theoretical analysis of this functional structure allows determining the stability of the system, which establishes its capacity to correctly develop the functionality for which it has been designed. This analysis shows that the system can exhibit two extreme cases of behavior. To simplify the reasoning, we will eliminate the ball and assume that the objective is to reach a certain position.

In the first case, we will assume that the robot has a motion capability such that it can perform fast movements without limitation, but that the measurement mechanisms that determine the robot's position require a certain processing time Δt. As a consequence, the decisions of the control block are not made in real time, since the decisions at t = ti actually correspond to t = ti−Δt, where Δt is the time required to process the information coming from the sensing mechanisms. Therefore, when the robot approaches the reference point, the control will make decisions as if the robot were still some distance away, which will cause it to overshoot the position of the target. When this happens, the control must correct the motion by reversing the robot's trajectory. This behavior is defined as an underdamped regime.

Conversely, if we assume that the measurement system has a fast response time, such that Δt≊0, but that the robot’s motion capability is limited, then the control will make decisions in real-time, but the approach to the target will be slow until the target is accurately reached. Such behavior is defined as an overdamped regime.

At the boundary of these two behaviors is the critically damped regime that optimizes the speed and accuracy to reach the target. The behavior of these regimes is shown in the figure.
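These three regimes can be reproduced with a minimal sketch of a linear second-order loop (the natural frequency, damping ratios and integration step are arbitrary assumptions): the underdamped case overshoots the reference, the overdamped case approaches it slowly, and the critical case sits at the boundary.

    # Minimal sketch: step response of a linear second-order closed loop
    # x'' + 2*zeta*wn*x' + wn^2*x = wn^2*r, integrated with a simple Euler scheme.
    # The damping ratio zeta selects the regime (values are arbitrary assumptions).
    wn, r, dt, steps = 2.0, 1.0, 0.001, 5000      # 5 seconds of simulated time

    def step_response(zeta):
        x, v, peak = 0.0, 0.0, 0.0
        for _ in range(steps):
            a = wn * wn * (r - x) - 2.0 * zeta * wn * v
            v += a * dt
            x += v * dt
            peak = max(peak, x)                   # track overshoot past the reference r = 1
        return x, peak

    for zeta, regime in [(0.2, "underdamped"), (1.0, "critically damped"), (3.0, "overdamped")]:
        x_end, peak = step_response(zeta)
        print(f"{regime:17} zeta = {zeta}: peak = {peak:.3f}, position after 5 s = {x_end:.3f}")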

Formally, the above analysis corresponds to systems in which the functional blocks are linear. The development of digital processing techniques allows the implementation of functional blocks with a nonlinear response, resulting in much more efficient control systems in terms of response speed and accuracy. In addition, they allow the implementation of predictive processing techniques using the laws of mechanics. Thus, if the reference is a passive entity, its trajectory is known from the initial conditions; if it is an active entity, i.e. one with internal mechanisms that can modify its dynamics, heuristic functions and AI can be used [5].

The brain as a control system

As the figure below shows, the ensemble formed by the brain, the motor organs, and the sensory organs comprises a control system. Consequently, this system can be analyzed with the techniques of feedback control systems.

For this purpose, it is necessary to analyze the response times of each of the functional blocks. In this regard, it should be noted that the nervous system has a relatively slow temporal behavior [6]. Thus, for example, the response time to initiate movement in a 100-meter sprint is 120-165 ms. This time is distributed in recognizing the start signal, the processing time of the brain to interpret this signal and generate the control commands to the motor organs, and the start-up of these organs. In the case of eye movements toward a new target, the response time is 50-200 ms. These times give an idea of the processing speed of the different organs involved in the different scenarios of interaction with reality.

Now, let’s assume several scenarios of interaction with the environment:

  • A soccer player intending to hit a ball moving at a speed of 10 km/hour: in 0.1 s the ball will have moved about 30 cm.
  • A tennis player who must hit a ball moving at 50 km/hour: in 0.1 s the ball will have moved about 150 cm.
  • Gripping a motionless cup by moving the hand at a speed of 0.5 m/s: in 0.1 s the hand will have moved 5 cm.

These examples show that if the brain is considered as a classical control system, it is practically impossible to obtain the precision needed to justify the observed behavior. Thus, in the case of the soccer player, the information obtained by the brain from the sensory organs, in this case sight, will be delayed in time, providing a relative position of the foot with respect to the ball with an error of the order of centimeters, so that the strike on the ball would be very inaccurate.

The same reasoning can be made in the case of the other two proposed scenarios, so it is necessary to investigate the mechanisms used by the brain to obtain an accuracy that justifies its actual behavior, much more accurate than that provided by a control system based on the temporal response of neurons and nerve tissue.

To this end, let’s assume the case of grasping the cup, and let’s do a simple exercise of introspection. If we close our eyes for a moment we can observe that we have a precise knowledge of the environment. This knowledge is updated as we interact with the environment and the hand approaches the cup. This spatiotemporal behavior allows predicting with the necessary precision what will be the position of the hand and the cup at any moment, despite the delay produced by the nervous system.

To this must be added the knowledge acquired by the brain about space-time reality and the laws of mechanics. In this way, the brain can predict the most probable trajectory of the ball in the tennis player’s scenario. This is evident in the importance of training in sports activities since this knowledge must be refreshed frequently to provide the necessary accuracy. Without the above prediction mechanisms, the tennis player would not be able to hit the ball.

Consequently, from the analysis of the behavior of the system formed by the sensory organs, the brain, and the motor organs, it follows that the brain must perform PP functions. Otherwise, and as a consequence of the response time of the nervous tissue, the system would not be able to interact with the environment with the precision and speed shown in practice. In fact, to compensate for the delay introduced by the sensory organs and their subsequent interpretation by the brain, the brain must predict and advance the commands to the motor organs in a time interval that can be estimated at several tens of milliseconds.
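The following toy sketch illustrates this point quantitatively (the numerical values and the use of a simple proportional controller are assumptions for illustration): acting directly on measurements delayed by 100 ms makes the loop overshoot and oscillate around the target, whereas predicting the current state from the delayed measurement plus an efference copy of the issued commands, in the spirit of PP, reaches the target without overshoot.

    # Toy sketch (all numbers are assumptions): reaching a target with feedback
    # delayed by 100 ms, with and without predictive compensation of the delay
    # using an efference copy of the issued commands (a simple forward model).
    dt = 0.01                 # control step: 10 ms
    delay = 10                # sensory delay: 10 steps = 100 ms
    k = 12.0                  # proportional gain
    target = 1.0
    steps = 300               # simulate 3 s

    def reach_target(predict):
        x, peak = 0.0, 0.0
        history = [0.0] * (delay + 1)      # past true positions; history[0] is what is sensed now
        commands = [0.0] * delay           # efference copy of the commands already issued
        for _ in range(steps):
            sensed = history[0]            # delayed measurement: x(t - 100 ms)
            if predict:
                # forward model: advance the delayed measurement using the commands
                # issued during the delay interval
                estimate = sensed + sum(commands) * dt
            else:
                estimate = sensed
            v = k * (target - estimate)    # proportional command
            x += v * dt                    # plant: ideal velocity actuation
            peak = max(peak, x)
            history = history[1:] + [x]
            commands = commands[1:] + [v]
        return x, peak

    for predict in (False, True):
        x_end, peak = reach_target(predict)
        label = "with prediction" if predict else "without prediction"
        # the delayed loop overshoots the target before settling;
        # the predictive loop approaches it monotonically
        print(f"{label:19}: peak = {peak:.3f}, position after 3 s = {x_end:.3f}")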

The neurological foundations of prediction

As justified in the previous section, from the temporal response of the nervous tissue and the behavior of the system formed by the sensory organs, the brain, and the motor organs, it follows that the brain must support two fundamental functions: encoding and processing reference frames of the surrounding reality and performing Predictive Processing.

But what evidence is there for this behavior? It has been known for several decades that there are neurons in the entorhinal cortex and hippocampus that respond to a spatial model, called grid cells [7]. But recently it has been shown that in the neocortex there are structures capable of representing reference frames and that these structures can support both a spatial map and any other functional structure needed to represent concepts, language and structured reasoning [8].

Therefore, the question to be resolved is how the nervous system performs PP. As already advanced, PP is a disputed functionality because of its lack of evidence. The problem it poses is that the number of neurons that exhibit predictive behavior is very small compared to the number of neurons that are activated as a consequence of a stimulus.

The answer to this problem may lie in the model proposed by Jeff Hawkins and Subutai Ahmad [9] based on the functionality of pyramidal neurons [10], whose function is related to motor control and cognition, areas in which PP should be fundamental.

The figure below shows the structure of a pyramidal neuron, which is the most common type of neuron in the neocortex. The dendrites close to the cell body are called proximal synapses so that the neuron is activated if they receive sufficient excitation. The nerve impulse generated by the activation of the neuron propagates to other neurons through the axon, which is represented by an arrow.

This description corresponds to a classical view of the neuron, but pyramidal neurons have a much more complex structure. The dendrites radiating from the central zone are endowed with hundreds or thousands of synapses, called distal synapses, and approximately 90% of the synapses are located on these dendrites. In addition, the upper part of the figure shows dendrites with a longer reach, which have feedback functionality.

The remarkable thing about this type of neuron is that if a group of synapses of a distal dendrite close to each other receives a signal at the same time, a new type of nerve impulse is produced that propagates along the dendrite until it reaches the body of the cell. This causes an increase in the voltage of the cell, but without producing its activation, so it does not generate a nerve impulse towards the axon. The neuron remains in this state for a short period, returning to its relaxed state.

The question is: what is the purpose of these nerve impulses from the dendrites if they are not powerful enough to produce cell activation? This has been an open question that the model proposed by Hawkins and Ahmad [9] intends to solve, proposing that the nerve impulses in the distal dendrites are predictions.

This means that a dendritic impulse is produced when a set of synapses close to each other on a distal dendrite receive inputs at the same time, which indicates that the neuron has recognized a pattern of activity generated by a set of other neurons. When the pattern of activity is detected, a dendritic impulse is created, which raises the voltage in the cell body, putting the cell into what we call a predictive state.

The neuron is then ready to fire. If a neuron in the predictive state subsequently receives sufficient proximal input to create an action potential to fire it, then the neuron fires slightly earlier than it would if the neuron were not in the predictive state.

Thus, the prediction mechanism is based on the idea that multiple neurons in a minicolumn [11] participate in the prediction of a pattern, all of them entering a predictive state, such that when one of them fires it inhibits the firing of the rest. This means that in a minicolumn hundreds or thousands of predictions are made simultaneously over a certain control scenario, such that one of the predictions prevails over the rest, optimizing the accuracy of the process. This justifies the small number of predictive events observed relative to the overall neuronal activity, and also explains why unexpected events or patterns produce greater activity than more predictable or expected ones.

If the neural structure of the minicolumns is taken into account, it is easy to understand how this mechanism involves a large number of predictions for the processing of a single pattern, and it can be said that the brain is continuously making predictions about the environment, which allows real-time interaction.
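The following toy sketch is loosely inspired by this mechanism and is not the authors' implementation (the cell names, context patterns and winner-take-one rule are illustrative assumptions): cells whose distal pattern matches the previous context enter a predictive state, a predicted input activates a single early winner, and an unexpected input makes the whole minicolumn fire.

    # Toy sketch, loosely inspired by the Hawkins-Ahmad proposal (illustrative only):
    # cells in a minicolumn store distal patterns, enter a predictive state when the
    # context matches their pattern, and a predictive cell fires first and inhibits
    # the rest when the proximal input arrives.
    minicolumn = {
        "cell_A": {"distal_pattern": "rising", "predictive": False},
        "cell_B": {"distal_pattern": "falling", "predictive": False},
        "cell_C": {"distal_pattern": "steady", "predictive": False},
    }

    def observe(context, proximal_input):
        # phase 1: distal input (context) puts matching cells into a predictive state
        for cell in minicolumn.values():
            cell["predictive"] = (cell["distal_pattern"] == context)
        # phase 2: proximal input activates the column; a predictive cell fires first
        # and inhibits its neighbors; with no prediction, the whole column bursts
        predictive = [name for name, c in minicolumn.items() if c["predictive"]]
        if proximal_input:
            return predictive[:1] if predictive else list(minicolumn)
        return []

    print(observe("rising", True))    # predicted input -> a single early winner
    print(observe("unknown", True))   # unexpected input -> the whole column fires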

The PP from the point of view of AI

According to the above analysis, it can be concluded that the PP performed by the brain within a time window, of the order of tens of milliseconds, is fundamental for the interaction with the surrounding reality, synchronizing this reality with the perceived reality. But this ability to anticipate perceived events requires other mechanisms such as the need to establish reference frames as well as the ability to recognize patterns.

In the scenario described, the need for reference frames in which objects can be represented, such as the dynamic position of the motor organs and of the objects to be interacted with, is evident. In addition, the brain must be able to recognize such objects.

But these capabilities are common to all types of scenarios, although it is perhaps more appropriate to use the term model as an alternative to a reference frame since it is a more general concept. Thus, for example, in verbal communication, it is necessary to have a model that represents the structure of language, as well as an ability to recognize the patterns encoded in the stimuli perceived through the auditory system. In this case, the PP must play a fundamental role, since prediction allows for greater fluency in verbal communication, as is evident when there are delays in a communication channel. This is perhaps most evident in the synchronism necessary in musical coordination.

The enormous complexity of the nervous tissue and the difficulty of empirically identifying these mechanisms can be an obstacle to understanding their behavior. For this reason, AI is a source of inspiration [12] since, using different neural network architectures, it shows how models of reality can be established and predictions can be made about this reality.

It should be noted that these models do not claim to provide realistic biological models. Nevertheless, they are fundamental mathematical models in the paradigm of machine learning and artificial intelligence and are a fundamental tool in neurological research. In this sense, it is important to highlight that PP is not only a necessary functionality for the temporal prediction of events, but as shown by artificial neural networks pattern recognition is intrinsically a predictive function.

This may go unnoticed in the case of the brain, since pattern recognition achieves such accuracy that the concept of prediction becomes very diluted and appears to be free of probabilistic factors. In contrast, in the case of AI, the mathematical models make it clear that pattern recognition is probabilistic in nature, and practical results show a diversity of outcomes.
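A minimal sketch makes this explicit (the class labels and scores are arbitrary assumptions): the output of a classifier is a probability distribution over patterns, so recognition amounts to predicting the most probable pattern.

    # Minimal sketch (labels and scores are arbitrary assumptions): a classifier's
    # output is a probability distribution over patterns, so recognition is itself
    # a prediction.
    import math

    def softmax(scores):
        m = max(scores.values())
        exp = {label: math.exp(s - m) for label, s in scores.items()}
        total = sum(exp.values())
        return {label: v / total for label, v in exp.items()}

    scores = {"cup": 4.1, "glass": 2.3, "bowl": 0.7}    # raw network outputs (logits)
    probs = softmax(scores)
    prediction = max(probs, key=probs.get)
    print(probs)                                 # the winning pattern carries most, not all, of the probability
    print("predicted pattern:", prediction)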

This diversity depends on several factors. Perhaps the most important is its state of development, which can still be considered very primitive, compared to the structural complexity, processing capacity, and energy efficiency of the brain. This means that AI applications are oriented to specific cases where it has shown its effectiveness, such as in health sciences [13] or in the determination of protein structures [14].

But without going into a deeper analysis of these factors, what can be concluded is that the functionality of the brain is based on the establishment of models of reality and the prediction of patterns, one of its functions being temporal prediction, which is the foundation of PP. 

References

[1] J. DiCarlo, D. Zoccolan and N. Rust, "How does the brain solve visual object recognition?," Neuron, vol. 73, pp. 415-434, 2012.
[2] A. Clark, "Whatever next? Predictive brains, situated agents, and the future of cognitive science," Behav. Brain Sci., vol. 34, pp. 181-204, 2013.
[3] W. Wiese and T. Metzinger, "Vanilla PP for philosophers: a primer on predictive processing," in Philosophy and Predictive Processing, T. Metzinger and W. Wiese, Eds., pp. 1-18, 2017.
[4] G. F. Franklin, J. D. Powell and A. Emami-Naeini, Feedback Control of Dynamic Systems, 8th edition, Pearson, 2019.
[5] C. Su, S. Rakheja and H. Liu, "Intelligent Robotics and Applications," 5th International Conference, ICIRA, Proceedings, Part II, Montreal, QC, Canada, 2012.
[6] A. Roberts, R. Borisyuk, E. Buhl, A. Ferrario, S. Koutsikou, W.-C. Li and S. Soffe, "The decision to move: response times, neuronal circuits and sensory memory in a simple vertebrate," Proc. R. Soc. B, vol. 286, 20190297, 2019.
[7] M. B. Moser, "Grid Cells, Place Cells and Memory," Nobel Lecture, Aula Medica, Karolinska Institutet, Stockholm, http://www.nobelprize.org/prizes/medicine/2014/may-britt-moser/lecture/, 2014.
[8] M. Lewis, S. Purdy, S. Ahmad and J. Hawkins, "Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells," Frontiers in Neural Circuits, vol. 13, no. 22, 2019.
[9] J. Hawkins and S. Ahmad, "Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex," Frontiers in Neural Circuits, vol. 10, no. 23, 2016.
[10] G. N. Elston, "Cortex, Cognition and the Cell: New Insights into the Pyramidal Neuron and Prefrontal Function," Cerebral Cortex, vol. 13, no. 11, pp. 1124-1138, 2003.
[11] V. B. Mountcastle, "The columnar organization of the neocortex," Brain, vol. 120, pp. 701-722, 1997.
[12] F. Emmert-Streib, Z. Yang, S. Tripathi and M. Dehmer, "An Introductory Review of Deep Learning for Prediction Models With Big Data," Front. Artif. Intell., 2020.
[13] A. Bohr and K. Memarzadeh, Artificial Intelligence in Healthcare, Academic Press, 2020.
[14] E. Callaway, "'It will change everything': DeepMind's AI makes gigantic leap in solving protein structures," Nature, vol. 588, pp. 203-204, 2020.

An interpretation of the collapse of the wave function

The aim of this post is to hypothesize about the collapse of the wave function based on thermodynamic entropy and computational reversibility. This will be done using arguments based on statistical mechanics, both quantum and classical, and on the theory of computation and information theory.

In this sense, it is interesting to note that most natural processes have a reversible behavior, among which we must highlight the models of gravitation, electromagnetism and quantum physics. In particular, the latter is the basis of all the models of emergent reality that make up classical (macroscopic) reality.

On the contrary, thermodynamic processes have an irreversible behavior, which contrasts with the previous models and poses a contradiction originally pointed out by Loschmidt, since these processes are ultimately based on quantum physics, which has a reversible nature. It should also be emphasized that thermodynamic processes are essential to understand the nature of classical reality, since they are present in all macroscopic interactions.

This raises the following question. If the universe as a quantum entity is a reversible system, how is it possible that irreversible behavior exists within it?

This irreversible behavior is materialized in the evolution of thermodynamic entropy, in such a way that the dynamics of thermodynamic systems is determined by an increase of entropy as the system evolves in time. This means that the complexity of the emerging classical reality grows steadily in time, and with it the amount of information of the classical universe.

To answer this question we will hypothesize how the collapse of the wave function is the mechanism that determines how classical reality emerges from the underlying quantum nature, justifying the increase of entropy and as a consequence the rise in the amount of information.

In order to go deeper into this topic, we will analyze it from the point of view of the theory of computation and the theory of information, emphasizing the meaning and nature of the concept of entropy. This point of view is fundamental, since amount of information and entropy are two names for the same phenomenon.

Reversible computing

First we must analyze what reversible computation is and how it is implemented. To begin with, it should be emphasized that classical computation has an irreversible nature, which is made clear by a simple example, the XOR gate.

This gate performs the logical function X⊕Y from the logical variables X and Y, in such a way that in this process the system loses one bit of information, since the input information corresponds to two bits of information, while the output has only one bit of information. Therefore, once the X⊕Y function has been executed, it is not possible to recover the values of the X and Y variables.

According to Landauer’s principle [1], this loss of information means that the system dissipates energy in the environment, increasing its entropy, so that the loss of one bit of information dissipates a minimum energy k·T·ln2 in the environtment. Where k is Boltzmann’s constant and T is the absolute temperature of the system.

Therefore, for a classical system to be reversible it must not lose information, so two conditions must be met:

  • The number of input and output bits must be the same.
  • The relationship between inputs and outputs must be bijective.

The following figure shows these criteria. However, meeting them does not mean that the resulting logic function can be considered a complete set for implementation in a reversible computational context, since if the relationship between inputs and outputs is linear it cannot implement nonlinear functions.

It can be shown that for this to be possible the number of bits must be n ≥ 3, examples being the Toffoli gate (X,Y,Z)→(X,Y,Z⊕XY) and the Fredkin gate (X,Y,Z)→(X, XZ+¬XY, XY+¬XZ), where ¬ is the logical negation.

For this type of gate to form a universal set of reversible computation it is also necessary that it be able to implement nonlinear functions. As their truth tables show, both the Toffoli gate and the Fredkin gate satisfy this requirement, so either of them constitutes a universal set for classical reversible logic (universal quantum computation additionally requires genuinely quantum gates, such as the Hadamard gate).
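This can be checked by exhaustive enumeration of the truth tables, as in the following sketch, which also shows how the Toffoli gate reproduces the nonlinear NAND function when Z = 1:

    # Minimal sketch: check by exhaustive enumeration that the Toffoli and Fredkin
    # gates are bijections on 3 bits (hence reversible), and that Toffoli with Z = 1
    # reproduces NAND, a nonlinear classical function.
    from itertools import product

    def toffoli(x, y, z):
        return x, y, z ^ (x & y)

    def fredkin(x, y, z):
        return x, (x & z) | ((1 - x) & y), (x & y) | ((1 - x) & z)

    for name, gate in [("Toffoli", toffoli), ("Fredkin", fredkin)]:
        outputs = {gate(x, y, z) for x, y, z in product((0, 1), repeat=3)}
        print(name, "is reversible:", len(outputs) == 8)   # 8 distinct outputs <=> bijective

    # Toffoli as NAND: fix z = 1 and read the third output bit
    print("NAND via Toffoli:", {(x, y): toffoli(x, y, 1)[2] for x, y in product((0, 1), repeat=2)})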

One of the reasons for studying universal reversible models of computation, such as the billiard ball model proposed by Fredkin and Toffoli [2], is that they could theoretically lead to real computational systems that consume very low amounts of energy.

But where these models become relevant is in quantum computation, since quantum theory has a reversible nature, which makes it possible to implement reversible algorithms by using reversible logic gates. The reversibility of these algorithms opens up the possibility of reducing the energy dissipated in their execution and approaching the Landauer limit.

Fundamentals of quantum computing

In the case of classical computing, a bit of information can take one of the values {0,1}. In contrast, the state of a quantum variable is a superposition of its eigenstates. Thus, for example, the eigenstates of the spin of a particle with respect to a given reference axis are {|0〉,|1〉}, so that the state of the particle |Ψ〉 can be a superposition of the eigenstates, |Ψ〉 = α|0〉 + β|1〉, with |α|² + |β|² = 1. This is what is called a qubit, so that a qubit can simultaneously encode the values {0,1}.

Thus, in a system consisting of n qubits the wave function can be expressed as |Ψ〉 = α0|00…00〉+α1|00…01〉+α2|00…10〉+…+αN-1|11…11〉, with Σ|αi|² = 1 and N = 2^n, such that the system can encode the N possible combinations of n bits and process them simultaneously, which represents an exponential parallelism compared to classical computing.

The time evolution of the wave function of a quantum system is determined by a unitary transformation, |Ψ'〉 = U|Ψ〉, such that the conjugate transpose of U is its inverse, U†U = UU† = I. Therefore, the process is reversible, |Ψ〉 = U†|Ψ'〉 = U†U|Ψ〉, keeping the entropy of the system constant throughout the process, so the implementation of quantum computing algorithms must be performed with reversible logic gates. As an example, the inverse of the Fredkin gate is the gate itself, as can easily be deduced from its definition.
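A minimal numerical sketch (using numpy, and taking the Hadamard gate as an arbitrary example of a unitary transformation) makes this reversibility explicit:

    # Minimal sketch: a Hadamard gate is unitary, so its action on a qubit state is
    # reversible and the norm of the state is preserved.
    import numpy as np

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
    psi = np.array([1, 0], dtype=complex)                         # |0>

    print(np.allclose(H.conj().T @ H, np.eye(2)))   # U†U = I -> True
    psi_out = H @ psi                               # forward evolution
    psi_back = H.conj().T @ psi_out                 # apply U† to undo it
    print(np.allclose(psi_back, psi))               # |0> recovered -> True
    print(np.linalg.norm(psi_out))                  # norm preserved (≈ 1.0)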

The evolution of the state of the quantum system continues until it interacts with a measuring device, in what is defined as the quantum measurement, such that the system collapses into one of its eigenstates |Ψ〉 = |i〉, with probability |αi|². Without going into further details, this behavior raises a philosophical debate, although it has empirical confirmation.

Another fundamental feature of quantum reality is particle entanglement, which plays a fundamental role in the implementation of quantum algorithms, quantum cryptography and quantum teleportation.

To understand what particle entanglement means let us first analyze the wave function of two independent quantum particles. Thus, the wave function of a quantum system consisting of two qubits, |Ψ0〉 = α00|0〉+ α01|1〉, |Ψ1〉 = α10|0〉+ α11|1〉, can be expressed as:

|Ψ〉 = |Ψ0〉⊗|Ψ1〉 = α00·α10|00〉+α00·α11|01〉+α01·α10|10〉+α01·α11|11〉,

such that both qubits behave as independent systems, since this expression is factorizable into the functions |Ψ0〉 and |Ψ1〉, where ⊗ is the tensor product.

However, quantum theory also admits non-factorizable solutions for the system, such as |Ψ〉 = α|00〉+β|11〉, with |α|² + |β|² = 1, so that if a measurement is performed on one of the qubits, the quantum state of the other collapses instantaneously, regardless of the location of the entangled qubits.

Thus, if one of the qubits collapses into the state |0〉, the other qubit also collapses into the state |0〉. Conversely, if it collapses into the state |1〉, the other qubit also collapses into the state |1〉. This means that the entangled quantum system behaves not as a set of independent qubits, but as a single inseparable quantum system, until the measurement is performed.
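This perfect correlation can be illustrated with a minimal simulation (using numpy, with α = β = 1/√2 as an assumption): each individual outcome is random, but the two qubits always agree.

    # Minimal sketch (alpha = beta = 1/sqrt(2) is an assumption): sampling
    # measurements of the entangled state alpha|00> + beta|11> shows that the two
    # qubits always yield the same outcome, although each outcome is random.
    import numpy as np

    rng = np.random.default_rng(0)
    alpha = beta = 1 / np.sqrt(2)
    probs = {"00": abs(alpha) ** 2, "11": abs(beta) ** 2}   # only |00> and |11> can occur

    outcomes = rng.choice(list(probs), size=10_000, p=list(probs.values()))
    same = all(o[0] == o[1] for o in outcomes)
    freq_00 = np.mean(outcomes == "00")
    print("qubits always agree:", same)            # True: perfect correlation
    print("fraction of |00> outcomes:", freq_00)   # close to |alpha|^2 = 0.5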

This behavior seems to violate the speed limit imposed by the theory of relativity, breaking the principle of locality, which establishes that the state of an object is only influenced by its immediate environment. These inconsistencies gave rise to what is known as the EPR paradox [3], positing that quantum theory was an incomplete theory requiring the existence of hidden local variables in the quantum model.

However, Bell’s theorem [4] proves that quantum physics is incompatible with the existence of local hidden variables. For this purpose, Bell determined what results should be obtained from the measurement of entangled particles, assuming the existence of local hidden variables. This leads to the establishment of a constraint on how the measurement results correlate, known as Bell’s inequalities.

The experimental results obtained by A. Aspect [5] have shown that particle entanglement is a real fact in the world of quantum physics, so that the model of quantum physics is complete and does not require the existence of local hidden variables.

In short, quantum computing is closely linked to the model of quantum physics, based on the concepts of: superposition of states, unitary transformations and quantum measurement. To this we must add particle entanglement, so that a quantum system can be formed by a set of entangled particles, which form a single quantum system.

Based on these concepts, the structure of a quantum computer is as shown in the figure below. Without going into details about the functional structure of each block, the logic gates that constitute the quantum algorithm perform a specific function, for example the product of two variables. In this case, the input qubits would encode all the possible combinations of the input variables, obtaining as a result all the possible products of the input variables, encoded in the superposition of states of the output qubits.

For the information to emerge into the classical world it is necessary to measure the set of output qubits, so that the quantum state randomly collapses into one of its eigenstates, which is embodied in a set of bits that encodes one of the possible outcomes.

But this does not seem to be of practical use. On the one hand, quantum computing involves an exponential parallelism, by running all the products simultaneously; on the other, almost all of this information is lost when the measurement is performed. For this reason, quantum computing requires algorithm design strategies to overcome this problem.

Shor’s factorization algorithm [6] is a clear example of this. In this particular case, the input qubits will encode the number to be factorized, so that the quantum algorithm will simultaneously obtain all the prime divisors of the number. When the quantum measurement is performed, a single factor will be obtained, which will allow the rest of the divisors to be obtained sequentially in polynomial time, which means acceleration with respect to the classical algorithms that require an exponential time.

But fundamental questions arise from all this. It seems obvious that the classical reality emerges from the quantum measurement and, clearly, the information that emerges is only a very small part of the information describing the quantum system. Therefore, one of the questions that arise is: What happens to the information describing the quantum system when performing the measurement? But on the other hand, when performing the measurement information emerges at the classical level, so we must ask: What consequences does this behavior have on the dynamics of the classical universe?

Thermodynamic entropy

The impossibility of directly observing the collapse of the wave function has given rise to various interpretations of quantum mechanics, so that the problem of quantum measurement remains an unsolved mystery [7]. However, we can find some clue if we ask what quantum measurement means and what is its physical foundation.

In this sense, it should be noted that the quantum measurement process is based on the interaction of quantum systems exclusively. The fact that quantum measurement is generally associated with measurement scenarios in an experimental context can give the measurement an anthropic character and, as a consequence, a misperception of the true nature of quantum measurement and of what is defined as a quantum observable.

Therefore, if quantum measurement involves only quantum systems, the evolution of these systems will be determined by unitary transformations, so that the quantum entropy will remain constant throughout the whole process. But, on the other hand, this quantum interaction causes the emergence of the information that constitutes classical reality and ultimately produces an increase in classical entropy. Consequently, what is defined as quantum measurement would be nothing more than the emergence of the information that makes up classical reality.

This abstract view is clearly illustrated by practical cases. Thus, for example, from the interaction between atoms emerge the observable properties of the system they form, such as its mechanical properties. However, the quantum system formed by the atoms evolves according to the laws of quantum mechanics, keeping the amount of quantum information constant.

Similarly, the interaction between a set of atoms to form a molecule is determined by the laws of quantum mechanics, and therefore by unitary transformations, so that the complexity of the system remains constant at the quantum level. However, at the classical level the resulting system is more complex, emerging new properties that constitute the laws of chemistry and biology.

The question that arises is how it is possible that equations at the microscopic level which are time invariant can lead to a time asymmetry, as shown by the Boltzmann equation of heat diffusion.

Another objection to this behavior, and to a purely mechanical basis for thermodynamics, is due to the fact that every finite system, however complex it may be, must recover its initial state periodically after the so-called recurrence time, as demonstrated by Poincaré [8]. However, by purely statistical analysis it is shown that the probability of a complex thermodynamic system returning to its initial state is practically zero, with recurrence times much longer than the age of the universe itself.

Perhaps the most significant aspect, and the one that most clearly highlights the irreversibility of thermodynamic systems, is the evolution of the entropy S, which measures the complexity of the system and whose temporal dynamics is increasing, such that its derivative is always positive, Ṡ > 0. What is most relevant is that this behavior can be derived from the quantum description of the system, in what is known as the "Pauli Master Equation" [9].

This shows that the classical reality emerges from the quantum reality in a natural way, which supports the hypothesis put forward, in such a way that the interaction between quantum systems results in what is called the collapse of the wave function of these systems, emerging the classical reality.

Thermodynamic entropy vs. information theory

The analysis of this behavior from the point of view of information theory confirms this idea. The fact that quantum theory is time-reversible means that the complexity of the system is invariant. In other words, the amount of information describing the quantum system is constant in time. However, the classical reality is subject to an increase of complexity in time determined by the evolution of thermodynamic entropy, which means that the amount of information of the classical system is increasing with time.

If we assume that classical reality is a closed system, this poses a contradiction since in such a system information cannot grow over time. Thus, in a reversible computing system the amount of information remains unchanged, while in a non-reversible computing system the amount of information decreases as the execution progresses. Consequently, classical reality cannot be considered as an isolated system, so the entropy increase must be produced by an underlying reality that injects information in a sustained way.

In short, this analysis is consistent with the results obtained from quantum physics, by means of the “Pauli’s Master Equation”, which shows that the entropy growth of classical reality is obtained from its quantum nature.

It is important to note that the thermodynamic entropy can be expressed as a function of the probabilities of the microstates as S = –k Σ pi ln pi, where k is the Boltzmann constant; this expression matches the amount of information in the system if the physical units are chosen such that k = 1. Therefore, it seems clear that the thermodynamic entropy represents the amount of information that emerges from quantum reality.
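As a minimal numerical illustration (the probability distribution of the microstates is an arbitrary assumption), the two expressions differ only by the factor k:

    # Minimal sketch (the probability distribution is an arbitrary assumption):
    # the Gibbs entropy S = -k * sum(p*ln p) coincides with the Shannon information
    # (in nats) when k = 1, and otherwise differs only by the factor k.
    import math

    k_B = 1.380649e-23                    # Boltzmann constant, J/K
    p = [0.5, 0.25, 0.125, 0.125]         # microstate probabilities (assumption)

    shannon_nats = -sum(x * math.log(x) for x in p if x > 0)
    gibbs = k_B * shannon_nats
    print(f"Shannon information: {shannon_nats:.4f} nats "
          f"({shannon_nats / math.log(2):.4f} bits)")
    print(f"Gibbs entropy:       {gibbs:.3e} J/K  (= k_B times the same number)")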

But there remains the problem of understanding the physical process by which quantum information emerges into the classical reality layer (note 1 of the Appendix). It should be noted that the analysis that obtains the classical entropy from the quantum state of the system is purely mathematical and does not provide physical criteria on the nature of the process. Something similar happens with the analysis of the system from the point of view of classical statistical mechanics [10], where the entropy of the system is obtained from its microstates (generalized coordinates qi and generalized momenta pi), so it does not provide physical criteria to understand this behavior either.

The inflationary universe

The expansion of the universe [11] is another example of how the entropy of the universe is growing steadily since its beginning, suggesting that the classical universe is an open system. But, unlike thermodynamics, in this case the physical structure involved is the vacuum.

It is important to emphasize that historically physical models integrate the vacuum as a purely mathematical structure of space-time in which physical phenomena occur, so that conceptually it is nothing more than a reference frame. This means that in classical models, the vacuum or space-time is not explicitly considered as a physical entity, as is the case with other physical concepts.

The development of the theory of relativity is the first model in which it is recognized, at least implicitly, that the vacuum must be a complex physical structure. While it continues to be treated as a reference frame, two aspects clearly highlight this complexity: the interaction between space-time and momentum-energy, and its relativistic nature.

Experiments such as the Casimir effect [12] or the Lamb shift show the complexity of the vacuum, so that quantum mechanics attributes to the ground state of electromagnetic radiation zero-point electric field fluctuations that pervade empty space at all frequencies. Similarly, the Higgs field is assumed to permeate all of space, such that particles interacting with it acquire mass. But ultimately there is no model that defines space-time beyond a simple abstract reference frame.

However, it seems obvious that the vacuum must be a physical entity, since physical phenomena occur within it and, above all, its size and complexity grow systematically. This means that its entropy grows as a function of time, so the system must be open, there being a source that injects information in a sustained manner. The current theory assumes that dark energy is the cause of inflation [13], although its existence and nature is still a hypothesis.

Conclusions

From the previous analysis it is deduced that the entropy increase of the classical systems emerges from the quantum reality, which produces a sustained increase of the information of the classical reality. For this purpose different points of view have been used, such as classical and quantum thermodynamic criteria, and mathematical criteria such as classical and quantum computation theory and information theory.

The results obtained by these procedures are concordant, allowing verification of the hypothesis that classical reality emerges in a sustained manner from quantum interaction, providing insight into what is meant by the collapse of the wave function.

What remains a mystery is how this occurs, for while the entropy increase is demonstrated from the quantum state of the system, this analysis does not provide physical criteria for how this occurs.

Evidently, this must be produced by the quantum interaction of the particles involved, so that the collapse of their wave function is a source of information at the classical level. However, it is necessary to confirm this behavior in different scenarios since, for example, in a system in equilibrium there is no increase in entropy and yet there is still a quantum interaction between the particles.

Another factor that must necessarily intervene in this behavior is the vacuum, since the growth of entropy is also determined by variations in the dimensions of the system, which is also evident in the case of the inflationary universe. However, the lack of a model of the physical vacuum describing its true nature makes it difficult to establish hypotheses to explain its possible influence on the sustained increase of entropy.

In conclusion, the increase of information produced by the expansion of the universe is an observable fact that is not yet justified by a physical model. On the contrary, the increase of information determined by entropy is a phenomenon that emerges from quantum reality, is justified by the model of quantum physics and, as has been proposed in this essay, would be produced by the collapse of the wave function.

Appendix

1 The irreversibility of the system is obtained from the quantum density matrix:  

  ρ(t) = Σi pi |i〉〈i|

where |i〉 are the eigenstates of the Hamiltonian ℌ0, the general Hamiltonian being ℌ = ℌ0 + V, with the perturbation V the cause of the state transitions. Thus, for example, in an ideal gas ℌ0 could be the kinetic energy and V the interaction produced by the collisions of the atoms of the gas.

Consequently, “Pauli’s Master Equation” takes into consideration the interaction of particles with each other and their relation to the volume of the system, but in an abstract way. Thus, the interaction of two particles has a quantum nature, exchanging energy by means of bosons, something that is hidden in the mathematical development.

Similarly, gas particles interact with the vacuum, this interaction being fundamental, as is evident in the expansion of the gas shown in the figure. However, the quantum nature of this interaction is hidden in the model. Moreover, it is also not possible to establish what this interaction is like, beyond its motion, since we lack a vacuum model that allows this analysis.

References

[1] R. Landauer, "Irreversibility and Heat Generation in the Computing Process," IBM J. Res. Dev., vol. 5, pp. 183-191, 1961.
[2] E. Fredkin and T. Toffoli, "Conservative logic," International Journal of Theoretical Physics, vol. 21, pp. 219-253, 1982.
[3] A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?," Physical Review, vol. 47, pp. 777-780, 1935.
[4] J. S. Bell, "On the Einstein Podolsky Rosen Paradox," Physics, vol. 1, no. 3, pp. 195-200, 1964.
[5] A. Aspect, P. Grangier and G. Roger, "Experimental Tests of Realistic Local Theories via Bell's Theorem," Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.
[6] P. W. Shor, "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer," arXiv:quant-ph/9508027v2, 1996.
[7] M. Schlosshauer, J. Kofler and A. Zeilinger, "A Snapshot of Foundational Attitudes Toward Quantum Mechanics," arXiv:1301.1069, 2013.
[8] H. Poincaré, "Sur le problème des trois corps et les équations de la dynamique," Acta Math., vol. 13, pp. 1-270, 1890.
[9] F. Schwabl, Statistical Mechanics, pp. 491-494, Springer, 2006.
[10] F. W. Sears, An Introduction to Thermodynamics, The Kinetic Theory of Gases, and Statistical Mechanics, Addison-Wesley Publishing Company, 1953.
[11] A. H. Guth, The Inflationary Universe, Perseus, 1997.
[12] H. B. G. Casimir, "On the Attraction Between Two Perfectly Conducting Plates," Indag. Math., vol. 10, pp. 261-263, 1948.
[13] P. J. E. Peebles and B. Ratra, "The cosmological constant and dark energy," Reviews of Modern Physics, vol. 75, no. 2, pp. 559-606, 2003.

Consciousness from the point of view of AI

The self-awareness of human beings, which constitutes the concept of consciousness, has been and continues to be an enigma faced by philosophers, anthropologists and neuroscientists. But perhaps the most suggestive fact is that consciousness is central to human behavior and yet, despite being aware of it, we find no explanation for it.

Without going into details, until the modern age the concept of consciousness had deep roots in the concept of the soul and in religious beliefs, often attributing the differentiation of human nature from other species to divine intervention.

The modern age saw a substantial change, based on Descartes' concept "cogito ergo sum" ("I think, therefore I am") and later on the model proposed by Kant, which is structured around what are known as "transcendental arguments" [1].

Subsequently, a variety of schools of thought have developed, among which dualistic, monistic, materialistic and neurocognitive theories stand out. In general terms, these theories focus on the psychological and phenomenological aspects that describe conscious reality. In the case of neurocognitive theories, neurological evidence is a fundamental pillar. But ultimately, all these theories are abstract in nature and, for the time being, have failed to provide a formal justification of consciousness and how a “being” can develop conscious behavior, as well as concepts such as morality or ethics.

One aspect that these models deal with, and that calls the concept of the "cogito" into question, is the change of behavior produced by brain damage, which in some cases can be re-educated; this shows that the brain and the learning processes play a fundamental role in consciousness.

In this regard, advances in Artificial Intelligence (AI) [2] highlight the formal foundations of learning, by which an algorithm can acquire knowledge and in which neural networks are now a fundamental component. For this reason, the use of this new knowledge can shed light on the nature of consciousness.

The Turing Test paradigm

To analyze what may be the mechanisms that support consciousness we can start with the Turing Test [3], in which a machine is tested to see if it shows a behavior similar to that of a human being.

Without going into the definition of the Turing Test, we can assimilate this concept to that of a chatbot, as shown in Figure 1, which can give us an intuitive idea of this concept. But we can go even further if we consider its implementation. This requires the availability of a huge amount of dialogues between humans, which allows us to train the model using Deep Learning techniques [4]. And although it may seem strange, the availability of dialogues is the most laborious part of the process.

Figure 1. Schematic of the Turing Test

Once the chatbot has been trained, we can ask about its behavior from a psychophysical point of view. The answer seems quite obvious: although it can show very complex behavior, this will always be reflex behavior, even though the interlocutor may deduce that the chatbot has feelings and even behaves intelligently. The latter is a controversial issue because of the difficulty of defining what constitutes intelligent behavior, which is highlighted by the questions: Intelligent? Compared to what?

But the Turing Test only aims to determine the ability of a machine to show human-like behavior, without going into the analysis of the mechanisms to establish this functionality.

In the case of humans, these mechanisms can be classified into two sections: genetic learning and neural learning.

Genetic learning

Genetic learning is based on the learning capacity of biology to establish functions adapted to the processing of the surrounding reality. Expressed in this way it does not seem an obvious or convincing argument, but DNA computing [5] is a formal demonstration of the capability of biological learning. The evolution of capabilities acquired through this process is based on trial and error, which is inherent to learning. Thus, biological evolution is a slow process, as nature shows.

Instinctive reactions are based on genetic learning, so that all species of living beings are endowed with certain faculties without the need for significant subsequent training. Examples are the survival instinct, the reproductive instinct, and the maternal and paternal instincts. These functions are located in the inner layers of the brain, which humans share with other vertebrates.

We will not go into details related to neuroscience [6], since the only thing that interests us in this analysis is to highlight two fundamental aspects: the functional specialization and the plasticity of the brain’s neural structures. Structure, plasticity and specialization are determined by genetic factors, so that the inner layers, such as the limbic system, have a very specialized functionality and require little training to become functional. In contrast, the external structures, located in the neocortex, are very plastic and their functionality is strongly influenced by learning and experience.

Thus, genetic learning is responsible for structure, plasticity and specialization, whereas neural learning is intimately linked to the plastic functionality of neural tissue.

A clear example of functional specialization based on genetic learning is the space-time processing that we share with the rest of higher living beings and that is located in the limbic system. This endows the brain with structures dedicated to the establishment of a spatial map and the processing of temporal delay, which provides the ability to establish trajectories in advance, vital for survival and for interacting with spatio-temporal reality.

This functionality has a high degree of automaticity, which makes its functional capacity effective from the moment of birth. However, this is not exactly the case in humans, since these neural systems function in coordination with the neocortex, which requires a high degree of neural training.

Thus, for example, this functional specialization precludes visualizing and intuitively understanding geometries of more than three spatial dimensions, something that humans can only deal with abstractly at a higher level by means of the neocortex, which has a plastic functionality and is the main support for neural learning.

It is interesting to consider that the functionality of the neocortex, whose response time is longer than that of the lower layers, can interfere in the reaction of automatic functions. This is clearly evident in the loss of concentration in activities that require a high degree of automatism, as occurs in certain sports activities. This means that in addition to having an appropriate physical capacity and a well-developed and trained automatic processing capacity, elite athletes require specific psychological preparation.

This applies to all sensory systems, such as vision, hearing, balance, in which genetic learning determines and conditions the interpretation of information coming from the sensory organs. But as this information ascends to the higher layers of the brain, the processing and interpretation of the information is determined by neural learning.

This is what differentiates humans from the rest of the species, being endowed with a highly developed neocortex, which provides a very significant neural learning capacity, from which the conscious being seems to emerge.

Nevertheless, there is solid evidence of the ability to feel and to have a certain level of consciousness in some species. This is what has triggered a movement for legal recognition of feelings in certain species of animals, and even recognition of personal status for some species of hominids.

Neural learning: AI as a source of intuition

Currently, AI is made up of a set of mathematical strategies that are grouped under different names depending on their characteristics. Thus, Machine Learning (ML) comprises classical mathematical algorithms, such as statistical algorithms, decision trees, clustering, support vector machines, etc. Deep Learning, on the other hand, is inspired by the functioning of neural tissue and exhibits complex behavior that approximates certain human capabilities.

In the current state of development of this discipline, designs are reduced to the implementation and training of specific tasks, such as automatic diagnostic systems, assistants, chatbots, games, etc., so these systems are grouped in what is called Artificial Narrow Intelligence.

The perspective offered by this new knowledge makes it possible to establish three major categories within AI:

  • Artificial Narrow Intelligence: AI systems specialized in specific tasks, such as those mentioned above.
  • Artificial General Intelligence: AI systems with a capacity similar to that of human beings.
  • Artificial Super Intelligence: self-aware AI systems with a capacity equal to or greater than that of human beings.

The implementation of the neural networks used in Deep Learning is inspired by the functionality of neurons and neural tissue, as shown in Figure 2 [7]. The nerve stimuli arriving from the axon terminals that connect to the dendrites (synapses) are weighted and processed according to the functional configuration the neuron has acquired through learning, producing a nerve stimulus that propagates to other neurons through its own axon terminals.

Figure 2. Structure of a neuron and mathematical model
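
As a minimal sketch of this mathematical model (a single hypothetical neuron with a sigmoid activation, one common choice), in Python:

import math

def neuron(inputs, weights, bias):
    # Weighted sum of the input stimuli arriving at the synapses.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function producing the stimulus propagated to other neurons.
    return 1.0 / (1.0 + math.exp(-z))

# Three input stimuli with hypothetical weights acquired through learning.
print(neuron([0.5, -1.0, 2.0], weights=[0.8, 0.3, -0.5], bias=0.1))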

Artificial neural networks are structured by creating layers of the mathematical neuron model, as shown in Figure 3. A fundamental issue in this model is determining the mechanisms needed to establish the weighting parameters Wi in each of the units that form the neural network. One might look to biological neural mechanisms for this purpose; however, although there is a very general idea of how the functionality of synapses is configured, how functionality is established at the level of the whole neural network is still a mystery.

Figure 3. Artificial Neural Network Architecture

In the case of artificial neural networks, mathematics has found a solution that makes it possible to establish the Wi values, by means of what is known as supervised learning. This requires a dataset in which each element represents a stimulus Xi and the response to this stimulus Yi. Thus, once the Wi values have been randomly initialized, the training phase proceeds, presenting each of the Xi stimuli and comparing the response with the Yi values. The errors produced are propagated backwards by means of an algorithm known as backpropagation.

Through the sequential application of the elements of a training set belonging to the dataset, over several sessions (epochs), a state of convergence is reached in which the neural network achieves an appropriate degree of accuracy, verified by means of a validation set of elements belonging to the dataset that are not used for training.
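
A minimal sketch of this training procedure, assuming a toy XOR dataset in place of a real one (the weights W1 and W2 play the role of the Wi parameters, and the errors are propagated backwards as described):

import numpy as np

# Stimuli Xi and expected responses Yi for a toy XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Random initialization of the weights, as described in the text.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

lr = 0.5
for epoch in range(20000):
    # Forward pass: present the stimuli and compute the network's response.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: the output error is propagated backwards through the layers.
    d_out = (out - Y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Weight updates (gradient descent on the Wi parameters).
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# After convergence the responses should approximate Y; with a real dataset,
# accuracy would be checked on a separate validation set not used for training.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))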

An example makes the nature of the elements of a dataset much more intuitive. Thus, in a dataset used to train autonomous driving systems, the Xi correspond to images in which patterns of different types of vehicles, pedestrians, public roads, etc. appear. Each of these images has an associated category Yi, which specifies the patterns appearing in that image. It should be noted that, in the current state of development of AI systems, the dataset is labeled by humans, so learning is supervised and requires significant resources.

In unsupervised learning the category Yi is generated automatically, although its state of development is still very incipient. A very illustrative example is the AlphaZero program developed by DeepMind [8]: learning is performed by providing it with the rules of the game (chess, go, shogi) and having it play matches against itself, so that the moves and the result configure the pairs (Xi, Yi). The neural network is continuously updated with these results, sequentially improving its behavior and therefore the new results (Xi, Yi), reaching a superhuman level of play.
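
The following toy sketch illustrates the structure of that self-play loop under heavy simplification: a trivial game (“race to 10”) and a value table standing in for the neural network, with each position and final result acting as a pair (Xi, Yi):

import random
from collections import defaultdict

def legal_moves(total):
    # "Race to 10": players alternately add 1 or 2; whoever reaches 10 wins.
    return [m for m in (1, 2) if total + m <= 10]

value = defaultdict(float)   # learned value of a position for the player who produced it
counts = defaultdict(int)

def choose_move(total, explore=0.2):
    moves = legal_moves(total)
    if random.random() < explore:
        return random.choice(moves)                    # exploratory move
    return max(moves, key=lambda m: value[total + m])  # best known move

def self_play_game():
    history, total, player = [], 0, 0
    while total < 10:
        total += choose_move(total)
        history.append((total, player))  # position reached and the player who produced it
        player = 1 - player
    return history, history[-1][1]       # the player who reached 10 wins

for _ in range(5000):
    history, winner = self_play_game()
    # Each (position, outcome) pair plays the role of (Xi, Yi) and updates the "model".
    for position, player in history:
        outcome = 1.0 if player == winner else -1.0
        counts[position] += 1
        value[position] += (outcome - value[position]) / counts[position]

# Reaching 9 hands the opponent the winning move, so its learned value is -1.
print(round(value[9], 2))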

It is important to note that, in the case of higher living beings, unsupervised learning takes place through the interaction of the afferent (sensory) neuronal system and the efferent (motor) neuronal system. Although from a functional point of view there are no substantial differences, this interaction takes place at two levels, as shown in Figure 4:

  • The interaction with the inanimate environment.
  • Interaction with other living beings, especially of the same species.

The first level of interaction provides knowledge about physical reality. The second level, on the contrary, allows the establishment of survival habits and, above all, social habits. In the case of humans, this level acquires great importance and complexity, since from it emerge concepts such as morality and ethics, as well as the capacity to accumulate and transmit knowledge from generation to generation.

Figure 4. Structure of unsupervised learning

Consequently, unsupervised learning is based on the recursion of afferent and efferent systems. This means that, unlike the models used in Deep Learning, which are unidirectional, unsupervised AI systems require the implementation of two independent systems: an afferent system that produces a response from a stimulus, and an efferent system that, based on the response, corrects the behavior of the afferent system by means of a reinforcement technique.
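
A purely illustrative sketch of this two-system structure, assuming a hypothetical one-parameter afferent map and a trivial feedback rule (far simpler than any real reinforcement technique):

import random

def feedback(stimulus, response):
    # Toy environment: the "correct" response to a stimulus x is -x, and the
    # feedback is higher the closer the response gets to that behavior.
    return -abs(response + stimulus)

def afferent(stimulus, weight):
    # Afferent system: produces a response from a stimulus through a single weight.
    return weight * stimulus

def efferent_correction(weight, stimulus, noise=0.1):
    # Efferent system: tries a perturbed behavior and reinforces it only if the
    # feedback from the environment improves (a crude reinforcement rule).
    trial = weight + random.gauss(0.0, noise)
    if feedback(stimulus, afferent(stimulus, trial)) > feedback(stimulus, afferent(stimulus, weight)):
        return trial
    return weight

w = random.uniform(-1.0, 1.0)
for _ in range(5000):
    w = efferent_correction(w, random.uniform(0.1, 1.0))

print(f"learned weight: {w:.3f} (the target behavior corresponds to w = -1)")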

What is the foundation of consciousness?

Two fundamental aspects can be deduced from the development of AI:

  • The learning capability of algorithms.
  • The need for afferent and efferent structures to support unsupervised learning.

On the other hand, it is known that traumatic processes in the brain or pathologies associated with aging can produce changes in personality and conscious perception.  This clearly indicates that these functions are located in the brain and supported by neural tissue.

But it is necessary to rely on anthropology to have a more precise idea of what the foundations of consciousness are and how it has developed in human beings. Thus, a direct correlation can be observed between the cranial capacity of a hominid species and its abilities, social organization, spirituality and, above all, its abstract perception of the surrounding world. This correlation is clearly determined by the size of the neocortex and can be observed to a lesser extent in other species, such as primates, which show a capacity for emotional pain, a structured social organization and a certain degree of abstract learning.

According to all of the above, it could be concluded that consciousness emerges from the learning capacity of the neural tissue and would be achieved as the structural complexity and functional resources of the brain acquire an appropriate level of development. But this leads directly to the scenario proposed by the Turing Test, in such a way that we would obtain a system with a complex behavior indistinguishable from a human, which does not provide any proof of the existence of consciousness. 

To understand this, we can ask how a human comes to the conclusion that all other humans are self-aware. In reality, there is no argument to reach this conclusion, since at most one could check that they pass the Turing Test. A human concludes that other humans have consciousness by resemblance to himself: by introspection, a human knows that he is self-aware and, since the rest of humans are similar to him, he concludes that they are self-aware too.

Ultimately, the only answer that can be given to what is the basis of consciousness is the introspection mechanism of the brain itself. In the unsupervised learning scheme, the afferent and efferent mechanisms that allow the brain to interact with the outside world through the sensory and motor organs have been highlighted. However, to this model we must add another flow of information, as shown in Figure 5, which enhances learning and corresponds to the interconnection of neuronal structures of the brain that recursively establish the mechanisms of reasoning, imagination and, why not, consciousness.

Figure 5. Mechanism of reasoning and imagination.

This statement may seem radical, but if we meditate on it we will see that the only difference between imagination and consciousness is that the capacity of humans to identify themselves raises existential questions that are difficult to answer, but which from the point of view of information processing require the same resources as reasoning or imagination.

But how can this hypothesis be verified? One possible solution would be to build a system based on learning technologies that would confirm the hypothesis, but would this confirmation be accepted as true, or would it simply be decided that the system verifies the Turing Test?

[1]Stanford Encyclopedia of Philosophy, «Kant’s View of the Mind and Consciousness of Self,» 2020 Oct 8. [Online]. Available: https://plato.stanford.edu/entries/kant-mind/. [Accessed: 2021 Jun 6].
[2]S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Pearson, 2021.
[3]A. Turing, «Computing Machinery and Intelligence,» Mind, vol. LIX, no. 236, pp. 433-460, 1950.
[4]C. C. Aggarwal, Neural Networks and Deep Learning, Springer, 2018.
[5]L. M. Adleman, «Molecular computation of solutions to combinatorial problems,» Science, vol. 266, no. 5187, pp. 1021-1024, 1994.
[6]E. R. Kandel, J. D. Koester, S. H. Mack and S. A. Siegelbaum, Principles of Neural Science, McGraw Hill, 2021.
[7]F. Rosenblatt, «The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain,» Psychological Review, vol. 65, no. 6, pp. 386-408, 1958.
[8]D. Silver, T. Hubert and J. Schrittwieser, «DeepMind,» [Online]. Available: https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go. [Accessed: 2021 Jun 6].

The unreasonable effectiveness of mathematics

In the post “What is the nature of mathematics?”, the dilemma of whether mathematics is discovered or invented by humans was presented, but so far no convincing evidence has been provided in either direction.

A more profound way of approaching the issue is the one posed by Eugene P. Wigner [1], who asks about the unreasonable effectiveness of mathematics in the natural sciences.

According to Roger Penrose this poses three mysteries [2] [3], identifying three distinct “worlds”: the world of our conscious perception, the physical world and the Platonic world of mathematical forms. Thus:

  • The world of physical reality seems to obey laws that actually reside in the world of mathematical forms.  
  • The perceiving minds themselves – the realm of our conscious perception – have managed to emerge from the physical world.
  • Those same minds have been able to access the mathematical world by discovering, or creating, and articulating a capital of mathematical forms and concepts.

The effectiveness of mathematics has two different aspects. An active one in which physicists develop mathematical models that allow them to accurately describe the behavior of physical phenomena, but also to make predictions about them, which is a striking fact.

Even more extraordinary, however, is the passive aspect of mathematics, such that the concepts that mathematicians explore in an abstract way end up being the solutions to problems firmly rooted in physical reality.

But this view of mathematics has detractors especially outside the field of physics, in areas where mathematics does not seem to have this behavior. Thus, the neurobiologist Jean-Pierre Changeux notes [4], “Asserting the physical reality of mathematical objects on the same level as the natural phenomena studied in biology raises, in my opinion, a considerable epistemological problem. How can an internal physical state of our brain represent another physical state external to it?”

Obviously, it seems that analyzing the problem using case studies from different areas of knowledge does not allow us to establish formal arguments to reach a conclusion about the nature of mathematics. For this reason, an abstract method must be sought to overcome these difficulties. In this sense, Information Theory (IT) [5], Algorithmic Information Theory (AIT) [6] and Theory of Computation (TC) [7] can be tools of analysis that help to solve the problem.

What do we understand by mathematics?

The question may seem obvious, but mathematics is structured in multiple areas: algebra, logic, calculus, etc., and the truth is that when we refer to the success of mathematics in the field of physics, what underlies it is the idea of physical theories supported by mathematical models: quantum physics, electromagnetism, general relativity, etc.

However, when these mathematical models are applied in other areas they do not seem to have the same effectiveness, for example in biology, sociology or finance, which seems to contradict the experience in the field of physics.

For this reason, a fundamental question is to analyze how these models work and what are the causes that hinder their application outside the field of physics. To do this, let us imagine any of the successful models of physics, such as the theory of gravitation, electromagnetism, quantum physics or general relativity. These models are based on a set of equations defined in mathematical language, which determine the laws that control the described phenomenon, which admit analytical solutions that describe the dynamics of the system. Thus, for example, a body subjected to a central attractive force describes a trajectory defined by a conic.

This functionality is a powerful analysis tool, since it allows us to analyze systems under hypothetical conditions and to reach conclusions that can later be verified experimentally. But beware! This success scenario masks a reality that often goes unnoticed, since the scenarios in which a model admits an analytical solution are generally very limited. Thus, the gravitational model does not admit an analytical solution when the number of bodies is n ≥ 3 [8], except in very specific cases such as the so-called Lagrange points. Moreover, the system is very sensitive to the initial conditions, so that small variations in these conditions can produce large deviations in the long term.

This is a fundamental characteristic of nonlinear systems and, although the system is governed by deterministic laws, its behavior is chaotic. Without going into details that are beyond the scope of this analysis, this is the general behavior of the cosmos and everything that happens in it.

One case that can be considered extraordinary is the quantum model which, according to the Schrödinger equation or the Heisenberg matrix model, is a linear and reversible model. However, the information that emerges from quantum reality is stochastic in nature.  

In short, the models that describe physical reality only have an analytical solution in very particular cases. For complex scenarios, particular solutions to the problem can be obtained by numerical series, but the general solution of any mathematical proposition is obtained by the Turing Machine (TM) [9].

This model can be represented in abstract form by the concatenation of three mathematical objects 〈xyz〉 (bit sequences) which, when executed on a Turing machine, TM(〈xyz〉), determine the solution. Thus, for example, in the case of electromagnetism, the object z would correspond to the description of the boundary conditions of the system, y to the definition of Maxwell’s equations and x to the formal definition of the mathematical calculus. TM is the Turing machine, defined by a finite set of states. Therefore, the problem is reduced to the treatment of a set of bits 〈xyz〉 according to axiomatic rules defined in TM, which in the optimal case can be reduced to a machine with three states (plus the HALT state) and two symbols (bits).
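
As an illustration of the idea TM(〈xyz〉), the following sketch simulates a tiny Turing machine in Python: the tape plays the role of the encoded bit sequence and the transition table plays the role of the axiomatic rules. The machine shown merely increments a binary number; it is a toy, not the minimal three-state machine mentioned above:

def run_tm(tape, rules, state="carry", blank="_"):
    tape, head = list(tape), len(tape) - 1          # start at the rightmost bit
    while state != "HALT":
        if head < 0:                                # extend the tape with a blank cell
            tape.insert(0, blank)
            head = 0
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(tape).strip(blank)

# Transition rules: (state, read symbol) -> (write symbol, head move, next state).
increment = {
    ("carry", "1"): ("0", "L", "carry"),   # propagate the carry to the left
    ("carry", "0"): ("1", "N", "HALT"),    # absorb the carry and stop
    ("carry", "_"): ("1", "N", "HALT"),    # the number grows by one digit
}

print(run_tm("1011", increment))           # 1011 (11) -> 1100 (12)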

Nature as a Turing machine

And here we return to the starting point. How is it possible that reality can be represented by a set of bits and a small number of axiomatic rules?

Prior to the development of IT, the concept of information had no formal meaning, as evidenced by its classic dictionary definition. In fact, until communication technologies began to develop, words such as “send” referred exclusively to material objects.

However, everything that happens in the universe is interaction and transfer, and in the case of humans the most elaborate medium for this interaction is natural language, which we consider to be the most important milestone on which cultural development is based. It is perhaps for this reason that in the debate about whether mathematics is invented or discovered, natural language is used as an argument.

But TC shows that natural language is not formal, not being defined on axiomatic grounds, so that arguments based on it may be of questionable validity. And it is here that IT and TC provide a broad view on the problem posed.

In a physical system each of the component particles has physical properties and a state, in such a way that when it interacts with the environment it modifies its state according to its properties, its state and the external physical interaction. This interaction process is reciprocal and as a consequence of the whole set of interactions the system develops a temporal dynamics.

Thus, for example, the dynamics of a particle is determined by the curvature of space-time which indicates to the particle how it should move and this in turn interacts with space-time, modifying its curvature.

In short, a system has a description that is distributed in each of the parts that make up the system. Thus, the system could be described in several different ways:

  • As a set of TMs interacting with each other. 
  • As a TM describing the total system.
  • As a TM partially describing the global behavior, showing emergent properties of the system.

The fundamental conclusion is that the system is a Turing machine. Therefore, the question is not whether mathematics is discovered or invented, nor how it is possible for mathematics to be so effective in describing the system. The question is how it is possible for an intelligent entity – natural or artificial – to reach this conclusion and even to be able to deduce the axiomatic laws that control the system.

The justification must be based on the fact that it is nature that imposes the functionality and not the intelligent entities that are part of nature. Nature is capable of developing any computable functionality, so that among other functionalities, learning and recognition of behavioral patterns is a basic functionality of nature. In this way, nature develops a complex dynamic from which physical behavior, biology, living beings, and intelligent entities emerge.

As a consequence, nature has created structures that are able to identify its own patterns of behavior, such as physical laws, and ultimately identify nature as a Universal Turing Machine (UTM). This is what makes physical interaction consistent at all levels. Thus, in the above case of the ability of living beings to establish a spatio-temporal map, this allows them to interact with the environment; otherwise their existence would not be possible. Obviously this map corresponds to a Euclidean space, but if the living being in question were able to move at speeds close to light, the map learned would correspond to the one described by relativity.

A view beyond physics

While TC, IT and AIT are the theoretical support that allows sustaining this view of nature, advances in computer technology and AI are a source of inspiration, showing how reality can be described as a structured sequence of bits. This in turn enables functions such as pattern extraction and recognition, complexity determination and machine learning.

Despite this, fundamental questions remain to be answered, in particular what happens in those cases where mathematics does not seem to have the same success as in the case of physics, such as biology, economics or sociology. 

Many of the arguments used against the previous view are based on the fact that the description of reality in mathematical terms, or rather, in terms of computational concepts does not seem to fit, or at least not precisely, in areas of knowledge beyond physics. However, it is necessary to recognize that very significant advances have been made in areas such as biology and economics.

Thus, knowledge of biology shows that the chemistry of life is structured in several overlapping languages:

  • The language of nucleic acids, consisting of an alphabet of 4 symbols that encodes the structure of DNA and RNA.
  • The amino-acid language, whose alphabet of 20 symbols encodes proteins and which is related to the nucleic-acid language through the 64 codons of the genetic code. The translation process for protein synthesis is carried out by means of this concordance between the two languages (see the sketch after this list).
  • The language of the intergenic regions of the genome. Their functionality is still to be clarified, but everything seems to indicate that they are responsible for the control of protein production in different parts of the body, through the activation of molecular switches. 
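
As a minimal sketch of that concordance between the nucleic-acid and protein languages (showing only a small excerpt of the 64-entry genetic code table):

# Excerpt of the genetic code: codon (three nucleotides) -> amino acid.
GENETIC_CODE = {
    "ATG": "Met", "TGG": "Trp", "TTT": "Phe", "AAA": "Lys",
    "GGC": "Gly", "GCT": "Ala", "TAA": "STOP", "TAG": "STOP",
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):           # read the sequence codon by codon
        aa = GENETIC_CODE.get(dna[i:i + 3], "?")  # '?' marks codons outside the excerpt
        if aa == "STOP":
            break
        protein.append(aa)
    return "-".join(protein)

print(translate("ATGTTTGGCAAATAA"))   # Met-Phe-Gly-Lys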

On the other hand, protein structure prediction by deep learning techniques is solid evidence linking biology to TC [10]. It should also be emphasized that biology, as an information process, must verify the laws of logic, in particular the recursion theorem [11], so DNA replication must be performed in at least two phases by independent processes.

In the case of economics there have been relevant advances since the 1980s, with the development of computational finance [12]. But as a paradigmatic example we will focus on the financial markets, which should serve to test, in an environment far removed from physics, the hypothesis that nature behaves as a Turing machine.

Basically, financial markets are a space, which can be physical or virtual, through which financial assets are exchanged between economic agents and in which the prices of such assets are defined.

A financial market is governed by the law of supply and demand. In other words, when an economic agent wants something at a certain price, he can only buy it at that price if there is another agent willing to sell him that something at that price.

Traditionally, economic agents were individuals but, with the development of complex computer applications, these applications now also act as economic agents, both supervised and unsupervised, giving rise to different types of investment strategies.

This system can be modeled by a Turing machine that emulates all the economic agents involved, or as a set of Turing machines interacting with each other, each of which emulates an economic agent.
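
A purely schematic sketch of this view, with hypothetical reservation prices and a crude matching rule standing in for the law of supply and demand, and ignoring all the factors discussed below:

import random

random.seed(1)

# Hypothetical reservation prices: the maximum each buyer is willing to pay and
# the minimum each seller is willing to accept.
buyers  = sorted((random.uniform(90, 110) for _ in range(50)), reverse=True)
sellers = sorted(random.uniform(95, 115) for _ in range(50))

def clear_market(buyers, sellers):
    # The most eager buyer is matched with the cheapest seller, and a trade
    # occurs only if their prices cross.
    trades = []
    for max_price, min_price in zip(buyers, sellers):
        if max_price >= min_price:
            trades.append((max_price + min_price) / 2)   # agreed price
    return trades

prices = clear_market(buyers, sellers)
if prices:
    print(f"{len(prices)} trades at an average price of {sum(prices) / len(prices):.2f}")
else:
    print("no trades cleared")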

The definition of this model requires implementing the axiomatic rules of the market, as well as the functionality of each of the economic agents, which allow them to determine the purchase or sale prices at which they are willing to negotiate. This is where the problem lies, since this depends on very diverse and complex factors, such as the availability of information on the securities traded, the agent’s psychology and many other factors such as contingencies or speculative strategies.

In brief, this makes emulation of the system impossible in practice. It should be noted, however, that brokers and automated applications can gain a competitive advantage by identifying global patterns, or even by insider trading, although this practice is punishable by law in suitably regulated markets.

The question that can be raised is whether this impossibility of precise emulation invalidates the hypothesis put forward. If we return to the case study of Newtonian gravitation, determined by the central attractive force, it can be observed that, although functionally different, it shares a fundamental characteristic that makes emulation of the system impossible in practice and that is present in all scenarios. 

If we intend to emulate the case of the solar system, we must determine the position, velocity and angular momentum of all the celestial bodies involved (sun, planets, dwarf planets, planetoids, satellites), as well as the rest of the bodies located in the system, such as the asteroid belt, the Kuiper belt and the Oort cloud, together with the dispersed mass and energy. In addition, the shape and structure of solid, liquid and gaseous bodies must be determined. It will also be necessary to consider the effects of collisions that modify the structure of the resulting bodies. Finally, it will be necessary to consider physicochemical activity, such as geological, biological and radiation phenomena, since they modify the structure and dynamics of the bodies and are subject to quantum phenomena, which is another source of uncertainty. And even then the model is not adequate, since it is necessary to apply a relativistic model.

This makes accurate emulation impossible in practice, as demonstrated by the continuous corrections in the ephemerides of GPS satellites, or the adjustments of space travel trajectories, where the journey to Pluto by NASA’s New Horizons spacecraft is a paradigmatic case.

Conclusions

From the previous analysis it can be hypothesized that the universe is an axiomatic system governed by laws that determine a dynamic that is a consequence of the interaction and transference of the entities that compose it.

As a consequence of the interaction and transfer phenomena, the system itself can partially and approximately emulate its own behavior, which gives rise to learning processes and finally gives rise to life and intelligence. This makes it possible for living beings to interact in a complex way with the environment and for intelligent entities to observe reality and establish models of this reality.

This gave rise to abstract representations such as natural language and mathematics. With the development of IT [5] it is concluded that all objects can be represented by a set of bits, which can be processed by axiomatic rules [7] and which optimally encoded determine the complexity of the object, defined as Kolmogorov complexity [6].

The development of TC establishes that these models can be defined as a TM, so that, in the limit, it can be hypothesized that the universe is equivalent to a Turing machine and that the limits of reality may go beyond the universe itself, in what is defined as the multiverse, which would be equivalent to a UTM. This concordance between the universe and a TM allows us to hypothesize that the universe is nothing more than information processed by axiomatic rules.

Therefore, from the observation of natural phenomena we can extract the laws of behavior that constitute the abstract models (axioms), as well as the information necessary to describe the particular cases of reality (information). Since this representation is made within physical reality, it will always be approximate, so that only the universe can emulate itself exactly. Since the universe is consistent, models only corroborate this fact. But, reciprocally, the equivalence between the universe and a TM implies that the deductions made from consistent models must be satisfied by reality.

However, everything seems to indicate that this way of perceiving reality is distorted by the senses, since at the level of classical reality what we observe are the consequences of the processes that occur at that functional level, giving rise to concepts such as mass, energy and inertia.

But when we explore the layers that support classical reality, this perception disappears, since our senses do not have the direct capability for its observation, in such a way that what emerges is nothing more than a model of axiomatic rules that process information, and the physical sensory conception disappears. This would justify the difficulty to understand the foundations of reality.

It is sometimes speculated that reality may be nothing more than a complex simulation, but this poses a problem, since in such a case a support for its execution would be necessary, implying the existence of an underlying reality necessary to support such a simulation [13].

There are two aspects that have not been dealt with and that are of transcendental importance for the understanding of the universe. The first concerns irreversibility in the layer of classical reality. According to the AIT, the amount of information in a TM remains constant, so the irreversibility of thermodynamic systems is an indication that these systems are open, since they do not verify this property, an aspect to which physics must provide an answer.

The second is related to the no-cloning theorem. Quantum systems are reversible and, according to the no-cloning theorem, it is not possible to make exact copies of the unknown quantum state of a particle. But according to the recursion theorem, at least two independent processes are necessary to make a copy. This would mean that in the quantum layer it is not possible to have at least two independent processes to copy such a quantum state. An alternative explanation would be that these quantum states have a non-computable complexity.

Finally, it should be noted that the question of whether mathematics was invented or discovered by humans is flawed by an anthropic view of the universe, which considers humans as a central part of it. But it must be concluded that humans are a part of the universe, as are all the entities that make up the universe, particularly mathematics.

References

[1]E. P. Wigner, “The unreasonable effectiveness of mathematics in the natural sciences.,” Communications on Pure and Applied Mathematics, vol. 13, no. 1, pp. 1-14, 1960.
[2]R. Penrose, The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford: Oxford University Press, 1989.
[3]R. Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe, London: Jonathan Cape, 2004.
[4]J.-P. Changeux and A. Connes, Conversations on Mind, Matter, and Mathematics, Princeton N. J.: Princeton University Press, 1995.
[5]C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 1948.
[6]P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002v1 [cs:IT], 2008.
[7]M. Sipser, Introduction to the Theory of Computation, Course Technology, 2012.
[8]H. Poincaré, New Methods of Celestial Mechanics, Springer, 1992.
[9]A. M. Turing, “On computable numbers, with an application to the Entscheidungsproblem.,” Proceedings, London Mathematical Society, pp. 230-265, 1936.
[10]A. W. Senior, R. Evans et al., “Improved protein structure prediction using potentials from deep learning,” Nature, vol. 577, pp. 706-710, Jan 2020.
[11]S. Kleene, “On Notation for ordinal numbers,” J. Symbolic Logic, no. 3, p. 150–155, 1938.
[12]A. Savine, Modern Computational Finance: AAD and Parallel Simulations, Wiley, 2018.
[13]N. Bostrom, “Are We Living in a Computer Simulation?,” The Philosophical Quarterly, vol. 53, no. 211, p. 243–255, April 2003.

What is the nature of mathematics?

The ability of mathematics to describe the behavior of nature, particularly in the field of physics, is a surprising fact, especially when one considers that mathematics is an abstract entity created by the human mind and disconnected from physical reality.  But if mathematics is an entity created by humans, how is this precise correspondence possible?

Throughout centuries this has been a topic of debate, focusing on two opposing ideas: Is mathematics invented or discovered by humans?

This question has divided the scientific community: philosophers, physicists, logicians, cognitive scientists and linguists, and it can be said that not only is there no consensus, but positions are generally totally opposed. Mario Livio, in the essay “Is God a Mathematician?” [1], describes in a broad and precise way the historical events on the subject, from the Greek philosophers to our days.

The aim of this post is to analyze this dilemma, introducing new analysis tools such as Information Theory (IT) [2], Algorithmic Information Theory (AIT) [3] and the Theory of Computation (TC) [4], without forgetting the perspective offered by new knowledge about Artificial Intelligence (AI).

In this post we will make a brief review of the current state of the issue, without entering into its historical development, trying to identify the difficulties that hinder its resolution, in order to analyze the problem in subsequent posts from a perspective different from the conventional one, using the logical tools offered by the above theories.

Currents of thought: invented or discovered?

In a very simplified way, it can be said that at present the position that mathematics is discovered by humans is headed by Max Tegmark, who states in “Our Mathematical Universe” [5] that the universe is a purely mathematical entity, which would explain why mathematics describes reality with precision: reality itself would be a mathematical entity.

On the other extreme, there is a large group of scientists, including cognitive scientists and biologists who, based on the fact of the brain’s capabilities, maintain that mathematics is an entity invented by humans.

Max Tegmark: Our Mathematical Universe

In both cases, there are no arguments that would tip the balance towards one of the hypotheses. Thus, in Max Tegmark’s case he maintains that the definitive theory (Theory of Everything) cannot include concepts such as “subatomic particles”, “vibrating strings”, “space-time deformation” or other man-made constructs. Therefore, the only possible description of the cosmos implies only abstract concepts and relations between them, which for him constitute the operative definition of mathematics.

This reasoning assumes that the cosmos has a nature completely independent of human perception, and its behavior is governed exclusively by such abstract concepts. This view of the cosmos seems to be correct insofar as it eliminates any anthropic view of the universe, in which humans are only a part of it. However, it does not justify that physical laws and abstract mathematical concepts are the same entity.  

In the case of those who maintain that mathematics is an entity invented by humans, the arguments do not usually have a formal structure and it could be said that in many cases they correspond more to a personal position and sentiment. An exception is the position maintained by biologists and cognitive scientists, in which the arguments are based on the creative capacity of the human brain and which would justify that mathematics is an entity created by humans.

For these, mathematics does not really differ from natural language, so mathematics would be no more than another language. Thus, the conception of mathematics would be nothing more than the idealization and abstraction of elements of the physical world. However, this approach presents several difficulties to be able to conclude that mathematics is an entity invented by humans.

On the one hand, it does not provide formal criteria for its demonstration. But it also presupposes that the ability to learn is an attribute exclusive to humans. This is a crucial point, which will be addressed in later posts. In addition, natural language is used as a central concept, without taking into account that any interaction, no matter what its nature, is carried out through language, as shown by the TC [4], which is a theory of language.

Consequently, it can be concluded that neither current of thought presents conclusive arguments about what the nature of mathematics is. For this reason, it seems necessary to analyze from new points of view what is the cause for this, since physical reality and mathematics seem intimately linked.

Mathematics as a discovered entity

In the case that considers mathematics the very essence of the cosmos, and therefore that mathematics is an entity discovered by humans, the argument is the equivalence of mathematical models with physical behavior. But for this argument to be conclusive, the Theory of Everything should be developed, in which the physical entities would be strictly of a mathematical nature. This means that reality would be supported by a set of axioms and the information describing the model, the state and the dynamics of the system.

This means a dematerialization of physics, something that somehow seems to be happening as the deeper structures of physics are developed. Thus, the particles of the standard model are nothing more than abstract entities with observable properties. This could be the key, and there is a hint in Landauer’s principle [6], which establishes a minimum energy cost for the erasure of information, linking information and energy.

But solving the problem by physical means or, to be more precise, by contrasting mathematical models with reality presents a fundamental difficulty. In general, mathematical models describe the functionality of a certain context or layer of reality, and all of them have a common characteristic, in such a way that these models are irreducible and disconnected from the underlying layers. Therefore, the deepest functional layer should be unraveled, which from the point of view of AIT and TC is a non-computable problem.

Mathematics as an invented entity

The current of opinion in favor of mathematics being an entity invented by humans is based on natural language and on the brain’s ability to learn, imagine and create. 

But this argument has two fundamental weaknesses. On the one hand, it does not provide formal arguments to conclusively demonstrate the hypothesis that mathematics is an invented entity. On the other hand, it attributes properties to the human brain that are a general characteristic of the cosmos.

The Hippocampus: A paradigmatic example of the dilemma discovered or invented

To clarify this last point, let us take as an example the invention of whole numbers by humans, which is usually used to support this view. Let us now imagine an animal interacting with its environment. To survive, it has to interpret space-time accurately. Obviously, the animal must have learned or invented the space-time map, something much more complex than the natural numbers.

Moreover, nature has provided or invented the hippocampus [7], a neuronal structure specialized in acquiring long-term information that forms a complex convolution, forming a recurrent neuronal network, very suitable for the treatment of the space-time map and for the resolution of trajectories. And of course this structure is physical and encoded in the genome of higher animals. The question is: Is this structure discovered or invented by nature?

Regarding the use of language as an argument, it should be noted that language is the means of interaction in nature at all functional levels. Thus, biology is a language, the interaction between particles is formally a language, although this point requires a deeper analysis for its justification. In particular, natural language is in fact a non-formal language, so it is not an axiomatic language, which makes it inconsistent.

Finally, in relation to the learning capability attributed to the brain, this is a fundamental characteristic of nature, as demonstrated by mathematical models of learning and evidenced in an incipient manner by AI.

Another way of approaching the question about the nature of mathematics is through Wigner’s enigma [8], in which he asks about the inexplicable effectiveness of mathematics. But this topic and the topics opened before will be dealt with and expanded in later posts.

References

[1] M. Livio, Is God a Mathematician?, New York: Simon & Schuster Paperbacks, 2009.
[2] C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 1948.
[3] P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002v1 [cs:IT], 2008.
[4] M. Sipser, Introduction to the Theory of Computation, Course Technology, 2012.
[5] M. Tegmark, Our Mathematical Universe: My Quest For The Ultimate Nature Of Reality, Knopf Doubleday Publishing Group, 2014.
[6] R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” IBM J. Res. Dev., vol. 5, pp. 183-191, 1961.
[7] S. Jacobson and E. M. Marcus, Neuroanatomy for the Neuroscientist, Springer, 2008.
[8] E. P. Wigner, “The unreasonable effectiveness of mathematics in the natural sciences,” Communications on Pure and Applied Mathematics, vol. 13, no. 1, pp. 1-14, 1960.

COVID-19: What makes this pandemic different?

Zoonosis, or the jump from an animal virus to humans, has the characteristics of a contingent event. In principle, this leap can be limited by sanitary control of domestic animal species and by regulation of trade, contact and consumption of wild species. However, given the complexity of modern society and the close contact between humans at a global level, the probability of a virus jump to humans is not an avoidable event, so zoonosis can be considered a contingent phenomenon.

This situation has been clearly shown in recent times with the appearance of MERS (MERS-CoV), SARS (SARS-CoV) and, recently, COVID-19 (SARS-CoV-2). This spread is driven fundamentally by globalization, although the factors are multiple and complex, such as health controls and the structure of livestock farms. But the list is long, and we can also mention the expansion of other viral diseases due to climate change, such as Zika, Chikungunya or Dengue.

The question that arises in this scenario is: What factors influence the magnitude and speed of the spread of a pandemic? Thus, in the cases mentioned above, a very significant difference in the behavior and spread of infection can be seen. Except in the case of COVID-19, the spread has been limited and outbreaks have been localized and isolated, avoiding a global spread.

In contrast, the situation has been completely different with COVID-19. Its rapid expansion has caught societies unfamiliar with this type of problem unawares, so that health systems have been overwhelmed and left without appropriate protocols for the treatment of the infection. On the other hand, authorities unaware of the magnitude of the problem, and ignorant of the minimum precautions needed to prevent the spread of the virus, seem to have made a series of chained errors, typical of catastrophic processes such as economic bankruptcies and air accidents.

The long-term impact is still very difficult to assess, as it has triggered a vicious circle of events affecting fundamental activities of modern society.

In particular, the impact on health services will leave a deep imprint, extending to areas that in principle are not directly related to COVID-19, such as the psychological and psychiatric effects derived from the perception of danger and social confinement. But even more important is the diversion of resources from other health activities: the flow of daily health activity has been reduced, so a future increase in the morbidity and mortality rates of other diseases, especially cancer, is foreseeable.

To all this must be added the deterioration of economic activity, with double-digit reductions in GDP, which will trigger an increase in poverty, especially in the most disadvantaged segments of the population. And since the economic factor is the transmission belt of human activity, it is easy to imagine a perfect-storm scenario.

Factors influencing the COVID-19 pandemic

But let’s return to the question that has been raised, about the singularity of SARS-Cov-2, so that its expansion has been unstoppable and that we are now facing a second wave.

To unravel this question we can analyze what mathematical models of the spread of an infection show us, starting with the classic SIR model. This type of model allows us to determine the infection rate (β) and recovery rate (γ), as well as the basic reproduction number (R0 = β/γ), from the observed morbidity.

The origin of the SIR (Susceptible, Infectious, Recovered) models goes back to the beginning of the 20th century, being proposed by Kermack and McKendrick in 1927. The advantage of these models is that they are based on a small system of differential equations, which can be treated analytically and was therefore tractable with the means available at the time they were proposed.
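
As an illustration, a minimal numerical sketch of the classic SIR equations (dS/dt = -βSI/N, dI/dt = βSI/N - γI, dR/dt = γI), with purely illustrative values of β and γ:

# Simple Euler integration of the classic SIR model.
def sir(beta=0.30, gamma=0.10, N=1_000_000, I0=10, days=200, dt=1.0):
    S, I, R = N - I0, I0, 0
    history = []
    for _ in range(int(days / dt)):
        new_infections = beta * S * I / N * dt
        new_recoveries = gamma * I * dt
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
        history.append((S, I, R))
    return history

history = sir()
peak_day, (Sp, Ip, Rp) = max(enumerate(history), key=lambda t: t[1][1])
print(f"R0 = {0.30 / 0.10:.1f}, infection peak around day {peak_day} "
      f"with about {Ip:,.0f} simultaneous cases")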

However, these types of models are basic and do not make it easy to take into account geographical distribution, mobility, probability of infection, clinical status, the temporal development of each phase of the infection, age, sex, social distance, protection, or tracking and testing strategies. On the other hand, the classic SIR model has an exclusively deductive structure. This means that from the morbidity data only the basic reproduction number can be determined, hiding fundamental parameters of the pandemic process, as will be justified below.

To contrast this idea, it is necessary to propose new approaches to the simulation of the pandemic process, as is the case of the study proposed in “A model of the spread of Covid-19” and in its implementation. In this case, the model is a discrete SIR structure, in which individuals go through an infection and recovery process with realistic states, in addition to including all the parameters for defining the scenario mentioned above, that is, probability of infection, geographical distribution of the population, mobility, etc. This allows an accurate simulation of the pandemic and, despite its complexity, its structure is very suitable for implementation with existing computational means.

The first conclusion drawn from the simulations of the initial phase of the pandemic was the need to consider the existence of a very significant asymptomatic population. Thus, in the classical model it is possible to obtain a rapid expansion of the pandemic simply by considering high values of the infection rate (β).

On the contrary, in the discrete model the application of the existing data did not justify the observed behavior, unless there was a very significant asymptomatic population that hid the true magnitude of the spread of the infection. The symptomatic population in the early stages of the pandemic had to be considered small. This, together with the data on spread through different geographical areas and the plausible probability of infection, produced temporal results of much slower expansion that did not even trigger the priming of the model.

In summary, the result of the simulations led to totally inconsistent scenarios, until a high population of asymptomatic people was included, from which point the model began to behave according to the observed data. At present, there are already more precise statistics confirming this behavior, which establish that, within the group of infected people, around 80% are asymptomatic, 15% are symptomatic and require some type of medical attention by means of treatment or hospital admission, and the remaining 5% require anything from basic to advanced life support.

These figures help explain the virulence of a pandemic, which is strongly regulated by the percentage of asymptomatic individuals. This behavior justifies the enormous difference between the behaviors of different types of viruses. Thus, if a virus has a high morbidity it is easy to track and isolate, since the infectious cases do not remain hidden. On the contrary, a virus with low morbidity keeps the individuals who are vectors of the disease hidden, since they belong to the group of asymptomatic people. Unlike the viruses mentioned above, COVID-19 is a paradigmatic example of this scenario, with the added factor that it is a virus that has demonstrated a great capacity for contagion.

This behavior has meant that when the pandemic has shown its face there was already a huge group of individual vectors. And this has probably been the origin of a chain of events with serious health, economic and social consequences.

The mechanisms of expansion and containment of the pandemic

In retrospect, the apparent low incidence in the first few weeks suggested that the risk of a pandemic was low and that the virus was not very virulent. Obviously, this observation was clearly distorted by the concealment of the problem caused by the asymptomatic nature of the majority of those infected.

This possibly also conditioned the response to their containment. The inadequate management of the threat by governments and institutions, the lack of protection resources and the message transmitted to the population ended up materializing the pandemic.

In this context, there is one aspect that calls for deep attention. A disease with a high infectious capacity requires a very effective means of transmission, and since the first symptoms were of a pulmonary type it should have been concluded that the airway was the main means of transmission. However, much emphasis was placed on direct physical contact and social distance. The downplaying of the effect of aerosols, which, as is now being recognized, are very active in closed spaces, is remarkable.

Another seemingly insignificant nuance related to the behavior of the pandemic under protective measures should also be noted. This is related to the modeling of the pandemic. The classical SIR model assumes that the infection rate (β) and recovery rate (γ) are simply proportional to the sizes of the populations in the different states. However, this is an approach that masks the underlying statistical process and, in the case of recovery, it is also a conceptual flaw. This assumption determines the structure of the differential equations of the model, imposing a general solution of exponential type that is not necessarily the real one.

By the way, the exponential functions introduce a phase delay, which produces the effect that the recovery of an individual occurs in pieces, for example, first the head and then the legs!

But the reality is that the process of infection is a totally stochastic process that is a function of the probability of contagion determined by the capacity of the virus, the susceptibility of the individual, the interaction between infected and susceptible individuals, the geographical distribution, mobility, etc. In short, this process has a Gaussian nature.

As will be justified later, this Gaussian process is masked by the overlap of infections in different geographical areas, so it is only visible in separate local outbreaks, as a result of effective containment. An example of this can be found in the case of South Korea, represented in the figure below.

In the case of recovery, the process corresponds to a stochastic delay line and therefore Gaussian, since it only depends on the temporary parameters of recovery imposed by the virus, the response of the individual and the healing treatments. Therefore, the recovery process is totally independent for each individual.

The result is that the general solution of the discrete SIR model is Gaussian and therefore corresponds to an exponential of a quadratic, unlike the first-order exponential functions of the classical SIR model. This makes the protection measures much more effective than conventional models suggest, so they must be considered a fundamental element in determining the strategy for the containment of the pandemic.

The point is that once a pandemic is evident, containment and confinement measures must be put in place. It is at this point that COVID-19 poses a challenge of great complexity, as a result of the large proportion of asymptomatic individuals, who are the main contributors to the spread of infection.

A radical solution to the problem requires strict confinement of the entire population for a period no less than the latency period of the virus in an infected person. To be effective, this measure must be accompanied by protective measures in the family or close environment, as well as extensive screening campaigns. This strategy has shown its effectiveness in some Asian countries. 

In reality, early prophylaxis and containment are the only measures that effectively contain the pandemic, as the model output for different containment dates shows. Interestingly, the dispersion of the curves in the onset region of the model is a consequence of its stochastic nature.
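The effect of the containment date can be illustrated with a deliberately simple deterministic sketch, not the model referred to above, in which the transmission rate is cut on a given day. All parameter values (β before and after containment, γ, population size) are hypothetical, so the printed figures are not predictions, only an illustration of how strongly the final size depends on acting early.

```python
# Illustrative comparison of containment start dates: beta is cut from 0.4 to
# 0.1 on day T and the total number of people ever infected is reported.
def final_size(t_containment, beta0=0.4, beta1=0.1, gamma=0.1,
               n=1_000_000, i0=10, days=365):
    s, i, r = n - i0, float(i0), 0.0
    for t in range(days):
        beta = beta0 if t < t_containment else beta1   # containment cuts transmission
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return n - s   # everyone who was ever infected

for t0 in (20, 40, 60, 80):
    print(f"containment on day {t0:3d}: {final_size(t0):12.0f} total infections")
```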

But the late implementation of this measure, when the number of hidden infected people was already very high, together with the lack of a culture of prophylaxis against pandemics in Western countries, has meant that these measures have been ineffective and very damaging.

In this regard, it should be noted that the position of governments has been lukewarm and in most cases totally erratic, which has contributed to the confinement measures being followed very laxly by the population.

Here it is important to note that in the absence of effective action, governments have based their distraction strategy on the availability of a vaccine, which is clearly not a short-term solution.

As a consequence of the ineffectiveness of this measure, the period of confinement was excessively prolonged, with restrictions being lifted once morbidity and mortality statistics had fallen. The result is that, since the virus is widespread in the population, new waves of infection have inevitably occurred.

This is another important aspect in interpreting the figures on the pandemic's spread. According to the classical SIR model, the progression of the figures should show a peak of infections followed by an exponential decline. Throughout the first months, those responsible for the control of the pandemic were looking for this peak, as well as for the flattening of the cumulative curve of total cases: something that was expected but never seemed to arrive.

The explanation for this phenomenon is quite simple. The spread of the pandemic is not confined to a closed group of individuals, as the classical SIR model assumes. Rather, the spread of the virus is a function of geographic areas with specific population densities and of the mobility of individuals between them. The result is that the curves describing the pandemic are a complex superposition of the contributions of this whole conglomerate, as shown by the curve of deaths in Spain on the dates indicated.

The result is that the process can be spread out over time, so that the dynamics of the curves are a complex overlap of outbreaks that evolve according to multiple factors, such as population density and mobility, protective measures, etc. 
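This superposition effect can be sketched by adding up a few hypothetical local outbreaks, each roughly Gaussian in time but starting on different dates. The onset dates, widths and sizes below are invented for illustration; the point is only that the aggregate curve becomes broad and can plateau instead of showing the single sharp peak predicted by a closed-group model.

```python
# Illustrative superposition of local outbreaks into a broad national curve.
import numpy as np

days = np.arange(0, 200)
outbreaks = [
    # (peak day, width in days, total cases) for each hypothetical local outbreak
    (40, 8, 20_000),
    (55, 10, 35_000),
    (70, 12, 15_000),
    (95, 15, 25_000),
]

national = np.zeros_like(days, dtype=float)
for peak, width, size in outbreaks:
    local = np.exp(-0.5 * ((days - peak) / width) ** 2)   # Gaussian-shaped local outbreak
    national += size * local / local.sum()                # scale so its area equals `size`

plateau = days[national >= 0.9 * national.max()]
print("Apparent national peak on day:", int(days[national.argmax()]))
print("Days within 90% of the peak:", int(plateau.min()), "to", int(plateau.max()))
```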

This indicates that the concepts used to describe the spread of pandemics need to be thoroughly reviewed. This should not be surprising if we consider that, throughout history, there have been no reliable data that would have allowed their behavior to be verified.

Evolution of morbidity and mortality

Another interesting aspect is the study of the evolution of the morbidity and mortality of SARS-CoV-2. For this purpose, case records can be used, especially now that data from a second wave of infection are beginning to become available, as shown in the figure below.

In view of these data, a premature conclusion could be drawn, asserting that the virus is affecting the population with greater virulence, increasing morbidity, while on the other hand it could also be said that mortality is decreasing dramatically.

But nothing could be further from reality if we consider the procedure for obtaining the data on diagnosed cases. It can be seen that the magnitude of the curve of diagnosed cases in the second phase is greater than in the first phase, which would indicate greater morbidity. However, in the first phase the diagnosis was mainly symptomatic, given the lack of testing resources. In the second phase, by contrast, diagnosis was made both symptomatically and by means of tests, PCR and serology.

This has only brought to light the magnitude of the group of asymptomatic infected people, who were hidden in the first phase. Therefore, we cannot speak of greater morbidity. On the contrary, if we look at the slope of the curve, it is gentler, indicating that the probability of infection is much lower than that observed in March. This is a clear indication that the protective measures are effective. And they would be even more so if discipline were greater and the messages converged on this measure, instead of creating confusion and uncertainty.

If the slopes of the case curves are compared, it is clear that the expansion of the pandemic in the first phase was very abrupt, as a result of the existence of a multitude of asymptomatic vectors and the absolute lack of prevention measures. In the second phase the slope is gentler, attributable to the prevention measures. The slopes differ by a factor of approximately 4.

However, it is possible that without prevention measures the second phase would have been much more aggressive, considering that the number of vectors of infection is now very likely much higher than in the first phase, since the pandemic is much more widespread. Without those measures, the spread factor could therefore have been much higher in the second phase.

In terms of mortality, the deceased/diagnosed ratio seems to have dropped dramatically, which would suggest that the lethality of the virus has fallen. At the peak of the first phase its value was approximately 0.1, while in the second phase it is approximately 0.01, that is, an order of magnitude lower.

But considering that asymptomatic cases were hidden in the first-phase figures for diagnosed cases, the two ratios are not comparable. Obviously, the term corresponding to the asymptomatic cases would explain this apparent decrease, although we must also consider that real mortality has decreased as a result of improved treatment protocols.
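A purely illustrative calculation makes this correction explicit: if only a fraction f of infections were actually diagnosed in the first phase, the naive deceased/diagnosed ratio overestimates the underlying fatality ratio by roughly a factor of 1/f. The values of f below are hypothetical; only the 0.1 and 0.01 ratios come from the figures quoted above.

```python
# Illustrative ascertainment correction for the deceased/diagnosed ratio.
naive_ratio_phase1 = 0.10   # ratio reported around the first-phase peak
naive_ratio_phase2 = 0.01   # ratio in the second phase

for f in (0.1, 0.2, 0.5):   # assumed fraction of infections diagnosed in phase 1 (hypothetical)
    adjusted = naive_ratio_phase1 * f
    print(f"ascertainment {f:.0%} in phase 1 -> adjusted ratio {adjusted:.3f}"
          f" (vs {naive_ratio_phase2:.3f} in phase 2)")
```

With an ascertainment of around 10% in the first phase, the two ratios would be of the same order, which is consistent with the argument that the apparent drop in lethality is largely a measurement effect.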

Consequently, it is not possible to draw conclusions about the evolution of the lethality of the virus. What is certain is that the mortality figures are decreasing for two reasons: one virtual, the availability of more reliable figures for the number of infected people, and one real, the result of improved treatment protocols.

Strategies for the future

At present, it seems clear that the spread of the virus is a consolidated fact, so the only possible strategy in the short and medium term is to limit its impact. In the long term, the availability of a vaccine could finally eradicate the disease, although the possibility of the disease becoming endemic or recurrent will also have to be considered.

For this reason, and considering the implications of the pandemic for human activity of all kinds, future plans must be based on a strategy of optimization, so as to minimize the impact on the general health of the population and on the economy, since increased poverty may have a greater impact than the pandemic itself.

From this point of view, and considering the aspects analyzed above, the strategy should be based on the following points:

  • Strict protection and prophylaxis measures: masks, cleaning, ventilation and social distancing in all areas.
  • Protection of the segments of the population at risk.
  • Maintenance, as far as possible, of economic and daily activities.
  • Social awareness: voluntary declaration and isolation in case of infection, and compliance with regulations without the need for coercive measures.
  • An organizational structure for mass testing, tracing and isolation of infected individuals.

It is important to note that, as experience is showing, aggressive containment measures are not adequate to prevent successive waves of infection and are generally highly ineffective, producing distrust and rejection, which acts as a brake on the fight against the pandemic.

Another interesting aspect is that the implementation of the previous points does not correspond to strictly health-related projects, but rather to resource management and control projects. For this reason, the activities aimed at fighting the pandemic must be ad hoc projects, since the pandemic is a temporary, exceptional event to which specific efforts must be devoted.

Channeling the effort through organizations such as the health system itself will only result in the destructuring of that organization and a dispersion of resources, since it was not created for this task nor does it have the profile to carry it out.