Tag Archives: Philosophy

The perception of time

In the post “What is the nature of time?” the essence of time has been analyzed from the point of view of physics. Several conclusions have been drawn from it, which can be summarized in the following points:

  • Time is an observable that emerges at the classical level from quantum reality.
  • Time is determined by the sequence of events that determines the dynamics of classical reality.
  • Time is not reversible, but is a unidirectional process determined by the sequence of events (arrow of time), in which entropy grows in the direction of the sequence of events. 
  • Quantum reality has a reversible nature, so the entropy of the system is constant and therefore its description is an invariant.
  • The space-time synchronization of events requires an intimate connection of space-time at the level of quantum reality, which is deduced from the theory of relativity and quantum entanglement.

Therefore, a sequence of events can be established which allows describing the dynamics of a classical system (CS) in the following way:

CS = {… Si-2, Si-1, Si, Si+1, Si+2,…}, where Si is the state of the system at instant i.

This perspective has as a consequence that from a perceptual point of view the past can be defined as the sequence {… S-2, S-1}, the future as the sequence {S+1, S+2,…} and the present as the state S0.

At this point it is important to emphasize that these states are perfectly distinguishable from a sequential conception (time) since the amount of information of each state, determined by its entropy, verifies that:

  H(Si) < H(Si+1) [1].
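This entropy ordering can be checked with a toy numerical model (an illustration of the idea, not a calculation from [1]): a probability distribution that diffuses step by step, whose Shannon entropy grows monotonically along the sequence of states.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H = -sum p log2 p, ignoring empty bins."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy classical system: a distribution that spreads (diffuses) from
# one state S_i to the next, as in a random walk.
n_cells = 101
state = np.zeros(n_cells)
state[n_cells // 2] = 1.0  # initially concentrated: minimal entropy

entropies = []
for i in range(5):
    entropies.append(shannon_entropy(state))
    # one diffusion step: each cell leaks half its mass to neighbours
    state = 0.5 * state + 0.25 * np.roll(state, 1) + 0.25 * np.roll(state, -1)

# H(S_i) < H(S_{i+1}): entropy grows along the sequence of states
assert all(h0 < h1 for h0, h1 in zip(entropies, entropies[1:]))
print(entropies)
```

The diffusion step is doubly stochastic, which is exactly the kind of dynamics under which entropy cannot decrease; the sequence of states is therefore distinguishable by its entropy alone.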

Therefore, it seems necessary to analyze how this sequence of states can be interpreted by an observer, the process of perception being a very prominent factor in the development of philosophical theories on the nature of time.

Without going into the foundation of these theories, since we have exhaustive references on the subject [2], we will focus on how the sequence of events produced by the dynamics of a system can be interpreted from the point of view of the mechanisms of perception [3] and from the perspective currently offered by the knowledge on Artificial Intelligence (AI) [4].

Nevertheless, let us make a brief note on what physical time means. According to the theory of relativity, space-time can be pictured as if the vacuum were filled with a network of clocks and measuring rods forming a reference system, in such a way that its geometry depends on gravitational effects and on the relative velocity of the observer’s own reference system. It is at this point that we can go a step further in the interpretation of time, by considering the observer as a perceptive entity and establishing a relationship between physics and perception.
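As a brief numerical aside (the standard special-relativistic time-dilation formula, not a claim specific to this post), the dependence of elapsed time on the observer's reference system can be sketched with the Lorentz factor:

```python
import math

def lorentz_gamma(v, c=299_792_458.0):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# One second of proper time in a moving frame, as read by the
# "network of clocks" at rest: elapsed coordinate time is gamma * 1 s.
c = 299_792_458.0
for beta in (0.1, 0.5, 0.9):
    gamma = lorentz_gamma(beta * c)
    print(f"v = {beta:.1f} c  ->  1 s of proper time reads {gamma:.3f} s")
```

The point is only that the temporal magnitudes are frame-dependent, while (as argued below) the ordering of causally connected events is not.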

The physical structure of space-time

What we are going to discuss next is whether the sequence of states {… S-2, S-1, S0, S+1, S+2,…} is a physical reality or, on the contrary, a purely mathematical construction, such that the concepts of past, present and future are exclusively a consequence of the perception of this sequence of states. This would mean that the only physical reality is the state of the system S0, and that the sequences {… S-2, S-1} and {S+1, S+2,…} are an abstraction or fiction created by the mathematical model.

The contrast between these two views has an immediate consequence. In the first case, in which the sequence of states is physically real, the physical system would comprise the whole set of states {… S-2, S-1, S0, S+1, S+2,…}, implying a physical behavior different from that of the observed universe; this reinforces the strictly mathematical nature of the sequence of states.

In the second hypothesis there would only be a physical reality determined by the state of the system S0, in such a way that physical time would be an emergent property, consequence of the entropy difference between states that would differentiate them and make them observable.

This conception must be consistent with the theory of relativity. This is possible if we consider that one consequence of its postulates is the causality of the system: the sequence of events is the same in all reference systems, even though the space-time geometry, and therefore the emergent space-time magnitudes, differ in each of them.

At this point one could posit as fundamental postulates of the theory of relativity the invariance of the sequence of events and covariance. But this is another subject.

Past, present and future

From this physical conception of space-time, the question that arises is how this physical reality determines or conditions an observer’s perception of time.

Thus, the post “The predictive brain” indirectly discussed the ability of neural tissue to process time, which allows higher living beings to interact with the environment. This requires not only establishing space-time models, but also making space-time predictions [5]. Time perception therefore requires discriminating time intervals of the order of milliseconds, in order to coordinate in real time the stimuli produced by the sensory organs and the actions that activate the motor organs. These functions are distributed across the brain and involve multiple neural structures, such as the basal ganglia, cerebellum, hippocampus and cerebral cortex [6] [7].

To this we must add that the brain is capable of establishing long-term timelines, as shown by the human perception of time [8], which makes it possible to build a narrative of the sequence of events, one influenced by the subjective interest of those events.

This indicates that when we speak generically of “time” we should establish the context to which we refer. Thus, when we speak of physical time we would be referring to relativistic time, as the time that elapses between two events and that we measure by means of what we define as a clock.

But when we refer to the perception of time, a perceptual entity, human or artificial, interprets the past as something physically real, based on the memory provided by classical reality. But such reality does not exist once the sequence of events has elapsed, since physically only the state S0 exists, so that the states Si, i<0, are only a fiction of the mathematical model. In fact, the very foundation of the mathematical model shows, through chaos theory [9], that it is not possible to reconstruct the states Si, i<0, from S0. In the same way it is not possible to define the future states, although here an additional element appears determined by the increase of the entropy of the system.
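This irrecoverability can be illustrated with a toy example (the logistic map, a standard textbook chaotic system, not one used in [9] specifically): each state has more than one preimage, so the past state S-1 cannot be uniquely recovered from S0, and nearby trajectories diverge so fast that numerical reconstruction of past or future states is hopeless.

```python
import math

def logistic(x, r=4.0):
    """One step of the chaotic logistic map."""
    return r * x * (1.0 - x)

def preimages(y, r=4.0):
    """Both solutions of r*x*(1-x) = y: the past is not unique."""
    d = math.sqrt(1.0 - 4.0 * y / r)
    return (1.0 - d) / 2.0, (1.0 + d) / 2.0

y = logistic(0.2)        # S0 produced from S-1 = 0.2
print(preimages(y))      # two equally valid "pasts", ~0.2 and ~0.8

# Sensitivity to initial conditions: two almost identical states
# separate by many orders of magnitude within a few dozen steps.
a, b = 0.2, 0.2 + 1e-9
for _ in range(25):
    a, b = logistic(a), logistic(b)
print(abs(a - b))        # initial 1e-9 gap amplified enormously
```

The two preimages show the non-invertibility; the divergence shows why even the invertible-looking equations give no practical access to the sequence {… S-2, S-1}.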

With this, we are hypothesizing that the classical universe is S≡S0, and that the states Si, i≠0 have no physical reality (another thing is the quantum universe, which is reversible, so all its states have the same entropy! Although at the moment it is nothing more than a set of mathematical models). Colloquially, this would mean that the classical universe does not have a repository of Si states. In other words, the classical universe would have no memory of itself.

Thus, it is S that supports the memory mechanisms and this is what makes it possible to make a virtual reconstruction of the past, giving support to our memories, as well as to areas of knowledge such as history, archeology or geology. In the same way, state S provides the information to make a virtual construction of what we define as the future, although this issue will be argued later. Without going into details, we know that in previous states we have had some experiences that we store in our memory and in our photo albums.

Therefore, according to this hypothesis it can be concluded that the concepts of past and future do not correspond to a physical reality, since the sequences of states {… S-2, S-1} and {S+1, S+2,…} have no physical existence: they are only a mathematical artifact. Past and future are thus virtual constructs, materialized on the basis of the present state S through the mechanisms of perception and memory. The question that arises, and that we will try to answer, is how the mechanisms of perception construct these concepts.

Mechanisms of perception

Natural processes are determined by the dynamics of the system in such a way that, according to the proposed model, only what we define as the present state S exists. Consequently, if the past and the future have no physical reality, it is worth asking whether plants or inanimate beings are aware of the passage of time.

It is obvious that for humans the answer is yes, otherwise we would not be talking about it. And the reason for this is the information about the past contained in the state S. But this requires the existence of information processing mechanisms that make it possible to virtually construct the past. Similarly, these mechanisms may allow the construction of predictions about future states that constitute the perception of the future [10].

For this, the cognitive function of the brain requires the coordination of neural activity at different levels, from neurons, neural circuits, to large-scale neural networks [7]. As an example of this, the post “The predictive brain” highlights the need to coordinate the stimuli perceived by the sensory organs with the motor organs, in order to be able to interact with the environment. Not only that, but it is essential for the neural tissue to perform predictive processing functions [5], thus overcoming the limitations caused by the response times of neurons.

As already indicated, the perception of time involves several neural structures, which allow the measurement of time at different scales. Thus, the cerebellum allows establishing a time base on the scale of tens of milliseconds [11], analogous to a spatiotemporal metric. Since the dynamics of events is something physical that modifies the state of the system S, the measurement of these changes by the brain requires a physical mechanism that memorizes these changes, analogous to a delay line, which seems to be supported by the cerebellum.

However, this estimation of time cannot be considered at the psychological level as a high-level perceptual functionality, since it is only effective within very short temporal windows, necessary for the performance of functions of an automatic or unconscious nature. For this reason, one could say that time as a physical entity is not perceived by the brain at the conscious level. Thus, what we generally define as time perception is a relationship between events that constitute a story or narrative. This involves processes of attention, memory and consciousness supported in a complex way, involving structures from the basal ganglia to the cerebral cortex, with links between temporal and non-temporal perception mechanisms [12] [13].

Given the complexity of the brain and the mechanisms of perception, attention, memory and self-awareness, it is not possible, at least for the time being, to understand in detail how humans construct temporal stories. Fortunately, we now have AI models that allow us to understand how this may be possible and how stories and narratives can be constructed from the sequential perception of daily life events. A paradigmatic example is the “Large Language Models” (LLMs), which, based on natural language processing (NLP) techniques and neural networks, are capable of understanding, summarizing, generating and predicting new content, and which raise the debate on whether human cognitive capabilities could emerge in these generic models if provided with sufficient processing resources and training data [14].

Without delving into this debate, today anyone can verify through this type of applications (ChatGPT, BARD, Claude, etc.) how a completely consistent story can be constructed, both in its content and in its temporal plot, from the human experiences reflected in written texts with which these models have been trained.

Taking these models as a reference provides solid evidence on perception in general and on the perception of time in particular. It should be noted that these models also show how new properties emerge in their behavior as their complexity grows [15]. This gives a clue as to how new perceptual capabilities, or even concepts such as self-awareness, might emerge, although this last point is purely speculative; were it to happen, it would raise the problem discussed in the post “Consciousness from the AI point of view” of how to know that an entity is self-aware.

But returning to the subject at hand, what is really important from the point of view of the perception of the passage of time is how the timeline of stories or narratives is a virtual construction that transcends physical time. Thus, the chronological line of events does not refer to a measure of physical time, but is a structure in which a hierarchy or order is established in the course of events.

Virtual perception of time

It can therefore be concluded that the brain only needs to measure physical time in the very short term, in order to interact with the physical environment. Beyond that, all that is needed is to establish a chronological order without a precise reference to physical time. Thus we can refer to an hour, day, month or year, or to another event, as a way of ordering events, but always within a purely virtual context. This helps explain how the passage of time is perceived: virtual time stretches according to the amount of information or the relevance of events, something evident in playful or stressful situations [16].

Conclusions

The first conclusion that results from the above analysis is the existence of two conceptions of time. One relates to physical time, corresponding to the sequence of states of a physical system; the other corresponds to the stimuli this sequence of states produces on a perceptive intelligence.

Both concepts are elusive when it comes to understanding them. We are able to measure physical time with great precision; however, the theory of relativity shows space-time to be an emergent reality that depends on the reference system, and the synchronization of clocks and the establishment of a space-measuring structure may seem somewhat contrived, oriented simply to the understanding of space-time from the point of view of physics. On the other hand, the comprehension of cognitive processes still holds many unknowns, although new developments in AI allow us to intuit its foundation, which sheds some light on the concept of psychological time.

The interpretation of time as the sequence of events or states occurring within a reference system is consistent with the theory of relativity and also allows for a simple justification of the psychological perception of time as a narrative.

The hypothesis that the past and the future have no physical reality and that, therefore, the universe keeps no record of the sequence of states, supports the idea that these concepts are an emergent reality at the cognitive level, so that the conception of time at the perceptual level would be based on the information contained in the current state of the system, exclusively. 

From the point of view of physics this hypothesis does not contradict any physical law. Moreover, it can be considered fundamental in the theory of relativity, since it ensures a causal behavior that would settle the question of temporal irreversibility and the impossibility of traveling to either the past or the future. In addition, the invariance of the time sequence supports the concept of causality, which is fundamental for the emergent system to be logically consistent.

References

[1] F. Schwabl, Statistical Mechanics, pp. 491-494, Springer, 2006.
[2] N. Emery, N. Markosian and M. Sullivan, “Time,” The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.). [Online]. Available: https://plato.stanford.edu/archives/win2020/entries/time/.
[3] E. R. Kandel, J. H. Schwartz, S. A. Siegelbaum and A. J. Hudspeth, Principles of Neural Science, McGraw-Hill, 2013.
[4] F. Emmert-Streib, Z. Yang, S. Tripathi and M. Dehmer, “An Introductory Review of Deep Learning for Prediction Models With Big Data,” Front. Artif. Intell., 2020.
[5] W. Wiese and T. Metzinger, “Vanilla PP for philosophers: a primer on predictive processing,” in Philosophy and Predictive Processing, T. Metzinger and W. Wiese, Eds., pp. 1-18, 2017.
[6] J. Hawkins and S. Ahmad, “Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex,” Frontiers in Neural Circuits, vol. 10, no. 23, 2016.
[7] S. Rao, A. Mayer and D. Harrington, “The evolution of brain activation during temporal processing,” Nature Neuroscience, vol. 4, pp. 317-323, 2001.
[8] V. Evans, Language and Time: A Cognitive Linguistics Approach, Cambridge University Press, 2013.
[9] R. Bishop, “Chaos,” The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), 2017. [Online]. Available: https://plato.stanford.edu/archives/spr2017/entries/chaos/. [Accessed: 7 Sep. 2023].
[10] A. Nayebi, R. Rajalingham, M. Jazayeri and G. R. Yang, “Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes,” arXiv:2305.11772v2, 2023.
[11] R. B. Ivry, R. M. Spencer, H. N. Zelaznik and J. Diedrichsen, “The Cerebellum and Event Timing,” Annals of the New York Academy of Sciences, vol. 978, 2002.
[12] W. J. Matthews and W. H. Meck, “Temporal cognition: Connecting subjective time to perception, attention, and memory,” Psychol. Bull., vol. 142, no. 8, pp. 865-907, 2016.
[13] A. Kok, Functions of the Brain: A Conceptual Approach to Cognitive Neuroscience, Routledge, 2020.
[14] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean and W. Fedus, “Emergent Abilities of Large Language Models,” Transactions on Machine Learning Research, 2022. [Online]. Available: https://openreview.net/forum?id=yzkSU5zdwD.
[15] T. Webb, K. J. Holyoak and H. Lu, “Emergent Analogical Reasoning in Large Language Models,” Nature Human Behaviour, vol. 7, pp. 1526-1541, 2023.
[16] P. U. Tse, J. Intriligator, J. Rivest and P. Cavanagh, “Attention and the subjective expansion of time,” Perception & Psychophysics, vol. 66, pp. 1171-1189, 2004.

Teleportation: Fact and Fiction

When we talk about teleportation, we quickly remember science fiction stories in which both people and artifacts are teleported over great distances instantaneously, overcoming the limitations of relativistic physical laws.

Considering that the theoretical possibility of teleporting quantum information was proposed in the scientific literature by Bennett et al. [1] (1993), and later demonstrated experimentally by Bouwmeester et al. (1997) [2] and Boschi et al. (1998) [3], we may ask how much truth there is in this scenario.

For this reason, the aim of this post is to expose the basics of quantum teleportation, analyze its possible practical applications and clarify what is true in the scenarios proposed by science fiction.

Fundamentals of quantum teleportation

Before delving into the fundamentals, it should be clarified that quantum teleportation consists of converting the quantum state of a system into an exact replica of the unknown quantum state of another system with which it is quantum entangled. Teleportation therefore in no way means the transfer of matter or energy. And, as we will see below, teleportation does not violate the no-cloning theorem [4] [5] either.

Thus, the model proposed by Bennett et al. [1] is the one shown in the figure below, consisting of a set of quantum logic gates that process the states of three qubits, named A, B and ancillary. Qubit A corresponds to the system whose state is to be teleported, while qubit B is the system onto which the quantum state of system A is transferred. The ancillary qubit is needed to perform the transfer.

Once the three qubits have been processed by the logic gates located up to the point indicated by ③, they are quantum entangled [6] [7] [8] [9], in such a way that when a measurement is performed on qubit A and the ancillary qubit ④, their state collapses into one of the possible states (|00〉, |01〉, |10〉, |11〉).

From this information, qubit B is processed by a quantum gate U, whose functionality depends on the state obtained from the measurement performed on qubits A and ancillary, according to the following criterion, where I, X, Z are Pauli gates.

  • |00〉 → U = I.
  • |01〉 → U = X.
  • |10〉 → U = Z.
  • |11〉 → U = XZ.

As a consequence, the state of qubit B corresponds to the original state of qubit A, which in turn is modified by the measurement process. This means that once the measurement of qubit A and the ancillary qubit is performed, their state collapses, in accordance with the no-cloning theorem [4] [5], which establishes the impossibility of creating copies of a quantum state.
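The protocol just described can be checked with a minimal statevector simulation in NumPy (a sketch of the standard teleportation circuit, not a reproduction of the figure above): an arbitrary state of qubit A is teleported onto qubit B via a Bell pair, the measurement outcomes select the Pauli correction from the table above, and the final fidelity is 1.

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, qubit):
    """Lift a 1-qubit gate to the 3-qubit space (0 = A, 1 = ancillary, 2 = B)."""
    mats = [I, I, I]
    mats[qubit] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def cnot(control, target):
    """CNOT on the 3-qubit space, built from control projectors."""
    P0 = np.array([[1, 0], [0, 0]], dtype=complex)
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)
    m0, m1 = [I, I, I], [I, I, I]
    m0[control] = P0
    m1[control], m1[target] = P1, X
    return (np.kron(np.kron(m0[0], m0[1]), m0[2])
            + np.kron(np.kron(m1[0], m1[1]), m1[2]))

rng = np.random.default_rng(0)

# Arbitrary (normalized) unknown state of qubit A to be teleported
alpha, beta = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
alpha, beta = alpha / norm, beta / norm

# Initial state |psi>_A |0>_anc |0>_B
state = np.kron(np.array([alpha, beta]), np.array([1, 0, 0, 0], dtype=complex))

state = cnot(1, 2) @ op(H, 1) @ state   # Bell pair on (ancillary, B)
state = op(H, 0) @ cnot(0, 1) @ state   # entangle A with the pair

# Measure qubits A and ancillary in the computational basis
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m_a, m_anc = (outcome >> 2) & 1, (outcome >> 1) & 1

# Project onto the measured outcome and renormalize
mask = np.array([((k >> 2) & 1) == m_a and ((k >> 1) & 1) == m_anc
                 for k in range(8)])
state = np.where(mask, state, 0)
state /= np.linalg.norm(state)

# Conditional correction on B: |00> -> I, |01> -> X, |10> -> Z, |11> -> XZ
if m_anc:
    state = op(X, 2) @ state
if m_a:
    state = op(Z, 2) @ state

# Extract the state of qubit B and compare with the original state of A
b_state = state.reshape(2, 2, 2)[m_a, m_anc, :]
fidelity = abs(np.vdot(np.array([alpha, beta]), b_state)) ** 2
print(round(fidelity, 6))  # -> 1.0
```

Note that only the two classical measurement bits travel from A's side to B's side; the amplitudes alpha and beta are never read out, consistent with the no-cloning theorem.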

From a practical point of view, once the three qubits are entangled, qubit B can be moved to another spatial position, a displacement constrained by the laws of relativity, so the velocity of qubit B cannot exceed the speed of light. On the other hand, the result of the measurement of qubit A and the ancillary qubit must be transferred to the location of qubit B by means of a classical information channel, so the speed of this information transfer cannot exceed the speed of light either. The result is that teleportation makes it possible to transfer the state of a quantum particle to another, remotely located quantum particle, but this transfer is bound by the laws of relativity and cannot exceed the speed of light.

It is very important to note that in reality the only thing transferred between qubit A and qubit B is the information describing the wave function, since the particles that physically support the qubits are not themselves teleported. This raises a fundamental question about the meaning of teleportation at the level of classical reality, which we will analyze in the context of complex systems consisting of multiple qubits.

But a fundamental aspect in determining the nature of information is the fact that teleportation is based on the transfer of information, which is another indication that information is the support of reality, as we concluded in the post “Reality as an Information Process”.

Quantum teleportation of macroscopic objects

Analogous to the teleportation scenario proposed by Bennett et al [1], it is possible to teleport the quantum state of a complex system consisting of N quantum particles. As shown in the figure below, teleportation from system A to system B requires the use of N ancillary qubits.

This is because the number of combinations of the coefficients ai of the wave function |ψC〉 and their signs is of the order of 2^(2N). Thus, when the qubits of system A and the ancillary qubits are measured, 2N classical bits are obtained, which encode the 2^(2N) configurations of the unitary transform U. The coefficients of the wave function |ψC〉 can then be rearranged, transforming the wave function of system B into |ψ〉.
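As a quick sanity check of these counts (simple arithmetic, not new physics), the classical side-information grows linearly in N while the set of possible corrections grows exponentially:

```python
# Teleporting an N-qubit state: measuring system A plus the N
# ancillary qubits yields 2N classical bits, which select one of
# 2**(2N) possible correction operators U.
for n in (1, 2, 10):
    bits = 2 * n
    configs = 2 ** (2 * n)
    print(f"N = {n:2d}: {bits} classical bits, {configs} possible U")
```

For N = 1 this reduces to the two bits and four Pauli corrections of the single-qubit protocol above.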

Consequently, from the theoretical point of view, the teleportation of complex quantum systems consisting of a large number of particles is possible. However, its practical realization faces the difficulty of maintaining the quantum entanglement of all particles, as a consequence of quantum decoherence [10]. This causes the quantum particles to no longer be entangled as a consequence of the interaction with the environment, which causes the transferred quantum information to contain errors.

Since the decoherence effect grows exponentially with the number of particles forming the quantum system, it is evident that the teleportation of N-particle systems is in practice a huge challenge, since the complete setup comprises 3N particles. The difficulty is even greater when one considers that, while the teleportation scenario is being prepared, system A, system B and the ancillary qubits will be in the same location, but system B must subsequently be moved to another space-time location for the teleportation to make any practical sense. This places system B under physical conditions that make decoherence much more likely and produce a higher error rate in the transferred quantum state with respect to the original quantum state of system A.
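A crude back-of-the-envelope model (an assumption made here for illustration, not a result from [10]) shows how steeply the odds fall with object size if each particle is taken to survive decoherence independently:

```python
# Toy model: if each of the 3N particles independently stays coherent
# with probability f, the chance of an error-free teleportation falls
# as f ** (3 * N) -- exponentially in the size of the object.
f = 0.999999  # optimistic per-particle "coherence survival" (assumed)
for n in (1e3, 1e9, 1e23):  # from a small register to ~a gram of matter
    p_ok = f ** (3 * n)
    print(f"N = {n:.0e}: P(no decoherence error) ~ {p_ok:.3e}")
```

Even with a per-particle failure rate of one in a million, anything approaching macroscopic scale has an effectively zero chance of arriving intact, which is the point made in the text.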

But suppose that these limitations are overcome in such a way that it is possible in practice to teleport macroscopic objects, even objects of a biological nature. The question arises: what properties of the teleported object are transferred to the receiving object?

In principle, it can be assumed that the receiving object has the same properties as the original object from the point of view of classical reality, since after the teleportation is completed the receiving object has the same wave function as the teleported object.

In the case of inanimate objects it can be assumed that the classical properties of the receiving object are the same as those of the original object, since its wave function is exactly the same. This must be so since the observables of the object are determined by the wave function. This means that the receiving object will not be distinguishable from the original object, so for all intents and purposes it must be considered the same object. But from this conclusion the question again arises as to what is the nature of reality, since the process of teleportation is based on the transfer of information between the original object and the receiving object. Therefore, it seems obvious that information is a fundamental part of reality.

Another issue is the teleportation of biological objects. In this case the same argument could be used as for inanimate objects. However, it must be considered that in the framework of classical reality decoherence plays a fundamental role, since classical reality emerges as a consequence of the interaction of quantum systems, which observe one another, producing the collapse of their wave functions and the emergence of states of classical reality.

This makes the entanglement of biological systems required for teleportation incompatible with what is defined as life, since this process would inhibit decoherence and therefore the emergence of classical reality. This issue has already been discussed in the posts Reality as an irreducible layered structure and A macroscopic view of Schrödinger’s cat, which make clear that a living being is a set of independent quantum systems, not entangled among themselves. The entanglement of all these systems would therefore require the inhibition of all biological activity, something that would certainly have a profound effect on what is defined as a living being.

Moreover, if teleportation is to be used to move an object to another location, system B must be relocated to that location before measurements are made on system A and the ancillary system, a displacement governed by the laws of relativity. Once the measurement has been performed, the resulting information must then be transferred to the location of system B, a transfer also limited by relativity. In short, the teleportation process has no practical advantage over a classical transport process, especially considering that it is also susceptible to quantum errors.

Consequently, the applications of quantum teleportation are limited to the implementation of quantum networks and quantum computing systems, whose structure can be found in the specialized literature [11] [12].

A bit of theory

The functionality of quantum systems is based on tensor calculus and quantum computation [13]. In particular, in order to illustrate the mathematical foundation underpinning quantum teleportation, the figure below shows the functionality of the Hadamard and CNOT logic gates needed to implement quantum teleportation.

Additionally, the following figure shows the functionality of the Pauli gates, necessary to perform the transformation of the wave function of qubit B, once the measurement is performed on the A and auxiliary qubits.
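Since the figures themselves are not reproduced here, the same gate matrices can be written out and checked numerically (these are the standard definitions found in [13]): a Hadamard followed by a CNOT turns |00〉 into the Bell state that serves as the entangled resource of the protocol.

```python
import numpy as np

# Gate matrices referenced by the figures
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])                  # Pauli X (bit flip)
Z = np.array([[1, 0], [0, -1]])                 # Pauli Z (phase flip)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Hadamard on the first qubit, then CNOT, turns |00> into the Bell
# state (|00> + |11>) / sqrt(2).
ket00 = np.array([1, 0, 0, 0])
bell = CNOT @ np.kron(H, I2) @ ket00
print(bell)  # amplitudes ~0.707 on |00> and |11>
```

The Pauli gates X, Z and their product XZ, applied conditionally to qubit B, are exactly the four corrections listed in the teleportation protocol above.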

Conclusion

As discussed, quantum teleportation allows the transfer of quantum information between two remote locations by means of particle entanglement. This makes it possible to implement quantum communication and computing systems.

Although for the moment its experimental realization is limited to a very small number of particles, from a theoretical point of view it can be applied to macroscopic objects, which raises the possibility of applying it to transport objects of classical reality, even objects of a biological nature.

However, as has been analyzed, the application of teleportation to macroscopic objects poses a difficulty as a consequence of quantum decoherence, which implies the appearance of errors in the transferred quantum information.

On the other hand, quantum teleportation does not involve overcoming the limitations imposed by the theory of relativity, so the fictitious idea of using quantum teleportation as a means of transferring macroscopic objects at a distance instantaneously is not an option. But in addition, it must be considered that quantum entanglement of biological objects may not be compatible with what is defined as life.

[1] C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres and W. K. Wootters, “Teleporting an Unknown Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels,” Phys. Rev. Lett., vol. 70, pp. 1895-1899, 1993.
[2] D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter and A. Zeilinger, “Experimental quantum teleportation,” arXiv:1901.11004v1 [quant-ph], 1997.
[3] D. Boschi, S. Branca, F. De Martini, L. Hardy and S. Popescu, “Experimental Realization of Teleporting an Unknown Pure Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels,” Physical Review Letters, vol. 80, no. 6, pp. 1121-1125, 1998.
[4] W. K. Wootters and W. H. Zurek, “A Single Quantum Cannot be Cloned,” Nature, vol. 299, pp. 802-803, 1982.
[5] D. Dieks, “Communication by EPR devices,” Physics Letters A, vol. 92, no. 6, pp. 271-272, 1982.
[6] E. Schrödinger, “Probability Relations between Separated Systems,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 32, no. 3, pp. 446-452, 1936.
[7] A. Einstein, B. Podolsky and N. Rosen, “Can Quantum-Mechanical Description of Physical Reality be Considered Complete?,” Physical Review, vol. 47, pp. 777-780, 1935.
[8] J. S. Bell, “On the Einstein Podolsky Rosen Paradox,” Physics, vol. 1, no. 3, pp. 195-200, 1964.
[9] A. Aspect, P. Grangier and G. Roger, “Experimental Tests of Realistic Local Theories via Bell’s Theorem,” Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.
[10] H. D. Zeh, “On the Interpretation of Measurement in Quantum Theory,” Found. Phys., vol. 1, no. 1, pp. 69-76, 1970.
[11] T. Liu, “The Applications and Challenges of Quantum Teleportation,” Journal of Physics: Conference Series, vol. 1634, no. 1, 2020.
[12] Z.-H. Yan, J.-L. Qin, Z.-Z. Qin, X.-L. Su, X.-J. Jia, C.-D. Xie and K.-C. Peng, “Generation of non-classical states of light and their application in deterministic quantum teleportation,” Fundamental Research, vol. 1, no. 1, pp. 43-49, 2021.
[13] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2011.

What is the nature of time?

The concept of time is possibly one of the greatest mysteries of nature. Its nature has always been a subject of debate both from the point of view of philosophy and from that of physics. But this has taken on special relevance as a consequence of the development of the theory of relativity, which marked a turning point in the perception of space-time.

Throughout history, different philosophical theories have been put forward on the nature of time [1], although the greatest development has taken place from the twentieth century onwards, mainly due to advances in physics. Worth mentioning is McTaggart’s argument against the reality of time [2], according to which time does not exist and the perception of a temporal order is merely an appearance, a thesis that has had a great influence on philosophical thought.

However, McTaggart’s argument is based on the ordering of events as we perceive them. From this idea several philosophical theories have been developed, such as the A-theory, B-theory, C-theory and D-theory [3]. This philosophical development, however, is based on abstract reasoning, without relying on the knowledge provided by physical models, which raises questions of an ontological nature.

Thus, both relativity theory and quantum theory show that emergent reality is an observable reality; in the case of space-time, both the spatial and the temporal coordinates are observable parameters emerging from an underlying reality. For time this raises the question: does the fact that something is past, present or future imply that it is something real? And consequently: how does the reality shown by physics connect with the philosophical theses?

If we focus on an analysis based on physical knowledge, there are two fundamental aspects in the conception of time. The first and most obvious is the perception of the passage of time, on which the idea of past, present and future is based, and which Arthur Eddington called the arrow of time [4], highlighting its irreversibility. The second is what Carlo Rovelli [5] defines as “loss of unity”, referring to space-time relativity, which makes past, present and future arbitrary concepts, based on the perception of physical events.

But, in addition to using physical criteria in the analysis of the nature of time, it seems necessary to analyze it from the point of view of information theory [6], which allows an abstract approach that overcomes the limitations imposed by the secrets locked in the underlying reality. This is possible because any element of reality must have an abstract representation, i.e. a representation by information; otherwise it could not be perceived by any means, be it a sensory organ or a measuring device, and so it would not be an element of reality.

The topology of time

From the Newtonian point of view, the dynamics of classical systems develops in a space-time of four dimensions, three spatial dimensions (x,y,z) and one temporal dimension (t), so that the state of the system can be expressed as a function f(q,p,t) of the generalized coordinates q and the generalized momenta p, where q and p are tuples (ordered lists of coordinates and momenta) that determine the state of each of the elements composing the system.

Thus, for a system of point particles, the state of each particle is determined by the coordinates of its position q = (x,y,z) and of its momentum p = (mẋ, mẏ, mż). This representation is very convenient, since it allows systems to be analyzed by means of continuous functions of time. However, it can lead to a misinterpretation, since treating time as a mathematical variable suggests that it is reversible. This becomes clear if the dynamics of the system is represented as a sequence of states, which according to quantum theory has a discrete nature [7], and which can be expressed for a classical system (CS) as:

        CS = {… Si-2(qi-2,pi-2), Si-1(qi-1,pi-1), Si(qi,pi), Si+1(qi+1,pi+1), Si+2(qi+2,pi+2), …}

According to this representation, we define the past as the sequence {… Si-2(qi-2,pi-2), Si-1(qi-1,pi-1)}, the future as the sequence {Si+1(qi+1,pi+1), Si+2(qi+2,pi+2),…} and the present as the state Si(qi,pi). The question that arises is: do the sequences {… Si-3(qi-3,pi-3), Si-2(qi-2,pi-2), Si-1(qi-1,pi-1)} and {Si+1(qi+1,pi+1), Si+2(qi+2,pi+2), Si+3(qi+3,pi+3),…} have real existence? Or, on the contrary, are they the product of the perception of the emergent reality?
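As an illustration, the sequence CS can be generated explicitly for a simple system. The unit-mass harmonic oscillator and the symplectic integration step below are illustrative choices, not taken from the text; the point is only that the dynamics is a list of (q,p) states ordered by an index:

```python
def step(q, p, dt=0.01, k=1.0, m=1.0):
    """Symplectic Euler step: produce the next state of the sequence
    for a unit-mass harmonic oscillator (force F = -k q)."""
    p = p - k * q * dt        # update momentum from the force
    q = q + (p / m) * dt      # update position from the new momentum
    return q, p

# Build the sequence CS = {S0, S1, S2, ...} as explicit (q, p) states.
CS = [(1.0, 0.0)]
for i in range(1000):
    CS.append(step(*CS[-1]))

# Each state S_i is indexed by its position in the sequence, not by a
# free "time axis": the index i is what orders the dynamics.
q500, p500 = CS[500]
```

The index of the list plays the role of the instant i: the "past" and "future" of any state are simply the sub-lists on either side of it.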

In the case of a quantum system, its state is represented by its wave function Ψ(q), the tensor product of the wave functions of the components of the system:

        Ψ(q,t) = Ψ(q1,t) ⊗ Ψ(q2,t) ⊗ … ⊗ Ψ(qi,t) ⊗ … ⊗ Ψ(qn,t)

Thus, the dynamics of the system can be expressed as a discrete sequence of states:

        QS = {… Ψi-2(qi-2), Ψi-1(qi-1), Ψi(qi), Ψi+1(qi+1), Ψi+2(qi+2), …}

As in the case of the classical system, Ψi(q) would represent the present state, while {… Ψi-2(q), Ψi-1(q)} represents the past and {Ψi+1(q), Ψi+2(q), …} the future, although, as will be discussed later, this interpretation is questionable.

However, it is essential to emphasize that the sequences of the classical system CS and of the quantum system QS have, from the point of view of information theory, a characteristic that makes their nature, and therefore their interpretation, different. Quantum systems have a reversible nature, since their dynamics is determined by unitary transformations [8], so all the states of the sequence contain the same amount of information. In other words, their entropy remains constant throughout the sequence:

        H(Ψi(qi)) = H(Ψi+1(qi+1)).

In contrast, classical systems are irreversible [9], so the amount of information of the sequence states grows systematically, such that:

        H(Si(qi,pi)) < H(Si+1(qi+1,pi+1)).
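The contrast between the two behaviors can be illustrated with a classical information-theoretic toy model (using Shannon entropy rather than the von Neumann entropy of the quantum formalism): a reversible map, here a permutation of outcomes, preserves entropy, while an irreversible stochastic mixing increases it. A minimal sketch, with an arbitrary example distribution:

```python
from math import log

def shannon_entropy(p):
    """Shannon entropy H(p), in bits, of a discrete distribution."""
    return -sum(x * log(x, 2) for x in p if x > 0)

p = [0.7, 0.2, 0.1]
H0 = shannon_entropy(p)

# A reversible map (a permutation of outcomes) preserves information:
# the entropy stays constant, as with unitary quantum dynamics.
p_perm = [p[2], p[0], p[1]]
H_reversible = shannon_entropy(p_perm)

# An irreversible (doubly stochastic, non-permutation) map mixes the
# outcomes: the entropy can only grow, as in classical dynamics.
M = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]
p_mix = [sum(M[i][j] * p[j] for j in range(3)) for i in range(3)]
H_irreversible = shannon_entropy(p_mix)
```

Here H_reversible equals H0 exactly, while H_irreversible is strictly larger: the mixed states are distinguishable by their entropy, the permuted ones are not.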

Concerning the entropy increase of classical systems, the post “An interpretation of the collapse of the wave function” analyzed the nature of entropy growth from Pauli’s master equation [10], which shows that quantum reality is a source of information emerging towards classical reality. However, this demonstration is abstract in nature and provides no clue as to how this occurs physically, so it remains a mystery. Obviously, the entropy growth of classical systems implies that there must be a source of information and, as has been argued, this source is quantum reality.

This makes the states of the classical sequence distinguishable, establishing a directional order. On the contrary, the states of the quantum system are not distinguishable, since they all contain the same information, because quantum theory has a reversible nature. Here a crucial point must be made, linked to the process of observation of quantum states, which may lead us to think that this interpretation is not correct. The classical states emerge as a consequence of the interaction of the quantum components of the system, which may suggest that the quantum states are distinguishable; but in fact the states that are distinguishable are the emerging classical states.

From this reasoning the following conclusion can be drawn: time is a property that emerges from quantum reality as a consequence of the fact that the classical states of the system are distinguishable. This also establishes what has been called the arrow of time, since the sequence of states carries a distinguishing characteristic, the entropy of the system.

This also makes it possible to hypothesize that time has an observable existence only at the classical level, while at the quantum level the dynamics of the system would not be subject to the concept of time and would be determined by other mechanisms. In principle this may seem contradictory, since the time variable appears explicitly in the formulation of quantum mechanics. In reality, this would be nothing more than a mathematical contraption that allows a quantum model to be expressed at the boundary separating the quantum system from the classical system, and thus to describe classical reality from the quantum mathematical model. In this sense, the quantum model should be regarded as no more than a mathematical model of the emergent reality that arises from an underlying nature, for the moment unknown, which new models, such as string theory, try to interpret.

An argument supporting this idea is also found in the theory of loop quantum gravity (LQG) [11], which is defined as a background-independent theory, meaning that it is not embedded in a space-time structure, and which posits that space and time emerge at distances of about 10 times the Planck length [12].

The arrow of time

When analyzing the sequences of states CS and QS we have alluded to past, present and future, which would be emergent concepts determined by the evolution of the entropy of the system. This seems clear in classical reality. But, as reasoned above, the sequence of quantum states is indistinguishable, so it would not be possible to establish the concepts of past, present and future there.

A fundamental aspect that must be overcome is the influence of the Newtonian view of the interpretation of time. Thus, in the fundamental equation of dynamics:

        F = m d²x/dt²

time enters only through a second derivative, so the equation does not distinguish t from −t; it is the same backward or forward in time, and the dynamics of the system is reversible. This in its day led to Laplace’s causal determinism, which remained in force until the development of statistical mechanics and Boltzmann’s interpretation of the concept of entropy. To this we must add that, throughout the twentieth century, scientific development has led to the conclusion that physics, both classical and quantum, cannot be completely deterministic [13].
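The time symmetry of the fundamental equation of dynamics can be checked numerically. The sketch below uses a velocity-Verlet integrator (a time-symmetric scheme) for a unit-mass oscillator, an illustrative choice not taken from the text: running the dynamics forward, reversing the momentum and running it again retraces the trajectory exactly, because nothing in the equation marks a direction of time.

```python
def verlet(q, p, n, dt=0.01):
    """Velocity-Verlet integration of q'' = -q (unit mass, unit spring).
    The scheme is time-reversible, mirroring the fact that the equation
    of motion does not distinguish t from -t."""
    for _ in range(n):
        a = -q
        q = q + p * dt + 0.5 * a * dt * dt
        a_new = -q
        p = p + 0.5 * (a + a_new) * dt
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = verlet(q0, p0, 500)        # run the dynamics forward
q2, p2 = verlet(q1, -p1, 500)       # reverse the momentum and run again
# (q2, -p2) recovers (q0, p0): the equation is blind to the arrow of time.
```

This is exactly the "cursor on the time axis" picture criticized in the text: it works for the ideal equation, but not for real systems subject to thermodynamic constraints.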

Therefore, it can be said that the development of calculus and the use of a continuous time variable (t) in the determination of dynamical processes has been fundamental and very fruitful for the development of physics. However, it must be concluded that this is a mathematical contraption that does not reflect the true nature of time. When a trajectory is represented on coordinate axes, the sensation is created that time can be reversed at will, seemingly justified by the reversibility of the processes.

However, classical processes are always subject to thermodynamic constraints, which make them irreversible. For an isolated system this means that its state evolves in such a way that its entropy, and therefore the amount of information describing the system, grows steadily, so that a future state cannot be reverted to a past state. Consequently, if the state of the system is represented as a function of time, it could be thought that the time variable could be reverted as if a cursor were moved along the time axis; but this does not seem to have physical reality, since the growth of entropy is not compatible with such an operation.

To further examine the idea of moving in time as if it were an axis or a cursor, consider the evolution of a reversible system, which can reach a certain state Si, continue to evolve and, after a certain interval, reach the state Si again. This does not mean that time has been reversed, but rather that time always advances in the direction of the dynamics of the system; the only thing that happens is that the state of the system returns reversibly to a past state. In classical systems, however, this is only a hypothetical proposal, since reversible systems are ideal systems free of thermodynamic behavior, such as gravitational, electromagnetic and frictionless mechanical systems. That is to say, ideal models that do not interact with an underlying reality.
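The thermodynamic irreversibility discussed above can be made concrete with a toy model: a deterministic discrete diffusion in which probability spreads step by step along a chain, so that the Shannon entropy of the state grows monotonically with the sequence index. The chain length and step count below are arbitrary illustrative choices:

```python
from math import log

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(x * log(x, 2) for x in p if x > 0)

# Discrete diffusion on a line: at each step, the probability at each
# site splits equally to its two neighbours -- an irreversible,
# coarse-grained dynamics in the spirit of thermodynamic constraints.
N = 101
p = [0.0] * N
p[N // 2] = 1.0                     # all probability in one state: H = 0

entropies = [entropy(p)]
for _ in range(30):
    new = [0.0] * N
    for i in range(N):
        if i > 0:
            new[i - 1] += 0.5 * p[i]
        if i < N - 1:
            new[i + 1] += 0.5 * p[i]
    p = new
    entropies.append(entropy(p))
# entropies grows monotonically: the index of the sequence orders the
# states, and no step can be undone without lowering the entropy.
```

Unlike the reversible oscillator, no state of this sequence is ever revisited: the entropy values themselves label the direction of the sequence.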

In short, the state of a system is a sequence ordered by an index that grows systematically. The idea of a time axis, although it allows us to visualize and treat systems intuitively, should therefore be discarded, since it leads to a misconception of the nature of time. Time is not a free variable, but the perception of a sequence of states.

Returning to the concepts of past, present and future: according to information theory, the present is supported by the state Si(qi,pi) and is therefore part of classical reality. For the sequence of past states {… Si-3(qi-3,pi-3), Si-2(qi-2,pi-2), Si-1(qi-1,pi-1)} to be a classical reality, these states would have to continue to exist physically, which is impossible, since it would require an increase of information in the system that is not in accordance with the increase of its entropy; the past is therefore also a purely perceptual concept. Moreover, if this were possible the system would be reversible.

In the case of the future sequence of states {Si+1(qi+1,pi+1), Si+2(qi+2,pi+2),…}, it cannot be a classical reality, since it occurs with a degree of uncertainty that makes it unpredictable. Even supposing prediction were possible, the states of the present would have to contain additional information to hold accurate forecasts of the future, which would increase their entropy, in disagreement with observable reality. The concept of the future is therefore not a classical reality but a purely perceptual concept. In short, it can be concluded that the only classical reality is the state of the present.

The relativistic context

Consequently, classical physics offers a vision of reality as a continuous sequence of states, while quantum physics modifies it, establishing that the dynamics of systems is a discrete sequence of states, the classical view being no more than an appearance at the macroscopic level. The theory of relativity [14] modifies the classical view further, so that the description of a system is a sequence of events; if we add the quantum view, the description of the system is a discrete sequence of events.

But, in addition, the theory of relativity offers a perspective in which the perception of time depends on the reference frame and therefore on the observer. Thus, as the following figure shows, clocks in motion run slower than stationary clocks, so that we can no longer speak of a single time sequence: it depends on the observer.

However, this does not modify the hypothesis put forward, which is to consider time as the perception of a sequence of states or events. This reinforces the idea that time emerges from an underlying reality and that its perception varies according to how it is observed. Thus, each observer has an independent view of time, determined by a sequence of events.

In addition to the relative perception of time, the theory of relativity has deeper implications, since it establishes a link between space and time, such that the relativistic interval

        ds² = c²dt² – dx² – dy² – dz² = c²dt² – (dx² + dy² + dz²)

is invariant and therefore takes the same value in any reference frame.

As a consequence, both the perception of time and that of space depend on the observer and, as the following figure shows, events that are simultaneous in one reference frame are observed at different instants of time in another reference frame, where they are therefore not simultaneous, giving rise to the concept of relativity of simultaneity.
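Both properties, the invariance of the interval and the relativity of simultaneity, can be checked numerically with the standard Lorentz transformation. A minimal sketch in 1+1 dimensions, in units where c = 1 (the event coordinates and the boost speed are arbitrary illustrative values):

```python
from math import sqrt

c = 1.0                             # work in units where c = 1

def boost(t, x, v):
    """Lorentz boost of an event (t, x) to a frame moving at speed v."""
    g = 1.0 / sqrt(1.0 - v * v / c**2)
    return g * (t - v * x / c**2), g * (x - v * t)

def interval2(t, x):
    """Squared relativistic interval ds^2 for a 1+1-dimensional event."""
    return c**2 * t**2 - x**2

# Two events simultaneous in the original frame (both at t = 0)...
e1, e2 = (0.0, 0.0), (0.0, 5.0)
b1, b2 = boost(*e1, 0.6), boost(*e2, 0.6)
# ...are no longer simultaneous in the boosted frame (b1[0] != b2[0]),
# while the interval between them takes the same value in both frames.
ds2_rest = interval2(e2[0] - e1[0], e2[1] - e1[1])
ds2_boost = interval2(b2[0] - b1[0], b2[1] - b1[1])
```

The two frames disagree on the time coordinates of the events, yet agree exactly on ds², which is what makes the interval, and not simultaneity, the frame-independent quantity.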

In spite of this behavior, the view of time as the perception of a sequence of events is not modified: although the sequences of events in the different reference frames are correlated, each reference frame has its own sequence of events, which will be interpreted as the flow of time corresponding to that observer.

The above arguments are valid for inertial reference frames, i.e. free of acceleration. However, the theory of general relativity [15], based on the principles of covariance and equivalence, establishes the metric of the deformation of space-time in the presence of matter-energy and how this deformation acts as a gravitational field. These principles are defined as:

  • The Covariance Principle states that the laws of physics must take the same form in all reference frames.
  • The Equivalence Principle states that a system subjected to a gravitational field is indistinguishable from a non-inertial reference frame (subjected to acceleration).

It should be noted that, although the equivalence principle was fundamental in the development of general relativity, it is not a fundamental ingredient, and is not verified in the presence of electromagnetic fields. 

It follows from the theory of general relativity that gravity bends space-time, paradigmatic examples being the gravitational redshift of photons escaping from a gravitational field, or gravitational lensing. For this reason, it is essential to analyze the concept of time perception from this perspective as well.

Thus, the following figure shows a round trip to Andromeda by a spacecraft propelled with acceleration a = g. It shows the time course t in the Earth reference frame and the proper time T in the spacecraft reference frame, such that the time course in the spacecraft is slower than on Earth, by a factor γ determined by the motion. Whether the difference in the time course is produced by the velocity of the spacecraft in an inertial frame or by its acceleration does not modify the reasoning used throughout the text, since the time course is determined exclusively, in each reference frame, by the sequence of events observed independently in each of them.

Therefore, it can be concluded that the perception of time is produced by the sequence of events occurring in the observer’s reference frame. To avoid possible anthropic interpretations, we can take as observer an entity endowed with the ability to detect events and to run artificial intelligence (AI) algorithms. It can then be concluded that such an entity will develop a concept of time based on the sequence of events; evidently, the concept developed will not be reversible, since the sequence is ordered by an index.

If the event-detection mechanisms were not sufficiently accurate, the entity could deduce that the dynamics of the process is cyclic and therefore reversible. Nevertheless, the sequence of events is ordered and will therefore be interpreted as flowing in a single direction.

Thus, identical entities located in different reference frames will perceive different sequences of events of the dynamics, determined by the laws of relativity. But the underlying reality sets a mark on each of the events, which is defined here as physical time, and to which the observing entities are inexorably subject in their real clocks. The question that remains to be answered is therefore what the nature of this behavior is.

Physical time

So far, the term perception has been used to sidestep this issue. But it is clear that, although real clocks run at different rates in different reference frames, all clocks are perfectly synchronized. For this to be possible, a total connection of the universe in its underlying reality is necessary: the clocks located in the different reference frames run synchronously, regardless of their location, even though they run at different rates.

Thus, in the example of the trip to Andromeda, when the ship returns to Earth the elapsed time of the trip is t = 153.72 years in the Earth reference frame and T = 16.92 years on the ship’s clock, but both clocks are synchronized by the factor γ, so that they run according to the expression dt = γdT. The question arises: what indications are there that the underlying reality of the universe is a fully connected structure?
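The relation between the two clocks can be sketched with the standard relativistic-rocket formulas for constant proper acceleration, starting from rest. The distance used below (a 10-light-year leg at 1 g) is an illustrative assumption and does not reproduce the numbers of the figure; the point is only that coordinate time always exceeds proper time:

```python
from math import sqrt, acosh

# Standard relativistic-rocket relations for constant proper
# acceleration a over a coordinate distance d, starting from rest.
c = 299_792_458.0                   # speed of light, m/s
g = 9.81                            # proper acceleration, m/s^2
ly = 9.4607e15                      # metres per light-year

def times(d, a=g):
    """Return (coordinate time, proper time) in seconds for distance d."""
    t = sqrt((d / c) ** 2 + 2.0 * d / a)          # Earth-frame time
    tau = (c / a) * acosh(1.0 + a * d / c**2)     # ship proper time
    return t, tau

t, tau = times(10 * ly)             # illustrative 10-light-year leg at 1 g
# Earth clocks record more elapsed time than the ship's clock: the two
# sequences of events are correlated, but run at different rates.
```

Whatever distance is chosen, t > tau always holds, which is the synchronized-but-different-rate behavior described above.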

There are several physical clues arising from relativistic and quantum physics, such as space-time in the photon reference frame and quantum particle entanglement. In the case of the photon, γ→∞, so that any interval of time and of space in the direction of motion in the observer’s reference frame tends to zero in the reference frame of the photon. If we further consider that the state of the photon is a superposition of states in any direction, the universe for a photon is a singular point without space-time dimensions. This suggests that space-time arises from an underlying reality from which time emerges as a completely cosmologically synchronized reality.
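The limit γ→∞ can be made tangible numerically: as the speed approaches c, a unit length in the direction of motion, contracted by 1/γ, shrinks toward zero, and the same happens to proper-time intervals. A minimal sketch with arbitrary sample speeds:

```python
from math import sqrt

def gamma(beta):
    """Lorentz factor as a function of beta = v/c."""
    return 1.0 / sqrt(1.0 - beta * beta)

# As beta approaches 1, gamma diverges, so a unit length in the
# direction of motion (contracted to 1/gamma) tends to zero -- in the
# limit, the photon "sees" no space-time extension at all.
lengths = [1.0 / gamma(b) for b in (0.9, 0.99, 0.999999)]
```

Each successive speed contracts the unit length further, with no lower bound other than zero.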

In the context of quantum physics, particle entanglement provides another clue to the interconnection of the structure on which classical reality rests. The measurement of two entangled particles implies the exchange of quantum information between them, instantaneously and independently of their position in space, as deduced from the superposition of quantum states and as Schrödinger posed in the thought experiment of “Schrödinger’s cat” [16]. This behavior seems to contradict the impossibility of transferring information faster than the speed of light, which raised the controversy known as the EPR paradox [17], since resolved theoretically and experimentally [18], [19].

Therefore, at the classical scale information cannot travel faster than the speed of light, whereas at the quantum scale reality behaves as if there were no space-time constraints. This indicates that space and time are realities that emerge at the classical scale but have no quantum reality; space-time at the classical scale emerges from a quantum reality that is so far unknown.

But perhaps the argument that most clearly supports the global interconnectedness of space-time is the Covariance Principle, which explicitly recognizes this interconnectedness by stating that the laws of physics must take the same form in all reference frames.

Finally, there remains the question of the underlying nature of space-time. In the current state of development of physics we have the Standard Model of particle physics, which describes the quantum interactions between particles in the context of space-time. In this theoretical scheme, space-time is identified with the vacuum, which in quantum field theory is the quantum state of lowest possible energy; but this model does not seem to allow a theoretical analysis of how space-time emerges. Perhaps the development of a field model that gives sense to the physical reality of the vacuum and integrates the Standard Model will in the future make it possible to investigate how space-time reality emerges from it.

[1] N. Emery, N. Markosian and M. Sullivan, “Time,” The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2020/entries/time/>. [Online].
[2] J. M. E. McTaggart, “The Unreality of Time,” Mind, vol. 17, no. 68, pp. 457-474, 1908. URL = <http://www.jstor.org/stable/2248314>.
[3] S. Baron, K. Miller and J. Tallant, Out of Time. A Philosophical Study of Timelessness, Oxford University Press, 2022.
[4] A. S. Eddington, The Nature of the Physical World, Cambridge University Press, 1948.
[5] C. Rovelli, The Order of Time, Riverhead Books, 2018.
[6] C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, 1948.
[7] P. Ball, Designing the Molecular World, Princeton University Press, 1994.
[8] L. E. Ballentine, Quantum Mechanics. A Modern Development. Chapter 3, World Scientific Publishing Co., 2000.
[9] A. Ben-Naim, A Farewell to Entropy: Statistical Thermodynamics Based on Information, World Scientific Publishing Co., 2008.
[10] F. Schwabl, Statistical Mechanics, pp. 491-494, Springer, 2006.
[11] A. Ashtekar and E. Bianchi, “A Short Review of Loop Quantum Gravity,” 2021. URL = <arXiv:2104.04394v1 [gr-qc]>.
[12] L. Smolin, “The case for background independence,” 2005. URL = <https://arxiv.org/abs/hep-th/0507235v1>. [Online].
[13] I. Reznikoff, “A class of deductive theories that cannot be deterministic: classical and quantum physics are not deterministic,” 2013. URL = <https://arxiv.org/abs/1203.2945v3>. [Online].
[14] A. Einstein, “On the Electrodynamics of Moving Bodies,” 1905.
[15] T. P. Cheng, Relativity, Gravitation and Cosmology, Oxford University Press, 2010.
[16] E. Schrödinger, “The Present Situation in Quantum Mechanics” (trans. John Trimmer), Naturwissenschaften, vol. 23, pp. 844-849, 1935.
[17] A. Einstein, B. Podolsky and N. Rosen, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?,” Physical Review, vol. 47, pp. 777-780, 1935.
[18] J. S. Bell, “On the Einstein Podolsky Rosen Paradox,” Physics, vol. 1, no. 3, pp. 195-200, 1964.
[19] A. Aspect, P. Grangier and G. Roger, “Experimental Tests of Realistic Local Theories via Bell’s Theorem,” Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.

Consciousness from the point of view of AI

The self-awareness of human beings, which constitutes the concept of consciousness, has been and continues to be an enigma faced by philosophers, anthropologists and neuroscientists. Perhaps most suggestive is the fact that consciousness is a central concept in human behavior and yet, despite being aware of it, we find no explanation for it.

Without going into details, until the modern age the concept of consciousness had deep roots in the concept of the soul and in religious beliefs, often attributing the differentiation of human nature from other species to divine intervention.

The modern age saw a substantial change, based first on Descartes’ “cogito ergo sum” (“I think, therefore I am”) and later on the model proposed by Kant, which is structured around what are known as “transcendental arguments” [1].

Subsequently, a variety of schools of thought have developed, among which dualistic, monistic, materialistic and neurocognitive theories stand out. In general terms, these theories focus on the psychological and phenomenological aspects that describe conscious reality; in the case of neurocognitive theories, neurological evidence is a fundamental pillar. Ultimately, however, all these theories are abstract in nature and have so far failed to provide a formal justification of consciousness, of how a “being” can develop conscious behavior, or of concepts such as morality and ethics.

One aspect that these models deal with, and that calls the concept of the “cogito” into question, is the change of behavior produced by brain damage, which in some cases can be re-educated; this shows that the brain and its learning processes play a fundamental role in consciousness.

In this regard, advances in Artificial Intelligence (AI) [2] highlight the formal foundations of learning, by which an algorithm can acquire knowledge and in which neural networks are now a fundamental component. For this reason, the use of this new knowledge can shed light on the nature of consciousness.

The Turing Test paradigm

To analyze what mechanisms may support consciousness, we can start with the Turing Test [3], in which a machine is tested to see whether it shows behavior similar to that of a human being.

Without going into the definition of the Turing Test, we can assimilate this concept to that of a chatbot, as shown in Figure 1, which gives an intuitive idea of the concept. We can go further by considering its implementation, which requires the availability of a huge number of dialogues between humans with which to train the model using Deep Learning techniques [4]. And although it may seem strange, obtaining the dialogues is the most laborious part of the process.

Figure 1. Schematic of the Turing Test

Once the chatbot has been trained, we can ask about its behavior from a psychophysical point of view. The answer seems quite obvious: although it can show very complex behavior, this will always be reflex behavior, even though the interlocutor may deduce that the chatbot has feelings and even behaves intelligently. The latter is a controversial issue because of the difficulty of defining what constitutes intelligent behavior, highlighted by the questions: intelligent? Compared to what?

But the Turing Test only aims to determine the ability of a machine to show human-like behavior, without going into the analysis of the mechanisms to establish this functionality.

In the case of humans, these mechanisms can be classified into two categories: genetic learning and neural learning.

Genetic learning

Genetic learning is based on the capacity of biology to establish functions adapted to the processing of the surrounding reality. Expressed in this way it may not seem an obvious or convincing argument, but DNA computing [5] is a formal demonstration of the capability of biological learning. The evolution of capabilities acquired through this process is based on trial and error, which is inherent to learning; biological evolution is accordingly a slow process, as nature shows.
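The trial-and-error character of this kind of learning can be illustrated with a toy evolutionary search: a population of bit-strings evolves toward a target through random mutation and selection. This is a generic sketch of evolutionary search, not a model of DNA computing; the target, population size and mutation rate are arbitrary illustrative choices:

```python
import random

random.seed(1)                      # fixed seed for reproducibility

TARGET = [1] * 20                   # an arbitrary target genome

def fitness(genome):
    """Number of positions matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Trial and error: each bit flips with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
best_history = []
for generation in range(60):
    population.sort(key=fitness, reverse=True)
    best_history.append(fitness(population[0]))
    survivors = population[:10]                    # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]  # variation
```

Because the best individuals are always kept, fitness never decreases across generations, yet progress relies entirely on random trials, which is what makes this style of learning slow.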

Instinctive reactions are based on genetic learning, so that all species of living beings are endowed with certain faculties that require no significant subsequent training. Examples are the survival instinct, the reproductive instinct, and the maternal and paternal instincts. These functions are located in the inner layers of the brain, which humans share with other vertebrates.

We will not go into details of neuroscience [6], since the only thing that interests us in this analysis is to highlight two fundamental aspects of the brain’s neural structures: their functional specialization and their plasticity. Structure, plasticity and specialization are determined by genetic factors, so that the inner layers, such as the limbic system, have a very specialized functionality and require little training to be functional. In contrast, the outer structures, located in the neocortex, are very plastic, and their functionality is strongly influenced by learning and experience.

Thus, genetic learning is responsible for structure, plasticity and specialization, whereas neural learning is intimately linked to the plastic functionality of neural tissue.

A clear example of functional specialization based on genetic learning is the space-time processing that we share with the rest of higher living beings and that is located in the limbic system. This endows the brain with structures dedicated to the establishment of a spatial map and the processing of temporal delay, which provides the ability to establish trajectories in advance, vital for survival and for interacting with spatio-temporal reality.

This functionality has a high degree of automaticity, which makes its functional capacity effective from the moment of birth. However, this is not exactly the case in humans, since these neural systems function in coordination with the neocortex, which requires a high degree of neural training.

Thus, for example, this functional specialization precludes visualizing and intuitively understanding geometries of more than three spatial dimensions, something that humans can only deal with abstractly at a higher level by means of the neocortex, which has a plastic functionality and is the main support for neural learning.

It is interesting to note that the functionality of the neocortex, whose response time is longer than that of the lower layers, can interfere with the reaction of automatic functions. This is clearly evident in the loss of concentration in activities that require a high degree of automatism, as occurs in certain sports. It means that, in addition to appropriate physical capacity and well-trained automatic processing, elite athletes require specific psychological preparation.

This applies to all sensory systems, such as vision, hearing, balance, in which genetic learning determines and conditions the interpretation of information coming from the sensory organs. But as this information ascends to the higher layers of the brain, the processing and interpretation of the information is determined by neural learning.

This is what differentiates humans from the rest of the species, being endowed with a highly developed neocortex, which provides a very significant neural learning capacity, from which the conscious being seems to emerge.

Nevertheless, there is solid evidence of the ability to feel and to have a certain level of consciousness in some species. This is what has triggered a movement for legal recognition of feelings in certain species of animals, and even recognition of personal status for some species of hominids.

Neural learning: AI as a source of intuition

Currently, AI comprises a set of mathematical strategies that are grouped under different names depending on their characteristics. Thus, Machine Learning (ML) is made up of classical mathematical algorithms, such as statistical methods, decision trees, clustering, support vector machines, etc. Deep Learning, on the other hand, is inspired by the functioning of neural tissue and exhibits complex behavior that approximates certain human capabilities.

In the current state of development of this discipline, designs are limited to the implementation and training of systems for specific tasks, such as automatic diagnostic systems, assistants, chatbots, games, etc., so these systems are grouped under what is called Artificial Narrow Intelligence.

The perspective offered by this new knowledge makes it possible to establish three major categories within AI:

  • Artificial Narrow Intelligence: AI systems designed and trained for specific tasks.
  • Artificial General Intelligence: AI systems with a capacity similar to that of human beings.
  • Artificial Super Intelligence: self-aware AI systems with a capacity equal to or greater than that of human beings.

The implementation of the neural networks used in Deep Learning is inspired by the functionality of neurons and neural tissue, as shown in Figure 2 [7]. The nerve stimuli coming from the axon terminals that connect to the dendrites (synapses) are weighted and processed according to the functional configuration that the neuron has acquired through learning, producing a nerve stimulus that propagates to other neurons through the neuron's own axon terminals.

Figure 2. Structure of a neuron and mathematical model

Artificial neural networks are structured by creating layers of the mathematical neuron model, as shown in Figure 3. A fundamental issue in this model is determining the mechanisms needed to establish the weighting parameters Wi in each of the units that form the neural network. Biological neural mechanisms might serve as a guide here. However, although there is a very general idea of how the functionality of the synapses is configured, how functionality is established at the level of the whole neural network is still a mystery.

Figure 3. Artificial Neural Network Architecture

In the case of artificial neural networks, mathematics has found a solution that makes it possible to establish the Wi values, by means of what is known as supervised learning. This requires having a dataset in which each element represents a stimulus Xi and the response to that stimulus, Yi. Thus, once the Wi values have been randomly initialized, the training phase proceeds by presenting each stimulus Xi and comparing the response with the corresponding Yi. The errors produced are propagated backwards by means of an algorithm known as backpropagation.

Through the sequential application of the elements of a training set belonging to the dataset over several sessions, a state of convergence is reached in which the neural network achieves an appropriate degree of accuracy, verified by means of a validation set: elements of the dataset that are not used for training.
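
This training loop can be sketched in a deliberately minimal form (a toy network on the XOR problem; the network size, learning rate and number of sessions are illustrative choices, not the implementation of any particular system):

```python
import math, random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Dataset: each element is a stimulus Xi with its expected response Yi (XOR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Randomly initialized weights Wi: 3 hidden units and 1 output, with biases.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_o = [random.uniform(-1, 1) for _ in range(4)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(3)) + w_o[3])
    return h, o

def total_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial_error = total_error()
lr = 0.5
for session in range(5000):               # training sessions over the dataset
    for x, y in data:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)       # output error
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(3)]
        for i in range(3):                # backward propagation of the errors
            w_o[i] -= lr * d_o * h[i]
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]
        w_o[3] -= lr * d_o

final_error = total_error()               # error decreases as the net converges
```

In a real setting the resulting accuracy would then be checked against a separate validation set, as described above.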

An example makes the nature of the elements of a dataset much more intuitive. In a dataset used to train autonomous driving systems, each Xi corresponds to an image in which patterns of different types of vehicles, pedestrians, public roads, etc. appear. Each image has an associated category Yi, which specifies the patterns appearing in it. It should be noted that, in the current state of development of AI systems, the dataset is built by humans, so learning is supervised and requires significant resources.

In unsupervised learning the category Yi is generated automatically, although this approach is still at a very early stage of development. A very illustrative example is the AlphaZero program developed by DeepMind [8]: learning is performed by providing the program with the rules of the game (chess, go, shogi) and having it play matches against itself, so that the moves and the results configure the pairs (Xi, Yi). The neural network is continuously updated with these results, sequentially improving its behavior and therefore the new pairs (Xi, Yi), until it reaches a superhuman level of play.

It is important to note that, in the case of higher living beings, unsupervised learning takes place through the interaction of the afferent (sensory) neuronal system and the efferent (motor) neuronal system. Although from a functional point of view there are no substantial differences, this interaction takes place at two levels, as shown in Figure 4:

  • The interaction with the inanimate environment.
  • Interaction with other living beings, especially of the same species.

The first level of interaction provides knowledge about physical reality. The second level, by contrast, allows the establishment of survival habits and, above all, social habits. In the case of humans, this level acquires great importance and complexity, since from it emerge concepts such as morality and ethics, as well as the capacity to accumulate and transmit knowledge from generation to generation.

Figure 4. Structure of unsupervised learning

Consequently, unsupervised learning is based on the recursion of afferent and efferent systems. This means that, unlike the models used in Deep Learning, which are unidirectional, unsupervised AI systems require the implementation of two independent systems: an afferent system that produces a response from a stimulus, and an efferent system that, based on the response, corrects the behavior of the afferent system by means of a reinforcement technique.
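
The afferent/efferent loop can be illustrated with a deliberately simple reinforcement sketch (the environment, the preference values and the update rule are hypothetical choices for illustration, not a model of real neural tissue):

```python
import math, random

random.seed(1)

# Environment: response 1 is rewarded more often than response 0.
def environment(action):
    return 1.0 if random.random() < (0.8 if action == 1 else 0.2) else 0.0

pref = [0.0, 0.0]          # the afferent system's adjustable "weights"
alpha = 0.1                # strength of the efferent correction

def afferent_response():
    # Stimulus -> response: softmax-like choice over current preferences.
    e = [math.exp(p) for p in pref]
    return 0 if random.random() * (e[0] + e[1]) < e[0] else 1

for step in range(2000):
    a = afferent_response()            # afferent: produce a response
    reward = environment(a)            # interaction with the environment
    pref[a] += alpha * (reward - 0.5)  # efferent: reinforcement correction

# After the loop, the more frequently rewarded response is preferred.
```

The point of the sketch is the recursion: the response feeds back, through the reward, into the system that produced it.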

What is the foundation of consciousness?

Two fundamental aspects can be deduced from the development of AI:

  • The learning capability of algorithms.
  • The need for afferent and efferent structures to support unsupervised learning.

On the other hand, it is known that traumatic processes in the brain, or pathologies associated with aging, can produce changes in personality and conscious perception. This clearly indicates that these functions are located in the brain and supported by neural tissue.

But it is necessary to rely on anthropology to have a more precise idea of what the foundations of consciousness are and how they have developed in human beings. Thus, a direct correlation can be observed between the cranial capacity of a hominid species and its abilities, social organization, spirituality and, above all, its abstract perception of the surrounding world. This correlation is clearly determined by the size of the neocortex and can be observed, to a lesser extent, in other species such as primates, which show a capacity for emotional pain, a structured social organization and a certain degree of abstract learning.

According to all of the above, it could be concluded that consciousness emerges from the learning capacity of the neural tissue and would be achieved as the structural complexity and functional resources of the brain acquire an appropriate level of development. But this leads directly to the scenario proposed by the Turing Test, in such a way that we would obtain a system with a complex behavior indistinguishable from a human, which does not provide any proof of the existence of consciousness. 

To understand this, we can ask how a human comes to the conclusion that all other humans are self-aware. In reality, there is no argument for reaching this conclusion, since at most one could check that they pass the Turing test. A human concludes that other humans have consciousness by resemblance to itself: by introspection, a human knows itself to be self-aware, and since the rest of humans are similar, it concludes that they are self-aware too.

Ultimately, the only answer that can be given to what is the basis of consciousness is the introspection mechanism of the brain itself. In the unsupervised learning scheme, the afferent and efferent mechanisms that allow the brain to interact with the outside world through the sensory and motor organs have been highlighted. However, to this model we must add another flow of information, as shown in Figure 5, which enhances learning and corresponds to the interconnection of neuronal structures of the brain that recursively establish the mechanisms of reasoning, imagination and, why not, consciousness.

Figure 5. Mechanism of reasoning and imagination.

This statement may seem radical, but if we reflect on it we will see that the only difference between imagination and consciousness is that the capacity of humans to identify themselves raises existential questions that are difficult to answer, but which, from the point of view of information processing, require the same resources as reasoning or imagination.

But how can this hypothesis be verified? One possible solution would be to build a system based on learning technologies that would confirm the hypothesis, but would this confirmation be accepted as true, or would it simply be decided that the system verifies the Turing Test?

[1]Stanford Encyclopedia of Philosophy, “Kant’s View of the Mind and Consciousness of Self,” 8 Oct 2020. [Online]. Available: https://plato.stanford.edu/entries/kant-mind/. [Accessed: 6 Jun 2021].
[2]S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Pearson, 2021.
[3]A. M. Turing, “Computing Machinery and Intelligence,” Mind, vol. LIX, no. 236, pp. 433-460, 1950.
[4]C. C. Aggarwal, Neural Networks and Deep Learning, Springer, 2018.
[5]L. M. Adleman, “Molecular computation of solutions to combinatorial problems,” Science, vol. 266, no. 5187, pp. 1021-1024, 1994.
[6]E. R. Kandel, J. D. Koester, S. H. Mack and S. A. Siegelbaum, Principles of Neural Science, McGraw Hill, 2021.
[7]F. Rosenblatt, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain,” Psychological Review, vol. 65, no. 6, pp. 386-408, 1958.
[8]D. Silver, T. Hubert and J. Schrittwieser, “DeepMind,” [Online]. Available: https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go. [Accessed: 6 Jun 2021].

The unreasonable effectiveness of mathematics

In the post “What is the nature of mathematics?”, the dilemma of whether mathematics is discovered or invented by humans was presented, but so far no convincing evidence has been provided in either direction.

A more profound way of approaching the issue is the one posed by Eugene P. Wigner [1], who asks about the unreasonable effectiveness of mathematics in the natural sciences.

According to Roger Penrose this poses three mysteries [2] [3], identifying three distinct “worlds”: the world of our conscious perception, the physical world and the Platonic world of mathematical forms. Thus:

  • The world of physical reality seems to obey laws that actually reside in the world of mathematical forms.  
  • The perceiving minds themselves – the realm of our conscious perception – have managed to emerge from the physical world.
  • Those same minds have been able to access the mathematical world by discovering, or creating, and articulating a capital of mathematical forms and concepts.

The effectiveness of mathematics has two different aspects. An active one in which physicists develop mathematical models that allow them to accurately describe the behavior of physical phenomena, but also to make predictions about them, which is a striking fact.

Even more extraordinary, however, is the passive aspect of mathematics, such that the concepts that mathematicians explore in an abstract way end up being the solutions to problems firmly rooted in physical reality.

But this view of mathematics has detractors especially outside the field of physics, in areas where mathematics does not seem to have this behavior. Thus, the neurobiologist Jean-Pierre Changeux notes [4], “Asserting the physical reality of mathematical objects on the same level as the natural phenomena studied in biology raises, in my opinion, a considerable epistemological problem. How can an internal physical state of our brain represent another physical state external to it?”

Obviously, it seems that analyzing the problem using case studies from different areas of knowledge does not allow us to establish formal arguments to reach a conclusion about the nature of mathematics. For this reason, an abstract method must be sought to overcome these difficulties. In this sense, Information Theory (IT) [5], Algorithmic Information Theory (AIT) [6] and Theory of Computation (TC) [7] can be tools of analysis that help to solve the problem.

What do we understand by mathematics?

The question may seem obvious, but mathematics is structured in multiple areas: algebra, logic, calculus, etc., and the truth is that when we refer to the success of mathematics in the field of physics, it underlies the idea of physical theories supported by mathematical models: quantum physics, electromagnetism, general relativity, etc.

However, when these mathematical models are applied in other areas they do not seem to have the same effectiveness, for example in biology, sociology or finance, which seems to contradict the experience in the field of physics.

For this reason, a fundamental question is to analyze how these models work and what are the causes that hinder their application outside the field of physics. To do this, let us imagine any of the successful models of physics, such as the theory of gravitation, electromagnetism, quantum physics or general relativity. These models are based on a set of equations defined in mathematical language, which determine the laws that control the described phenomenon, which admit analytical solutions that describe the dynamics of the system. Thus, for example, a body subjected to a central attractive force describes a trajectory defined by a conic.

This functionality is a powerful analysis tool, since it makes it possible to analyze systems under hypothetical conditions and to reach conclusions that can later be verified experimentally. But beware! This success masks a reality that often goes unnoticed: in general, the scenarios in which a model admits an analytical solution are very limited. Thus, the gravitational model does not admit an analytical solution when the number of bodies is n ≥ 3 [8], except in very specific cases such as the so-called Lagrange points. Moreover, the system is very sensitive to the initial conditions, so that small variations in these conditions can produce large deviations in the long term.
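
This sensitivity can be illustrated numerically (a rough sketch using naive Euler integration, unit masses, illustrative initial conditions and a softening term to avoid the singularity at zero distance; not an accurate ephemeris): two three-body simulations whose initial conditions differ by one part in a million end up on measurably different trajectories.

```python
# Three gravitating unit masses in the plane, G = 1 (illustrative units).
def simulate(perturb):
    pos = [[0.0 + perturb, 0.0], [1.0, 0.0], [0.5, 0.8]]
    vel = [[0.0, 0.4], [0.0, -0.4], [0.3, 0.0]]
    dt, eps = 0.001, 0.01            # time step and softening term
    for _ in range(20000):           # integrate up to t = 20
        acc = [[0.0, 0.0] for _ in range(3)]
        for i in range(3):
            for j in range(3):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy + eps) ** 1.5
                acc[i][0] += dx / r3
                acc[i][1] += dy / r3
        for i in range(3):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

# Same system, initial positions differing by 1e-6 in one coordinate.
a, b = simulate(0.0), simulate(1e-6)
divergence = sum(abs(a[i][k] - b[i][k]) for i in range(3) for k in range(2))
```

The tiny initial perturbation is amplified by the dynamics, which is the practical obstacle to long-term prediction mentioned above.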

This is a fundamental characteristic of nonlinear systems and, although the system is governed by deterministic laws, its behavior is chaotic. Without going into details that are beyond the scope of this analysis, this is the general behavior of the cosmos and everything that happens in it.

One case that can be considered extraordinary is the quantum model which, according to the Schrödinger equation or the Heisenberg matrix model, is a linear and reversible model. However, the information that emerges from quantum reality is stochastic in nature.  

In short, the models that describe physical reality only have an analytical solution in very particular cases. For complex scenarios, particular solutions to the problem can be obtained by numerical series, but the general solution of any mathematical proposition is obtained by the Turing Machine (TM) [9].

This model can be represented in abstract form by the concatenation of three mathematical objects ⟨x, y, z⟩ (bit sequences) which, when executed on a Turing machine, TM(⟨x, y, z⟩), determine the solution. Thus, for example, in the case of electromagnetism, the object z corresponds to the description of the boundary conditions of the system, y to the definition of Maxwell’s equations, and x to the formal definition of the mathematical calculus. TM is the Turing machine, defined by a finite set of states. Therefore, the problem is reduced to the treatment of a set of bits ⟨x, y, z⟩ according to the axiomatic rules defined in TM, which in the optimal case can be reduced to a machine with three states (plus the HALT state) and two symbols (bits).
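
The idea that a small set of axiomatic rules processing a sequence of symbols suffices to compute can be made concrete with a minimal Turing machine interpreter (the example machine, which increments a binary number, is an illustrative choice, not the minimal machine mentioned above):

```python
# Minimal Turing machine interpreter: the transition table plays the role of
# the axiomatic rules, the tape plays the role of the bit sequence.
def run_tm(rules, tape, state="start", blank="_", max_steps=10000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine: binary increment (move to the rightmost bit, then carry).
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "L", "done"),
    ("done",  "0"): ("0", "L", "HALT"),
    ("done",  "1"): ("1", "L", "HALT"),
    ("done",  "_"): ("_", "L", "HALT"),
}
result = run_tm(rules, "1011")   # binary 11 incremented to binary 12
```

Here the program, the rules and the data are all finite symbol sequences, which is the sense in which the problem reduces to processing ⟨x, y, z⟩.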

Nature as a Turing machine

And here we return to the starting point. How is it possible that reality can be represented by a set of bits and a small number of axiomatic rules?

Prior to the development of IT, the concept of information had no formal meaning, as evidenced by its classic dictionary definition. In fact, until communication technologies began to develop, words such as “send” referred exclusively to material objects.

However, everything that happens in the universe is interaction and transfer, and in the case of humans the most elaborate medium for this interaction is natural language, which we consider to be the most important milestone on which cultural development is based. It is perhaps for this reason that in the debate about whether mathematics is invented or discovered, natural language is used as an argument.

But TC shows that natural language is not formal, not being defined on axiomatic grounds, so that arguments based on it may be of questionable validity. And it is here that IT and TC provide a broad view on the problem posed.

In a physical system each of the component particles has physical properties and a state, in such a way that when it interacts with the environment it modifies its state according to its properties, its state and the external physical interaction. This interaction process is reciprocal and as a consequence of the whole set of interactions the system develops a temporal dynamics.

Thus, for example, the dynamics of a particle is determined by the curvature of space-time which indicates to the particle how it should move and this in turn interacts with space-time, modifying its curvature.

In short, a system has a description that is distributed in each of the parts that make up the system. Thus, the system could be described in several different ways:

  • As a set of TMs interacting with each other. 
  • As a TM describing the total system.
  • As a TM partially describing the global behavior, showing emergent properties of the system.

The fundamental conclusion is that the system is a Turing machine. Therefore, the question is not whether the mathematics is discovered or invented or to ask ourselves how it is possible for mathematics to be so effective in describing the system. The question is how it is possible for an intelligent entity – natural or artificial – to reach this conclusion and even to be able to deduce the axiomatic laws that control the system.

The justification must be based on the fact that it is nature that imposes the functionality and not the intelligent entities that are part of nature. Nature is capable of developing any computable functionality, so that among other functionalities, learning and recognition of behavioral patterns is a basic functionality of nature. In this way, nature develops a complex dynamic from which physical behavior, biology, living beings, and intelligent entities emerge.

As a consequence, nature has created structures that are able to identify its own patterns of behavior, such as physical laws, and ultimately identify nature as a Universal Turing Machine (UTM). This is what makes physical interaction consistent at all levels. Thus, in the above case of the ability of living beings to establish a spatio-temporal map, this allows them to interact with the environment; otherwise their existence would not be possible. Obviously this map corresponds to a Euclidean space, but if the living being in question were able to move at speeds close to light, the map learned would correspond to the one described by relativity.

A view beyond physics

While TC, IT and AIT are the theoretical support that allows sustaining this view of nature, advances in computer technology and AI are a source of inspiration, showing how reality can be described as a structured sequence of bits. This in turn enables functions such as pattern extraction and recognition, complexity determination and machine learning.

Despite this, fundamental questions remain to be answered, in particular what happens in those cases where mathematics does not seem to have the same success as in the case of physics, such as biology, economics or sociology. 

Many of the arguments used against the previous view are based on the fact that the description of reality in mathematical terms, or rather, in terms of computational concepts does not seem to fit, or at least not precisely, in areas of knowledge beyond physics. However, it is necessary to recognize that very significant advances have been made in areas such as biology and economics.

Thus, knowledge of biology shows that the chemistry of life is structured in several overlapping languages:

  • The language of nucleic acids, consisting of an alphabet of 4 symbols that encodes the structure of DNA and RNA.
  • The codon language, consisting of an alphabet of 64 symbols (triplets of nucleic-acid letters) that encodes the 20 amino acids from which proteins are built. The translation process for protein synthesis is carried out by means of a correspondence between the two languages.
  • The language of the intergenic regions of the genome. Their functionality is still to be clarified, but everything seems to indicate that they are responsible for the control of protein production in different parts of the body, through the activation of molecular switches. 
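
The correspondence between these overlapping languages can be sketched with a tiny fragment of the standard genetic code (only a handful of the 64 codons are listed, and the RNA sequence is an arbitrary example):

```python
# Fragment of the standard genetic code: codons (words of length 3 over the
# 4-letter nucleic-acid alphabet) mapped to amino acids or a stop signal.
codon_table = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(rna):
    protein = []
    for i in range(0, len(rna) - 2, 3):   # read the sequence codon by codon
        aa = codon_table[rna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

peptide = translate("AUGUUUGGCAAAUAA")    # Met-Phe-Gly-Lys, then stop
```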

On the other hand, protein structure prediction by deep learning techniques is solid evidence associating biology with TC [10]. It should also be emphasized that biology, as an information process, must verify the laws of logic, in particular the recursion theorem [11], so DNA replication must be performed in at least two phases by independent processes.

In the case of economics there have been relevant advances since the 1980s, with the development of computational finance [12]. As a paradigmatic example we will focus on financial markets, which can serve to test, in an environment far removed from physics, the hypothesis that nature behaves as a Turing machine.

Basically, financial markets are a space, which can be physical or virtual, through which financial assets are exchanged between economic agents and in which the prices of such assets are defined.

A financial market is governed by the law of supply and demand. In other words, when an economic agent wants something at a certain price, it can only buy it at that price if another agent is willing to sell it at that price.

Traditionally, economic agents were individuals but, with the development of complex computer applications, these applications now also act as economic agents, both supervised and unsupervised, giving rise to different types of investment strategies.

This system can be modeled by a Turing machine that emulates all the economic agents involved, or as a set of Turing machines interacting with each other, each of which emulates an economic agent.
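
The market's basic axiomatic rule, that a trade occurs only when a bid meets an ask, can be sketched with hypothetical agents (the names, prices and the at-the-ask pricing convention are invented for the example):

```python
# Each entry is (limit price, agent): the price a buyer will pay,
# or the price a seller will accept.
bids = [(101.0, "agent_A"), (99.5, "agent_B")]
asks = [(100.0, "agent_C"), (102.0, "agent_D")]

def match(bids, asks):
    trades = []
    bids = sorted(bids, reverse=True)            # best (highest) bid first
    asks = sorted(asks)                          # best (lowest) ask first
    while bids and asks and bids[0][0] >= asks[0][0]:
        bid, ask = bids.pop(0), asks.pop(0)
        trades.append((bid[1], ask[1], ask[0]))  # trade at the ask price
    return trades

trades = match(bids, asks)
```

Modeling the full system would further require emulating how each agent arrives at its limit price, which is precisely where the difficulty described below lies.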

The definition of this model requires implementing the axiomatic rules of the market, as well as the functionality of each of the economic agents, which allow them to determine the purchase or sale prices at which they are willing to negotiate. This is where the problem lies, since this depends on very diverse and complex factors, such as the availability of information on the securities traded, the agent’s psychology and many other factors such as contingencies or speculative strategies.

In brief, this makes emulation of the system impossible in practice. It should be noted, however, that brokers and automated applications can gain a competitive advantage by identifying global patterns, or even by insider trading, although this practice is punishable by law in suitably regulated markets.

The question that can be raised is whether this impossibility of precise emulation invalidates the hypothesis put forward. If we return to the case study of Newtonian gravitation, determined by the central attractive force, it can be observed that, although functionally different, it shares a fundamental characteristic that makes emulation of the system impossible in practice and that is present in all scenarios. 

If we intend to emulate the solar system, we must determine the position, velocity and angular momentum of all the celestial bodies involved: sun, planets, dwarf planets, planetoids and satellites, as well as the rest of the bodies in the system, such as the asteroid belt, the Kuiper belt and the Oort cloud, together with the dispersed mass and energy. In addition, the shape and structure of solid, liquid and gaseous bodies must be determined, and the effects of collisions that modify the structure of the resulting bodies must be considered. Finally, physicochemical activity, such as geological, biological and radiation phenomena, must be taken into account, since these modify the structure and dynamics of the bodies and are subject to quantum phenomena, another source of uncertainty. And even then the model is not adequate, since a relativistic model must be applied.

This makes accurate emulation impossible in practice, as demonstrated by the continuous corrections in the ephemerides of GPS satellites, or the adjustments of space travel trajectories, where the journey to Pluto by NASA’s New Horizons spacecraft is a paradigmatic case.

Conclusions

From the previous analysis it can be hypothesized that the universe is an axiomatic system governed by laws that determine a dynamic that is a consequence of the interaction and transference of the entities that compose it.

As a consequence of the interaction and transfer phenomena, the system itself can partially and approximately emulate its own behavior, which gives rise to learning processes and finally gives rise to life and intelligence. This makes it possible for living beings to interact in a complex way with the environment and for intelligent entities to observe reality and establish models of this reality.

This gave rise to abstract representations such as natural language and mathematics. With the development of IT [5], it is concluded that all objects can be represented by a set of bits, which can be processed by axiomatic rules [7] and which, optimally encoded, determine the complexity of the object, defined as its Kolmogorov complexity [6].
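
Kolmogorov complexity itself is not computable, but a general-purpose compressor gives an upper bound on it, which offers a rough, hedged illustration of what "optimally encoded" means: a structured object admits a much shorter description than a random one.

```python
import random
import zlib

random.seed(0)

regular = b"01" * 5000                                      # highly structured
noise = bytes(random.getrandbits(8) for _ in range(10000))  # pseudo-random

# Compressed length is an upper bound on the (uncomputable) Kolmogorov
# complexity of each object, up to the constant size of the decompressor.
c_regular = len(zlib.compress(regular, 9))
c_noise = len(zlib.compress(noise, 9))
# The structured sequence compresses to a tiny description;
# the random one remains close to its original length.
```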

The development of TC establishes that these models can be defined as a TM, so that in the limit it can be hypothesized that the universe is equivalent to a Turing machine, and that the limits of reality may go beyond the universe itself, in what is defined as the multiverse, which would be equivalent to a UTM. This correspondence between the universe and a TM supports the hypothesis that the universe is nothing more than information processed by axiomatic rules.

Therefore, from the observation of natural phenomena we can extract the laws of behavior that constitute the abstract models (axioms), as well as the information necessary to describe particular cases of reality (information). Since this representation is made on a physical reality, it will always be approximate, so that only the universe can emulate itself. Since the universe is consistent, the models only corroborate this fact. But, reciprocally, the equivalence between the universe and a TM implies that deductions made from consistent models must be satisfied by reality.

However, everything seems to indicate that this way of perceiving reality is distorted by the senses, since at the level of classical reality what we observe are the consequences of the processes that occur at this functional level, appearing concepts such as mass, energy, inertia.

But when we explore the layers that support classical reality, this perception disappears, since our senses do not have the direct capability for its observation, in such a way that what emerges is nothing more than a model of axiomatic rules that process information, and the physical sensory conception disappears. This would justify the difficulty to understand the foundations of reality.

It is sometimes speculated that reality may be nothing more than a complex simulation, but this poses a problem, since in such a case a support for its execution would be necessary, implying the existence of an underlying reality necessary to support such a simulation [13].

There are two aspects that have not been dealt with and that are of transcendental importance for the understanding of the universe. The first concerns irreversibility in the layer of classical reality. According to AIT, the amount of information in a TM remains constant, so the irreversibility of thermodynamic systems, which do not verify this property, is an indication that these systems are open, an aspect to which physics must provide an answer.

The second is related to the no-cloning theorem. Quantum systems are reversible and, according to the no-cloning theorem, it is not possible to make exact copies of the unknown quantum state of a particle. But according to the recursion theorem, at least two independent processes are necessary to make a copy. This would mean that in the quantum layer there are no two independent processes available to copy such a quantum state. An alternative explanation would be that these quantum states have a non-computable complexity.

Finally, it should be noted that the question of whether mathematics was invented or discovered by humans is flawed by an anthropic view of the universe, which considers humans as a central part of it. But it must be concluded that humans are a part of the universe, as are all the entities that make up the universe, particularly mathematics.

References

[1]E. P. Wigner, “The unreasonable effectiveness of mathematics in the natural sciences.,” Communications on Pure and Applied Mathematics, vol. 13, no. 1, pp. 1-14, 1960.
[2]R. Penrose, The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford: Oxford University Press, 1989.
[3]R. Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe, London: Jonathan Cape, 2004.
[4]J.-P. Changeux and A. Connes, Conversations on Mind, Matter, and Mathematics, Princeton N. J.: Princeton University Press, 1995.
[5]C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 1948.
[6]P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002v1 [cs.IT], 2008.
[7]M. Sipser, Introduction to the Theory of Computation, Course Technology, 2012.
[8]H. Poincaré, New Methods of Celestial Mechanics, Springer, 1992.
[9]A. M. Turing, “On computable numbers, with an application to the Entscheidungsproblem.,” Proceedings, London Mathematical Society, pp. 230-265, 1936.
[10]A. W. Senior, R. Evans and e. al., “Improved protein structure prediction using potentials from deep learning,” Nature, vol. 577, pp. 706-710, Jan 2020.
[11]S. Kleene, “On Notation for ordinal numbers,” J. Symbolic Logic, no. 3, p. 150–155, 1938.
[12]A. Savine, Modern Computational Finance: AAD and Parallel Simulations, Wiley, 2018.
[13]N. Bostrom, “Are We Living in a Computer Simulation?,” The Philosophical Quarterly, vol. 53, no. 211, p. 243–255, April 2003.

What is the nature of mathematics?

The ability of mathematics to describe the behavior of nature, particularly in the field of physics, is a surprising fact, especially when one considers that mathematics is an abstract entity created by the human mind, disconnected from physical reality. But if mathematics is an entity created by humans, how is this precise correspondence possible?

Throughout centuries this has been a topic of debate, focusing on two opposing ideas: Is mathematics invented or discovered by humans?

This question has divided the scientific community: philosophers, physicists, logicians, cognitive scientists and linguists; not only is there no consensus, but positions are often diametrically opposed. Mario Livio, in the essay “Is God a Mathematician?” [1], describes broadly and precisely the historical development of the subject, from the Greek philosophers to our days.

The aim of this post is to analyze this dilemma, introducing new analysis tools such as Information Theory (IT) [2], Algorithmic Information Theory (AIT) [3] and Computation Theory (CT) [4], without forgetting the perspective offered by new knowledge about Artificial Intelligence (AI).

In this post we will briefly review the current state of the issue, without going into its historical development, trying to identify the difficulties that hinder its resolution; subsequent posts will analyze the problem from a non-conventional perspective, using the logical tools offered by the above theories.

Currents of thought: invented or discovered?

In a very simplified way, it can be said that at present the position that mathematics is discovered by humans is headed by Max Tegmark, who states in “Our Mathematical Universe” [5] that the universe is a purely mathematical entity; this would explain why mathematics describes reality with precision, since reality itself would be a mathematical entity.

On the other extreme, there is a large group of scientists, including cognitive scientists and biologists who, based on the fact of the brain’s capabilities, maintain that mathematics is an entity invented by humans.

Max Tegmark: Our Mathematical Universe

In neither case are there arguments that tip the balance toward one of the hypotheses. Max Tegmark maintains that the definitive theory (Theory of Everything) cannot include concepts such as “subatomic particles”, “vibrating strings”, “space-time deformation” or other man-made constructs. Therefore, the only possible description of the cosmos involves only abstract concepts and the relations between them, which for him constitute the operative definition of mathematics.

This reasoning assumes that the cosmos has a nature completely independent of human perception and that its behavior is governed exclusively by such abstract concepts. This view of the cosmos seems correct insofar as it eliminates any anthropic view of the universe, humans being only a part of it. However, it does not justify that physical laws and abstract mathematical concepts are the same entity.

In the case of those who maintain that mathematics is an entity invented by humans, the arguments do not usually have a formal structure and it could be said that in many cases they correspond more to a personal position and sentiment. An exception is the position maintained by biologists and cognitive scientists, in which the arguments are based on the creative capacity of the human brain and which would justify that mathematics is an entity created by humans.

For them, mathematics does not really differ from natural language, so mathematics would be just another language; its conception would be nothing more than the idealization and abstraction of elements of the physical world. However, this approach presents several difficulties for concluding that mathematics is an entity invented by humans.

On the one hand, it does not provide formal criteria for its demonstration. It also presupposes that the ability to learn is an attribute exclusive to humans, a crucial point that will be addressed in later posts. In addition, natural language is used as a central concept, without taking into account that any interaction, whatever its nature, is carried out through language, as shown by CT [4], which is itself a theory of language.

Consequently, it can be concluded that neither current of thought presents conclusive arguments about the nature of mathematics. It therefore seems necessary to analyze the cause of this from new points of view, since physical reality and mathematics appear intimately linked.

Mathematics as a discovered entity

In the case that considers mathematics the very essence of the cosmos, and therefore that mathematics is an entity discovered by humans, the argument is the equivalence of mathematical models with physical behavior. But for this argument to be conclusive, the Theory of Everything should be developed, in which the physical entities would be strictly of a mathematical nature. This means that reality would be supported by a set of axioms and the information describing the model, the state and the dynamics of the system.

This implies a dematerialization of physics, something that seems to be happening as the deeper structures of physics are uncovered. Thus, the particles of the standard model are nothing more than abstract entities with observable properties. This could be the key, and there is a hint in Landauer’s principle [6], which establishes a minimum energy cost for the erasure of information, linking information and energy.

But solving the problem by physical means or, more precisely, by contrasting mathematical models with reality presents a fundamental difficulty. In general, mathematical models describe the functionality of a certain context or layer of reality, and they share a common characteristic: these models are irreducible and disconnected from the underlying layers. The deepest functional layer would therefore have to be unraveled, which from the point of view of AIT and CT is a non-computable problem.

Mathematics as an invented entity

The current of opinion in favor of mathematics being an entity invented by humans is based on natural language and on the brain’s ability to learn, imagine and create. 

But this argument has two fundamental weaknesses. On the one hand, it does not provide formal arguments to conclusively demonstrate the hypothesis that mathematics is an invented entity. On the other hand, it attributes properties to the human brain that are a general characteristic of the cosmos.

The Hippocampus: A paradigmatic example of the dilemma discovered or invented

To clarify this last point, let us take as an example the invention of the natural numbers by humans, which is usually used to support this view. Now imagine an animal interacting with its environment: as a basic means of survival, it has to interpret space-time accurately. Obviously, the animal must have learned or invented a space-time map, something much more complex than the natural numbers.

Moreover, nature has provided, or invented, the hippocampus [7], a neuronal structure specialized in acquiring long-term information, organized as a recurrent neuronal network well suited to handling the space-time map and to resolving trajectories. And of course this structure is physical and encoded in the genome of higher animals. The question is: has this structure been discovered or invented by nature?

Regarding the use of language as an argument, it should be noted that language is the means of interaction in nature at all functional levels. Thus, biology is a language, and the interaction between particles is formally a language, although this point requires a deeper analysis for its justification. Natural language, in particular, is a non-formal, non-axiomatic language, which makes it inconsistent.

Finally, in relation to the learning capability attributed to the brain, this is a fundamental characteristic of nature, as demonstrated by mathematical models of learning and evidenced in an incipient manner by AI.

Another way of approaching the question about the nature of mathematics is through Wigner’s enigma [8], which asks about the inexplicable effectiveness of mathematics. This topic, and those opened above, will be dealt with and expanded in later posts.

References

[1] M. Livio, Is God a Mathematician?, New York: Simon & Schuster Paperbacks, 2009.
[2] C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 1948.
[3] P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002v1 [cs.IT], 2008.
[4] M. Sipser, Introduction to the Theory of Computation, Course Technology, 2012.
[5] M. Tegmark, Our Mathematical Universe: My Quest For The Ultimate Nature Of Reality, Knopf Doubleday Publishing Group, 2014.
[6] R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” IBM J. Res. Dev., vol. 5, pp. 183-191, 1961.
[7] S. Jacobson and E. M. Marcus, Neuroanatomy for the Neuroscientist, Springer, 2008.
[8] E. P. Wigner, “The unreasonable effectiveness of mathematics in the natural sciences,” Communications on Pure and Applied Mathematics, vol. 13, no. 1, pp. 1-14, 1960.

Reality as an information process

The purpose of physics is the description and interpretation of physical reality based on observation. To this end, mathematics has been a fundamental tool for formalizing this reality through models, which in turn have allowed predictions to be made that have subsequently been experimentally verified. This creates an astonishing connection between reality and abstract logic that suggests the existence of a deep relationship beyond its conceptual definition. In fact, the ability of mathematics to accurately describe physical processes can lead us to think that reality is nothing more than a manifestation of a mathematical world.

But perhaps it is necessary to define in greater detail what we mean by this. Usually, when we refer to mathematics we think of concepts such as theorems or equations. However, we can have another view of mathematics as an information processing system, in which the above concepts can be interpreted as a compact expression of the behavior of the system, as shown by the algorithmic information theory [1].
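This view of equations as compact expressions of behavior can be sketched with a toy example (the `free_fall` rule and its numbers are ours, purely illustrative): a physical law regenerates an arbitrarily long record of observations from a description only a few bytes long.

```python
# Hedged sketch: a "law" as a compact description of behavior (AIT view).
# The rule below regenerates an arbitrarily long table of observations,
# so its textual size is a tiny fraction of the data it reproduces.

def free_fall(t, g=9.81):
    """Idealized free-fall distance after t seconds (illustrative only)."""
    return 0.5 * g * t * t

observations = [free_fall(t) for t in range(1000)]  # a long "record"

rule_text = "lambda t: 0.5 * 9.81 * t * t"  # the law, written out
data_text = repr(observations)              # the raw record, written out

print(len(rule_text), len(data_text))  # the rule is orders of magnitude shorter
```

The record can be made as long as desired while the rule stays fixed, which is the sense in which a theorem or equation is the compressed form of a system's behavior.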

In this way, physical laws determine how the information that describes the system is processed, establishing a space-time dynamic. As a consequence, a parallelism is established between the physical system and the computational system that, from an abstract point of view, are equivalent. This equivalence is somewhat astonishing, since in principle we assume that both systems belong to totally different fields of knowledge.

But apart from this fact, we can ask what consequences can be drawn from this equivalence. In particular, computability theory [2] and information theory [3] [1] provide criteria for determining the computational reversibility and complexity of a system [4]. In particular:

  • In a reversible computing system (RCS) the amount of information remains constant throughout the dynamics of the system.
  • In a non-reversible computational system (NRCS) the amount of information never increases along the dynamics of the system.
  • The complexity of the system corresponds to its most compact expression, called the Kolmogorov complexity, which is an absolute measure.
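These criteria can be illustrated with a bit-level toy model (our choice of gates, not part of the original argument): a reversible gate such as CNOT is a bijection on its inputs, while an irreversible gate such as AND merges distinct inputs and thus discards information.

```python
def cnot(a, b):
    """Reversible gate: (a, b) -> (a, a XOR b), a bijection on bit pairs."""
    return (a, a ^ b)

def and_gate(a, b):
    """Irreversible gate: two input bits collapse onto one output bit."""
    return a & b

pairs = [(a, b) for a in (0, 1) for b in (0, 1)]

# CNOT preserves information: four distinct inputs give four distinct outputs.
print(len({cnot(a, b) for a, b in pairs}))      # 4

# AND discards information: four inputs map onto only two outputs.
print(len({and_gate(a, b) for a, b in pairs}))  # 2
```

Applying CNOT twice recovers the original pair, so nothing is lost along the dynamics; no such inverse exists for AND.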

It is important to note that in an NRCS information is not lost, but explicitly discarded. This means that there is no fundamental reason why such information could not be maintained, since the complexity of an RCS remains constant. In practice, the implementation of computer systems is non-reversible in order to optimize resources, as a consequence of technological limitations. In fact, the energy currently needed for computation is much higher than the limit established by Landauer’s principle [5].
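For reference, the Landauer limit itself is easy to compute (room temperature T = 300 K is our assumption, and the ~1 fJ CMOS switching energy is only an order-of-magnitude illustration):

```python
import math

k_B = 1.380649e-23                       # Boltzmann constant, J/K (exact, SI)
T = 300.0                                # assumed room temperature, K
landauer_bound = k_B * T * math.log(2)   # minimum energy to erase one bit

print(f"{landauer_bound:.3e} J per erased bit")   # ~2.87e-21 J

# A typical CMOS switching energy of ~1 fJ sits several orders of
# magnitude above the thermodynamic bound.
print(f"{1e-15 / landauer_bound:.1e}")
```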

If we focus on the analysis of reversible physical systems, such as quantum mechanics, relativity, Newtonian mechanics or electromagnetism, we can observe invariant physical magnitudes that are a consequence of computational reversibility. These are determined by unitary mathematical processes, which means that every process has an inverse process [6]. But the difficulties in understanding reality from the point of view of mathematical logic seem to arise immediately, thermodynamics and quantum measurement being paradigmatic examples.
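As a minimal numerical sketch of this unitarity (using the standard Hadamard operator; the state chosen is arbitrary), a unitary map preserves the norm of the state and can always be undone by its inverse:

```python
import math

s = 1 / math.sqrt(2)

def hadamard(v):
    """Apply the 2x2 Hadamard operator, a unitary (and self-inverse) map."""
    a, b = v
    return (s * (a + b), s * (a - b))

def norm(v):
    return math.sqrt(sum(abs(z) ** 2 for z in v))

state = (0.6, 0.8j)            # an arbitrary normalized qubit state
evolved = hadamard(state)

# Unitary evolution preserves the norm (total probability) ...
print(norm(state), norm(evolved))          # both 1.0

# ... and is exactly invertible: applying the inverse recovers the state.
recovered = hadamard(evolved)              # H is its own inverse
print(all(abs(x - y) < 1e-12 for x, y in zip(recovered, state)))  # True
```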

In the case of quantum measurement, the state of the system before the measurement is a superposition of states; when the measurement is made, the state collapses into one of the possible states of the system [7]. This means that the quantum measurement scenario corresponds to a non-reversible computational system: the information in the system decreases when the superposition disappears, making the process non-reversible as a consequence of this loss of information.
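This information balance can be sketched numerically (a toy model assuming an equal-weight superposition of two outcomes): before measurement the outcome distribution carries one bit of entropy; after the collapse the state is definite and the entropy is zero.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before measurement: outcome distribution of (|0> + |1>)/sqrt(2).
before = [0.5, 0.5]
# After collapse: one outcome holds with certainty.
after = [1.0, 0.0]

print(shannon_entropy(before))   # 1.0 bit
print(shannon_entropy(after))    # 0.0 bits: the superposition is gone
```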

This implies that physical reality systematically loses information, which poses two fundamental contradictions. The first is the fact that quantum mechanics is a reversible theory and that observable reality is based on it. The second is that this loss of information contradicts the systematic increase of classical entropy, which in turn poses a deeper contradiction, since in classical reality there is a spontaneous increase of information, as a consequence of the increase of entropy.

The solution to the first contradiction is relatively simple if we eliminate the anthropic vision of reality. In general, the process of quantum measurement introduces the concept of observer, which creates a degree of subjectivity that is important to clarify, since it can lead to misinterpretations. In this process there are two clearly separated layers of reality, the quantum layer and the classical layer, which have already been addressed in previous posts. Quantum measurement involves two quantum systems: one that we define as the system to be measured, and another corresponding to the measurement system, which can be considered a quantum observer; both have a quantum nature. As a result of this interaction, classical information emerges, and this is where the classical observer is located, who can be identified, for example, with a physicist in a laboratory.

Now consider the measurement structured in two blocks: the quantum system under observation, and the measurement system comprising both the quantum observer and the classical observer. In this interpretation the quantum system under measurement is an open quantum system that loses quantum information in the measurement process, and as a result a lesser amount of classical information emerges. In short, this scenario offers a negative balance of information.

But, on the contrary, in the quantum reality layer two quantum systems interact, mutually observing each other, so to speak, through unitary operators; the system is closed and the exchange of information has a null balance. As a result of this interaction, the classical layer emerges. There then seems to be a positive balance of information, since classical information emerges from the process. But what really happens is that the emerging information, which constitutes the classical layer, is simply a simplified view of the quantum layer. For this reason we can say that the classical layer is an emergent reality.

So, it can be said that the quantum layer is formed by subsystems that interact with each other in a unitary way, constituting a closed system in which the information and, therefore, the complexity of the system are invariant. As a consequence of these interactions, the classical layer emerges as a reality irreducible to the quantum layer.

As for the contradiction produced by the increase in entropy, the reasons justifying this behavior seem more subtle. However, a first clue may lie in the fact that this increase occurs only in the classical layer. It must also be considered that, according to the algorithmic information theory, the complexity of a system, and therefore the amount of information that describes the system, is the set formed by the processed information and the information necessary to describe the processor itself. 

A physical scenario that illustrates this situation is the big bang [8], in which the entropy of the system at its beginning is considered to have been small or even null. This is because the microwave background radiation shows a fairly homogeneous pattern, so the amount of information needed for its description, and therefore its entropy, is small. But if we build a computational model of this scenario, it is evident that the complexity of the system has increased formidably, which is logically incompatible. This indicates that the model is incomplete not only in its information, but also in the description of the processes that govern it. But what physical evidence do we have that this is so?

Perhaps the clearest evidence of this is cosmic inflation [9]: the space-time metric changes with time, the spatial dimensions growing as time passes. To explain this behavior the existence of dark energy has been postulated as the engine of the process [10], which acknowledges in physical form the gaps revealed by mathematical logic. An aspect that usually receives little attention is the interaction between the vacuum and photons, which causes photons to lose energy as space-time expands. This loss implies a decrease of information that must necessarily be transferred to space-time.

This situation causes the vacuum, which in the context of classical physics is nothing more than an abstract metric, to become a fundamental physical entity of enormous complexity. Aspects that contribute to this conception of the vacuum are the entanglement of quantum particles [11], decoherence and the zero-point energy [12].

From all of the above, a hypothesis can be made as to the structure of reality from a computational point of view, as shown in the following figure. If we assume that the quantum layer is a unitary and closed structure, its complexity will remain constant. But its functionality and complexity remain hidden from observation, and it is only possible to model them through an inductive process based on experimentation, which has led to the definition of physical models that describe classical reality. As a consequence, the quantum layer presents a reality that constitutes the classical layer: a partial view of the underlying reality and, according to the theoretical and experimental results, an extremely reduced one, which makes classical reality irreducible.

The fundamental question raised by this model is whether the complexity of the classical layer is constant or can vary over time, since it is bound only by the laws of the underlying layer and is a partial and irreducible view of that functional layer. For the classical layer to be invariant, it would have to be closed, and therefore its computational description would have to be closed, which is not the case since it is subject to the quantum layer. Consequently, the complexity of the classical layer may change over time.

Consequently, the question arises as to whether there is any mechanism in the quantum layer that justifies the fluctuation of the complexity of the classical layer. Obviously one of the causes is quantum decoherence, which makes information observable in the classical layer. Similarly, cosmic inflation produces an increase in complexity, as space-time grows. On the contrary, attractive forces tend to reduce complexity, gravity being the most prominent factor.

From the observation of classical reality we can answer that at present its entropy tends to grow, as a consequence of decoherence and inflation being the predominant causes. However, one can imagine recession scenarios, such as a big crunch, in which entropy would decrease. Therefore, the entropy trend may be a consequence of the dynamic state of the system.

In summary, it can be said that the amount of information in the quantum layer remains constant, as a consequence of its unitary nature. On the contrary, the amount of information in the classical layer is determined by the amount of information that emerges from the quantum layer. The challenge, therefore, is to determine precisely the mechanisms that govern the dynamics of this process. Additionally, it is possible to analyze specific scenarios that generally correspond to the field of thermodynamics. Other interesting scenarios may be quantum in nature, such as the one proposed by Hugh Everett in his Many-Worlds Interpretation (MWI).

Bibliography

[1] P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002v1 [cs.IT], 2008.
[2] M. Sipser, Introduction to the Theory of Computation, Course Technology, 2012.
[3] C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, 1948.
[4] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2011.
[5] R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” IBM J. Res. Dev., vol. 5, pp. 183-191, 1961.
[6] J. Sakurai and J. Napolitano, Modern Quantum Mechanics, Cambridge University Press, 2017.
[7] G. Auletta, Foundations and Interpretation of Quantum Mechanics, World Scientific, 2001.
[8] A. H. Guth, The Inflationary Universe, Perseus, 1997.
[9] A. Liddle, An Introduction to Modern Cosmology, Wiley, 2003.
[10] P. J. E. Peebles and B. Ratra, “The cosmological constant and dark energy,” arXiv:astro-ph/0207347, 2003.
[11] A. Aspect, P. Grangier and G. Roger, “Experimental Tests of Realistic Local Theories via Bell’s Theorem,” Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.
[12] H. B. G. Casimir and D. Polder, “The Influence of Retardation on the London-van der Waals Forces,” Phys. Rev., vol. 73, no. 4, pp. 360-372, 1948.

A macroscopic view of the Schrödinger cat

From the analysis carried out in the previous post, it can be concluded that, in general, it is not possible to identify the macroscopic states of a complex system with its quantum states. Thus, the macroscopic states corresponding to the dead cat (DC) and the living cat (AC) cannot be considered quantum states, since according to quantum theory the system could then be expressed as a superposition of these states. Consequently, as has been justified, for macroscopic systems it is not possible to define quantum states such as |DC⟩ and |AC⟩. On the other hand, the states (DC) and (AC) are an observable reality, indicating that the system presents two realities: a quantum reality and an emergent reality that can be defined as classical reality.

Quantum reality will be defined by its wave function, formed by the superposition of the quantum subsystems that make up the system, which will evolve according to the interaction among all the quantum elements of the system and the environment. For simplicity, if the CAT system is considered isolated from the environment, the succession of its quantum states can be expressed as:

            |CAT[n]⟩ = |SC1[n]⟩ ⊗ |SC2[n]⟩ ⊗ … ⊗ |SCi[n]⟩ ⊗ … ⊗ |SCk[n][n]⟩.

This expression takes into account that the number k of non-entangled quantum subsystems also varies with time, so it is written as a function of the sequence index n, time being considered a discrete variable.
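The exponential growth implicit in this tensor-product composition can be sketched as follows (a toy model with identical, non-entangled qubit subsystems; the amplitudes are arbitrary). Ten subsystems already require 2¹⁰ amplitudes, which is why a quantum description of a macroscopic object is intractable in practice:

```python
def tensor(u, v):
    """Kronecker (tensor) product of two state vectors given as lists."""
    return [a * b for a in u for b in v]

qubit = [0.6, 0.8]      # a normalized single-qubit state (illustrative)
state = [1.0]
k = 10                  # number of non-entangled subsystems composed
for _ in range(k):
    state = tensor(state, qubit)    # |SC1> (x) |SC2> (x) ... (x) |SCk>

print(len(state))                                # 1024 = 2**10 amplitudes
print(round(sum(a * a for a in state), 10))      # 1.0: still normalized
```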

The observable classical reality can be described by the state of the system which, if for the object “cat” it is defined as (CAT[n]), from the previous reasoning satisfies (CAT[n]) ≢ |CAT[n]⟩. In other words, the quantum and classical states of a complex object are not equivalent.

The question that remains to be justified is the irreducibility of the observable classical state (CAT) from the underlying quantum reality, represented by the quantum state |CAT⟩. This can be done by considering that the functional relationship between the states |CAT⟩ and (CAT) is extraordinarily complex, being subject to the mathematical concepts on which complex systems are based, such as:

  • The complexity of the space of quantum states (Hilbert space).
  • The random behavior of observable information emerging from quantum reality.
  • The enormous number of quantum entities involved in a macroscopic system.
  • The non-linearity of the laws of classical physics.

Based on Kolmogorov complexity [1], it is possible to prove that the behavior of systems with these characteristics does not admit, in most cases, an analytical solution that determines the evolution of the system from its initial state. This also implies that, in practice, the evolution of a complex object can only be represented by the object itself, both at the quantum and at the classical level.

According to algorithmic information theory [1], this process is equivalent to a mathematical object composed of an ordered set of bits processed according to axiomatic rules, such that the information of the object is given by its Kolmogorov complexity, which remains constant over time as long as the process is an isolated system. It should be pointed out that the Kolmogorov complexity makes it possible to determine the information contained in an object without previously having an alphabet for determining its entropy, as is the case in information theory [2], although both concepts coincide in the limit.
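A small compression experiment hints at this relationship (Kolmogorov complexity is uncomputable, so compressed size is only a crude, computable upper bound; the strings below are our own examples): an algorithmically regular string compresses far below its length, while a pseudorandom one barely compresses at all.

```python
import random
import zlib

regular = b"01" * 5000                  # 10000 bytes, algorithmically simple
random.seed(0)                          # fixed seed for reproducibility
noisy = bytes(random.getrandbits(8) for _ in range(10000))

print(len(zlib.compress(regular)))      # a few dozen bytes
print(len(zlib.compress(noisy)))        # close to the original 10000 bytes
```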

From this point of view, two fundamental questions arise. The first is the evolution of the entropy of the system and the second is the apparent loss of information in the observation process, through which classical reality emerges from quantum reality. This opens a possible line of analysis that will be addressed later.

But returning to the analysis of the relationship between classical and quantum states, it is possible to gain an intuitive view of how the state (CAT) ends up disconnected from the state |CAT⟩ by analyzing the system qualitatively.

First, it should be noted that virtually 100% of the quantum information contained in the state |CAT⟩ remains hidden within the elementary particles that make up the system. This is a consequence of the fact that the physico-chemical structure [3] of molecules is determined exclusively by the electrons that support their covalent bonds. Next, it must be considered that molecular interaction, on which molecular biology is based, is performed by van der Waals forces and hydrogen bonds, creating a new level of functional disconnection from the underlying layer.

Supported by this functional level, a new structure appears, formed by cellular biology [4], from which living organisms emerge, from unicellular beings to complex beings formed by multicellular organs. It is in this layer that the concept of living being emerges, establishing a new border between the strictly physical and the concept of perception. At this level nervous tissue [5] emerges, allowing complex interaction between individuals and sustaining new structures and concepts, such as consciousness, culture and social organization, which are not reserved exclusively to human beings, although it is in humans that this functionality is most complex.

But to the complexity of the functional layers must be added the non-linearity of the laws to which they are subject, which are necessary and sufficient conditions for deterministic chaos [6] and which, as previously justified, is grounded in algorithmic information theory [1]. This means that any variation in the initial conditions will produce a different dynamic, so that any emulation will end up diverging from the original; this behavior is a justification of free will. In this sense, Heisenberg’s uncertainty principle [7] prevents us from knowing exactly the initial conditions of the classical system in any of the functional layers described above. Consequently, all of them will have an irreducible nature and an unpredictable dynamic, determined exclusively by the system itself.
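This divergence can be illustrated with a classic toy model (the logistic map, our choice of example): two trajectories whose initial conditions differ by one part in 10¹⁰ become completely decorrelated after a few dozen iterations.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; r = 4 gives fully chaotic dynamics."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10    # two initial conditions differing by 1e-10
max_gap = 0.0
for _ in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)   # of order 1: the trajectories no longer resemble each other
```

Since the perturbation roughly doubles per step, the 1e-10 error saturates to order 1 after about 35 iterations, so any emulation with imperfectly known initial conditions decouples from the original system.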

At this point, and in view of this complex functional structure, we must ask what the state (CAT) refers to, since so far the existence of a classical state has been implicitly assumed. The complex functional structure of the object “cat” allows description at different levels. Thus, the cat object can be described in different ways:

  • As atoms and molecules subject to the laws of physical chemistry.
  • As molecules that interact according to molecular biology.
  • As complex sets of molecules that give rise to cell biology.
  • As sets of cells to form organs and living organisms.
  • As structures of information processing, that give rise to the mechanisms of perception and interaction with the environment that allow the development of individual and social behavior.

As a result, each of these functional layers can be expressed by means of a certain state. Strictly speaking, therefore, the definition of a unique macroscopic state (CAT) is not correct. Each of these states will describe the object according to different functional rules, so it is worth asking what relationship exists between these descriptions and what their complexity is. By arguments analogous to those used to show that the states |CAT⟩ and (CAT) are not equivalent and are uncorrelated with each other, the states that describe the “cat” object at different functional levels will not be equivalent and may to some extent be disconnected from each other.

This behavior is a proof of how reality is structured in irreducible functional layers, in such a way that each one of the layers can be modeled independently and irreducibly, by means of an ordered set of bits processed according to axiomatic rules.

References

[1] P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002v1 [cs.IT], 2008.
[2] C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 1948.
[3] P. Atkins and J. de Paula, Physical Chemistry, Oxford University Press, 2006.
[4] A. Bray, J. Hopkin, R. Lewis and W. Roberts, Essential Cell Biology, Garland Science, 2014.
[5] D. Purves and G. J. Augustine, Neuroscience, Oxford University Press, 2018.
[6] J. Gleick, Chaos: Making a New Science, Penguin Books, 1988.
[7] W. Heisenberg, “The Actual Content of Quantum Theoretical Kinematics and Mechanics,” Zeitschrift für Physik. Translation: NASA TM-77379, vol. 43, no. 3-4, pp. 172-198, 1927.

Reality as an irreducible layered structure

Note: This post is the first in a series in which macroscopic objects will be analyzed from a quantum and classical point of view, as well as the nature of the observation. Finally, all of them will be integrated into a single article.

Introduction

Quantum theory establishes the fundamentals of the behavior of particles and their interaction with each other. In general, these fundamentals apply to microscopic systems formed by a very limited number of particles. However, nothing indicates that quantum theory cannot be applied to macroscopic objects, since the emergent properties of such objects must be based on the underlying quantum reality. Obviously, there is a practical limitation imposed by complexity, which grows exponentially with the number of elementary particles.

The initial reference to this approach was made by Schrödinger [1], who indicated that the quantum superposition of states involved no contradiction at the macroscopic level. To do so, he used what is known as the Schrödinger’s cat paradox, in which the cat could be in a superposition of states, one in which the cat was alive and another in which the cat was dead. Schrödinger’s original motivation was to open a discussion of the EPR paradox [2], which argued that quantum theory was incomplete. This question was finally settled by Bell’s theorem [3] and its experimental verification by Aspect [4], making it clear that the entanglement of quantum particles is a reality, one on which quantum computation is based [5]. A summary of the aspects related to the realization of a quantum system that emulates Schrödinger’s cat has been given by Auletta [6], although these are restricted to non-macroscopic quantum systems.

But the question remains whether quantum theory can be used to describe macroscopic objects and whether the concept of quantum entanglement also applies to these objects. Contrary to Schrödinger’s position, Wigner argued, through his friend paradox, that quantum mechanics could not have unlimited validity [7]. Recently, Frauchiger and Renner [8] have proposed a thought experiment (Gedankenexperiment) showing that quantum mechanics is not consistent when applied to complex objects.

The Schrödinger’s cat paradigm will be used to analyze these results from two points of view, with no loss of generality: one as a quantum object and the other as a macroscopic object (in a later post). This will allow their consistency and functional relationship to be determined, leading to the establishment of an irreducible functional structure. As a consequence, it will also be necessary to analyze the nature of the observer within this functional structure (also in a later post).

Schrödinger’s cat as a quantum reality

In the Schrödinger’s cat experiment there are several entities [1]: the radioactive particle, the radiation monitor, the poison flask and the cat. For simplicity, the experiment can be reduced to two quantum variables: the cat, which we will identify as CAT, and the system formed by the radioactive particle, the radiation monitor and the poison flask, which we will call the poison system PS.


Schrödinger Cat. (Source: Doug Hatfield https://commons.wikimedia.org/wiki/File:Schrodingers_cat.svg)

These quantum variables can be expressed as [9]: 

            |CAT⟩ = α1|DC⟩ + β1|LC⟩. Quantum state of the cat: dead cat |DC⟩, live cat |LC⟩.

            |PS⟩ = α2|PD⟩ + β2|PA⟩. Quantum state of the poison system: poison deactivated |PD⟩, poison activated |PA⟩.

The quantum state of the Schrödinger cat experiment SCE as a whole can be expressed as: 
               |SCE⟩ = |CAT⟩ ⊗ |PS⟩ = α1α2|DC⟩|PD⟩ + α1β2|DC⟩|PA⟩ + β1α2|LC⟩|PD⟩ + β1β2|LC⟩|PA⟩.
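As a quick numerical check, the four-term expansion above can be reproduced with NumPy’s Kronecker product. This is only an illustrative sketch: the basis encoding (|DC⟩, |PD⟩ as the first basis vector) and the equal amplitudes are assumptions, not part of the original experiment.

```python
import numpy as np

# Assumed basis encoding: |DC> = [1,0], |LC> = [0,1]; |PD> = [1,0], |PA> = [0,1]
DC, LC = np.array([1.0, 0.0]), np.array([0.0, 1.0])
PD, PA = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Independent (non-entangled) states |CAT> and |PS>, equal amplitudes assumed
a1, b1 = 1 / np.sqrt(2), 1 / np.sqrt(2)
a2, b2 = 1 / np.sqrt(2), 1 / np.sqrt(2)
CAT = a1 * DC + b1 * LC
PS = a2 * PD + b2 * PA

# Product state |SCE> = |CAT> (x) |PS>: all four joint terms appear
SCE = np.kron(CAT, PS)
print(SCE)  # amplitudes for |DC,PD>, |DC,PA>, |LC,PD>, |LC,PA>
```

The non-zero amplitude on all four components shows why the plain product state admits the classically impossible outcomes |DC⟩|PD⟩ and |LC⟩|PA⟩.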

Since for a classical observer the final outcome of the experiment requires that the states |DC⟩|PD⟩ and |LC⟩|PA⟩ be incompatible with observation, the experiment must be prepared in such a way that the quantum states |CAT⟩ and |PS⟩ are entangled [10] [11], so that the wave function of the experiment must be:

               |SCE⟩ = α|DC⟩|PA⟩ + β|LC⟩|PD⟩. 

As a consequence, the observation of the experiment [12] will result in a state:

            |SCE⟩ = |DC⟩|PA⟩, with probability |α|², (poison activated, dead cat).

or:

            |SCE⟩ = |LC⟩|PD⟩, with probability |β|², (poison deactivated, live cat).
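The Born-rule statistics of observing the entangled state can be sketched by sampling measurement outcomes numerically. The amplitudes chosen here (|α|² = 0.3, |β|² = 0.7) are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)  # assumed amplitudes, |a|^2 + |b|^2 = 1

# Entangled state a|DC,PA> + b|LC,PD> in the basis (DC.PD, DC.PA, LC.PD, LC.PA)
sce = np.array([0.0, alpha, beta, 0.0])
probs = np.abs(sce) ** 2  # Born rule: outcome probabilities

outcomes = ["DC,PD", "DC,PA", "LC,PD", "LC,PA"]
samples = rng.choice(outcomes, size=10_000, p=probs)
counts = {o: int((samples == o).sum()) for o in outcomes}
print(counts)  # only the correlated outcomes DC,PA and LC,PD ever occur
```

The classically impossible outcomes (dead cat with poison deactivated, live cat with poison activated) never appear, which is exactly what the a priori entanglement was introduced to guarantee.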

Although the formulation of the experiment is correct from the formal point of view of quantum theory, for a classical observer it raises several objections. One is that the experiment requires establishing “a priori” the requirement that the PS and CAT systems be entangled. This is contradictory, since from the point of view of the preparation of the quantum experiment there is no such restriction, so outcomes with the quantum states |DC⟩|PD⟩ or |LC⟩|PA⟩ could exist, something totally impossible for a classical observer, assuming in any case that the poison is effective, which is taken for granted in the experiment. The SCE experiment is therefore inconsistent, and it is necessary to analyze the root of the incongruence between the SCE quantum system and the result of the observation.

Another objection, which may seem trivial, is that for the SCE experiment to collapse into one of its states the observer OBS must be entangled with the experiment, since the experiment must interact with the observer. Otherwise, the operation performed by the observer would have no consequence for the experiment. This aspect will therefore require a more detailed analysis.

Returning to the first objection, from the perspective of quantum theory it may seem possible to prepare the PS and CAT systems in an entangled superposition of states. However, it should be noted that both systems are composed of a huge number of non-entangled quantum subsystems Si subject to continuous decoherence [13] [14], although each subsystem Si will internally have an entangled structure. Thus, the CAT and PS systems can be expressed as:

            |CAT⟩ = |SC1⟩ ⊗ |SC2⟩ ⊗…⊗ |SCi⟩ ⊗…⊗ |SCk⟩,

            |PS⟩= |SP1⟩⊗|SP2⟩⊗…⊗|SPi⟩⊗…⊗|SPl⟩, 

in such a way that the observation of a given subsystem causes its state to collapse without influencing the rest of the subsystems, which will follow an independent quantum dynamics. This makes it unfeasible for the states |LC⟩ and |DC⟩ to be simultaneous, and as a consequence the CAT system cannot be in a superposition of these states. Analogous reasoning applies to the PS system, although it may seem obvious that it is functionally much simpler.
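The claim that measuring one non-entangled subsystem leaves the others untouched can be verified in a minimal two-subsystem sketch (the particular amplitudes are assumptions): projecting the first factor of a product state onto an outcome and renormalizing reproduces exactly the original state of the second factor.

```python
import numpy as np

# Two non-entangled qubits: |psi> = |s1> (x) |s2>
s1 = np.array([np.sqrt(0.2), np.sqrt(0.8)])
s2 = np.array([np.sqrt(0.6), np.sqrt(0.4)])
psi = np.kron(s1, s2)

# Observe subsystem 1: project it onto |0> and renormalize ("collapse")
P0 = np.kron(np.outer([1, 0], [1, 0]), np.eye(2))
post = P0 @ psi
post = post / np.linalg.norm(post)

# Subsystem 2 is unchanged: the post-measurement state is |0> (x) |s2>
print(np.allclose(post, np.kron([1, 0], s2)))
```

For an entangled state this would fail: the collapse of one factor would change the conditional state of the other, which is precisely what the CAT and PS subsystems Si, evolving independently, do not do.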

In short, from a theoretical point of view it is possible to have a quantum system equivalent to the SCE, for which all the subsystems must be fully entangled with each other, and in addition the system requires an “a priori” preparation of its state. However, the emergent reality differs radically from this scenario, so the experiment seems unfeasible in practice. But the most striking fact is that, if the SCE experiment is generalized, the observable reality would be radically different from the observed reality.

To better understand the consequences of the quantum state of the SCE system having to be prepared “a priori”, imagine that the supplier of the poison has replaced its contents with a harmless liquid. As a result, the experiment could still kill the cat, without cause.

From these conclusions the question can be raised as to whether quantum theory can explain observable reality at the macroscopic level in a general and consistent way. But perhaps the question is also whether the assumptions under which the SCE experiment has been conducted are correct. For example: Is it correct to use the concepts of live cat and dead cat in the domain of quantum physics? This in turn raises other kinds of questions, such as: Is it correct in general to establish a strong link between observable reality and the underlying quantum reality?

The conclusion that can be drawn from the contradictions of the SCE experiment is that the scenario of a complex quantum system cannot be treated in the same terms as a simple system. In terms of quantum computation these correspond, respectively, to systems made up of an enormous number and a limited number of qubits [5]. As a consequence, classical reality will be an irreducible fact which, although based on quantum reality, ends up being disconnected from it. This leads to defining reality in two independent and irreducible functional layers: a quantum reality layer and a classical reality layer. This would justify the criterion established by the Copenhagen interpretation [15] and its statistical nature as a means of functionally disconnecting both realities. Thus, quantum theory would be nothing more than a description of the information that can emerge from an underlying reality, but not a description of that reality. At this point, it is important to emphasize that statistical behavior is the means by which the functional correlation between processes can be reduced or eliminated [16], and that it would be the cause of irreducibility.

References

[1] E. Schrödinger, “Die gegenwärtige Situation in der Quantenmechanik,” Naturwissenschaften, vol. 23, pp. 844-849, 1935.
[2] A. Einstein, B. Podolsky and N. Rosen, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?,” Physical Review, vol. 47, pp. 777-780, 1935.
[3] J. S. Bell, “On the Einstein Podolsky Rosen Paradox,” Physics, vol. 1, no. 3, pp. 195-200, 1964.
[4] A. Aspect, P. Grangier and G. Roger, “Experimental Tests of Realistic Local Theories via Bell’s Theorem,” Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.
[5] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2011.
[6] G. Auletta, Foundations and Interpretation of Quantum Mechanics, World Scientific, 2001.
[7] E. P. Wigner, “Remarks on the mind–body question,” in Symmetries and Reflections, Indiana University Press, 1967, pp. 171-184.
[8] D. Frauchiger and R. Renner, “Quantum Theory Cannot Consistently Describe the Use of Itself,” Nature Commun., vol. 9, no. 3711, 2018.
[9] P. Dirac, The Principles of Quantum Mechanics, Oxford University Press, 1958.
[10] E. Schrödinger, “Discussion of Probability Relations between Separated Systems,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 31, no. 4, pp. 555-563, 1935.
[11] E. Schrödinger, “Probability Relations between Separated Systems,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 32, no. 3, pp. 446-452, 1936.
[12] M. Born, “On the quantum mechanics of collision processes,” Zeit. Phys. (D. H. Delphenich translation), vol. 37, pp. 863-867, 1926.
[13] H. D. Zeh, “On the Interpretation of Measurement in Quantum Theory,” Found. Phys., vol. 1, no. 1, pp. 69-76, 1970.
[14] W. H. Zurek, “Decoherence, einselection, and the quantum origins of the classical,” Rev. Mod. Phys., vol. 75, no. 3, pp. 715-775, 2003.
[15] W. Heisenberg, Physics and Philosophy: The Revolution in Modern Science, Harper, 1958.
[16] E. W. Weisstein, “Covariance,” MathWorld. [Online]. Available: http://mathworld.wolfram.com/Covariance.html.

Information and knowledge

What is information? 

If we stick to its definition, which can be found in dictionaries, we can see that it always refers to a set of data, often adding that these data are sorted and processed. But we are going to see that these definitions are imprecise, and even erroneous when they assimilate information to the concept of knowledge.

One of the things that information theory has taught us is that any object (news item, profile, image, etc.) can be expressed precisely by a set of bits. Therefore, the formal definition of information is the ordered set of symbols that represent the object, which in their basic form constitute an ordered set of bits. However, information theory itself surprisingly reveals that information has no meaning, which is technically known as “information without meaning”.

This seems totally contradictory, especially if we take into account the conventional idea of what is considered information. However, it is easy to understand. Let us imagine that we find a book containing written symbols that are totally unknown to us. We will immediately assume that it is a text written in a language unknown to us, since that is what book-shaped objects usually contain in our culture. Thus, we begin to investigate and conclude that it is an unknown language, with no reference or Rosetta stone linking it to any known language. Therefore, we have information but we do not know its message and, as a result, the knowledge contained in the text. We can even classify the symbols that appear in the text and assign them a binary code, as we do in digitization processes, converting the text into an ordered set of bits.

However, to know the content of the message we must process the information in a way that includes the keys allowing the content of the message to be extracted. It is exactly the same as if the message were encrypted: the message remains hidden if the decryption key is not available, as the one-time pad encryption technique shows.
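The one-time pad mentioned above makes the point concretely: the same bits are “information without meaning” unless the processing key is available. A minimal sketch (the message text is an arbitrary assumption):

```python
import secrets

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # truly random, used only once

# Encryption and decryption are both a bitwise XOR with the key
ciphertext = bytes(m ^ k for m, k in zip(message, key))
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))

print(decrypted)  # the original message, recovered only because we hold the key
```

Without the key, the ciphertext is statistically indistinguishable from random bits: the information is all there, but no knowledge can be extracted from it.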

Ray Solomonoff, co-founder of Algorithmic Information Theory together with Andrey Kolmogorov. 

What is knowledge?

This clearly shows the difference between information and knowledge: information is the set of data (bits) that describe an object, while knowledge is the result of a process applied to this information and materialized in reality. In fact, reality is always subject to this scheme.

For example, suppose we are told a certain story. From the sound pressure applied to our eardrums we will end up extracting the content of the story, and we may also experience subjective sensations, such as pleasure or sadness. There is no doubt that the original stimulus can be represented as a set of bits, considering that audio can be digital content, e.g. MP3.

But for knowledge to emerge, information needs to be processed. In fact, in the previous case it is necessary to involve several different processes, among which we must highlight:

  • Biological processes responsible for the transduction of information into nerve stimuli.
  • Extraction processes of linguistic information, established by the rules of language in our brain by learning.
  • Extraction processes of subjective information, established by cultural rules in our brain by learning.

In short, knowledge is established by means of information processing. And here debate may arise as a consequence of the diversity of processes, of their structuring, but above all of the nature of the ultimate source from which they emerge. Countless examples can be given. But, since doubts may arise as to whether this is the way reality emerges, we can try to look for a single counterexample!

A fundamental question is: Can we measure knowledge? The answer is yes, and it is provided by algorithmic information theory (AIT) which, based on information theory and computability theory, allows us to establish the complexity of an object by means of the Kolmogorov complexity K(x), which is defined as follows:

For a finite object x, K(x) is defined as the length of the shortest effective binary description of x.

Without going into complex theoretical details, it is important to mention that K(x) is an intrinsic property of the object and not a property of the evaluation process. But don’t panic! In practice, we are already familiar with this idea.

Let’s imagine audio, video, or general bitstream content. We know that such content can be compressed, which significantly reduces its size. This means that the complexity of these objects is not determined by the number of bits of the original sequence but by the result of the compression, since through an inverse decompression process we can recover the original content. But be careful! The effective description of the object must include both the result of the compression process and the description of the decompression process needed to retrieve the message.
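This idea can be illustrated with an ordinary compressor: the compressed size (plus the fixed size of the decompressor) is a practical upper bound on the Kolmogorov complexity of the content. A sketch using Python’s zlib, with two assumed test inputs of equal raw length:

```python
import os
import zlib

regular = b"ab" * 5000          # 10,000 bytes, highly redundant
random_data = os.urandom(10_000)  # 10,000 bytes of noise, no structure

# The regular sequence compresses to a tiny description;
# the random one barely compresses at all.
print(len(zlib.compress(regular)))
print(len(zlib.compress(random_data)))

# Decompression recovers the original content exactly, so the
# compressed form is a complete effective description.
assert zlib.decompress(zlib.compress(regular)) == regular
```

Note that this only gives an upper bound: K(x) itself is not computable, so no compressor can certify that its output is the shortest possible description.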

Complexity of digital content, equivalent to a compression process

A similar scenario is the modeling of reality, where physical processes stand out. Thus, a model is a compact definition of a reality. For example, Newton’s universal gravitation model is the most compact definition of the behavior of a gravitational system in a non-relativistic context. In this way, the model, together with the rules of calculus and the information that defines the physical scenario, is the most compact description of the system and constitutes what we call an algorithm. It is interesting to note that this is the formal definition of an algorithm and that, until these mathematical concepts were developed in the first half of the 20th century by Kleene, Church and Turing, the concept was not fully established.
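The “model + rules of calculus + scenario data = algorithm” scheme can be sketched concretely: Newton’s law is a few lines, the integration rule a few more, and the scenario is just the initial data. This is only an illustrative sketch using simple Euler integration and standard Sun–Earth values:

```python
import numpy as np

G = 6.674e-11  # gravitational constant (SI units)

def accel(r, M):
    """Newton's universal gravitation: acceleration at position r due to mass M at the origin."""
    d = np.linalg.norm(r)
    return -G * M * r / d**3

# Scenario data: Earth on a near-circular orbit around the Sun
M_sun = 1.989e30
r = np.array([1.496e11, 0.0])   # 1 AU from the Sun, in meters
v = np.array([0.0, 29_780.0])   # orbital speed, m/s

# Rules of calculus: simple Euler steps, one hour each, for one day
dt = 3600.0
for _ in range(24):
    v = v + accel(r, M_sun) * dt
    r = r + v * dt

print(np.linalg.norm(r))  # distance from the Sun stays close to 1 AU
```

The compactness is the point: these few lines, plus the axiomatic rules of the machine executing them, describe the behavior of the whole non-relativistic two-body system.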

Alan Turing, one of the fathers of computing

It must be considered that the physical machine supporting the process is also part of the description of the object, providing the basic functions. These are axiomatically defined and, in the case of the Turing machine, correspond to an extremely small number of axiomatic rules.
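How small that axiomatic core is can be seen by sketching a Turing machine directly: the whole machine is a transition table (state, symbol) → (new state, symbol to write, head move). The particular machine below, which inverts a binary string and halts, is an invented toy example:

```python
def run_tm(tape, rules, state="start", blank="_"):
    """Run a Turing machine: rules maps (state, symbol) -> (state, write, move)."""
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# A three-rule machine that flips every bit, then halts on the blank symbol
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm("1011", rules))  # -> 0100
```

Everything beyond the transition table (the tape, the head, the halting convention) is the axiomatic machinery; the “program” is nothing but the three rules.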

Structure of the models, equivalent to a decompression process

In summary, we can say that knowledge is the result of information processing, and therefore information processing is the source of reality. But this raises a question: since there are non-computable problems, to what depth is it possible to explore reality?