Author Archives: Jose Pozas

On the complexity of PI (π)

Introduction

There is no doubt that since the origins of geometry humans have been seduced by the number π. One of its fundamental characteristics is that it determines the ratio of the length of a circumference to its diameter. But it does not stop there, since this constant appears systematically in mathematical and scientific models that describe the behavior of nature. In fact, it is so popular that it is perhaps the only number with its own commemorative day. The great fascination around π has raised speculation about the information encoded in its figures and, above all, has unleashed an endless race for its determination, with tens of trillions of digits calculated to date.

Formally, real numbers are classified according to their arithmetic properties. Cantor showed that the rational numbers form a countable infinity while the real numbers form an uncountable one, so that almost all real numbers are irrational. Rational numbers are those that can be expressed as a quotient of two integers, while irrational numbers cannot. Irrational numbers are in turn classified as algebraic and transcendental. The former correspond to the non-rational roots of algebraic equations, that is, roots of polynomials with integer coefficients. Transcendental numbers, on the contrary, are not the root of any such polynomial; they arise as solutions of transcendental equations, involving non-polynomial expressions such as exponential and trigonometric functions.

Georg Cantor. Co-creator of Set Theory

Without going into greater detail, what should catch our attention is that this classification of numbers is based on positional rules, in which each digit has a hierarchical value. But what happens if numbers are treated as ordered sequences of bits, in which position is not a value attribute? In this case, Algorithmic Information Theory (AIT) makes it possible to establish a measure of the information contained in a finite sequence of bits, and in general in any mathematical object, a measure therefore defined in the domain of natural numbers.

What does the AIT tell us?

This measure is based on the concept of Kolmogorov complexity (KC). The Kolmogorov complexity K(x) of a finite object x is defined as the length of the shortest effective binary description of x. The term “effective description” connects Kolmogorov complexity with the Theory of Computation, so that K(x) corresponds to the length of the shortest program that prints x and enters the halt state. To be precise, the formal definition of K(x) is:

K(x) = minp,i {K(i) + l(p) : Ti(p) = x} + O(1)

Where Ti(p) is the output of the Turing machine (TM) i when it executes program p, l(p) is the length of p, and K(i) is the complexity of Ti. Object p is therefore a compressed representation of object x relative to Ti, since x can be retrieved from p by the decoding process defined by Ti; this part is defined as meaningful information. The rest is considered meaningless, redundant, accidental, or noise (meaningless information). The term O(1) indicates that K(x) is defined up to an additive constant: although K(x) is in general non-computable, it is by definition machine independent up to this constant, so its value has the same order of magnitude in every implementation. In this sense, Gödel’s incompleteness theorems, the Turing machine and Kolmogorov complexity lead to the same conclusion about undecidability, revealing the existence of non-computable functions.

KC shows that information can be compressed, but it does not establish any general procedure for doing so, which is only possible for certain sequences. Indeed, from the definition of KC it can be shown that compressibility is an intrinsic property of bitstreams, in such a way that there are sequences that cannot be compressed. The number of n-bit sequences that can be encoded by fewer than m bits is less than 2^m, so the fraction of n-bit sequences with K(x) < n−k is less than 2^−k. If all possible n-bit sequences are considered, each with a probability of occurrence of 2^−n, the probability that a sequence has complexity K(x) ≥ n−k is equal to or greater than 1−2^−k. In short, most bit sequences cannot be compressed beyond their own size, showing high complexity since they present no pattern at all. Applied to the field of physics, this behavior justifies the ergodic hypothesis. As a consequence, most problems cannot be solved analytically, since they can only be represented by themselves and therefore cannot be described in a compact way by means of formal rules.
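This counting argument can be illustrated empirically with a general-purpose compressor. A compressor gives only a crude upper bound on K(x), not its true value; zlib and the byte lengths below are illustrative choices:

```python
import os
import zlib

n = 100_000  # sequence length in bytes

random_data = os.urandom(n)   # patternless: incompressible with high probability
patterned = b"01" * (n // 2)  # highly regular: a short description suffices

# compressed size as a fraction of the original size
print(len(zlib.compress(random_data, 9)) / n)  # close to 1.0: no compression gain
print(len(zlib.compress(patterned, 9)) / n)    # tiny fraction: huge compression gain
```

The random sequence stays essentially the same size, as the counting argument predicts for the overwhelming majority of strings, while the patterned one collapses to a short description.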

It might be thought that the complexity of a sequence can be reduced at will, by applying a coding criterion that maps the sequence into a less complex one. In general, this only increases the complexity, since in the calculation of K(x) we would have to add the complexity of the coding algorithm, which grows as n². Finally, note that KC is applicable to any mathematical object (integers, sets, functions), and it can be shown that, as the complexity of the mathematical object grows, K(x) tends to the entropy H defined in the context of Information Theory. The advantage of AIT is that it performs a semantic treatment of information, being an axiomatic process, so it does not require an a priori alphabet to measure information.

What can be said about the complexity of π?

According to this definition, KC cannot be applied directly to irrational numbers, since these have an infinite number of digits and the Turing machine would never reach the halt state. In other words, and to be formally correct, the Turing machine is defined only on the natural numbers (whose cardinality, it should be noted, is the same as that of the rationals), while the irrational numbers have a strictly greater cardinality. This means that the KC, and the equivalent entropy H, of irrational numbers is undecidable and therefore non-computable.

To overcome this difficulty we can consider an irrational number X as the concatenation of a rational number x and a residue δx, so that in numerical terms X = x + δx, while in terms of information X = {x, δx}. The residue δx → 0 is itself irrational, and therefore a sequence of bits whose KC is undecidable and hence non-computable. In this way, it can be expressed:

K(X) = K(x)+K(δx)

The complexity of X can thus be assimilated to the complexity of x. A priori this approach may seem surprising, even inadmissible, since the term K(δx) is neglected despite having an undecidable complexity. But this is similar to the approximation made in calculating the entropy of a continuous variable, or to the renormalization process used in physics, in order to circumvent the complexity of underlying processes that remain hidden from observable reality.

Consequently, the sequence p, which the Turing machine i executes to obtain x, will be composed of the concatenation of:

  • The sequence of bits that encode the rules of calculus in the Turing machine i.
  • The bitstream that encodes the compressed expression of x, for example a given numerical series of x.
  • The length of the sequence x that is to be decoded and that determines when the Turing machine should reach the halt state, for example a googol (10^100).

In short, it can be concluded that the complexity K(x) of known irrational numbers, e.g. √2, π, e, …, is bounded. For this reason, the challenge should be to obtain the optimal expression of K(x), not ever more figures of these numbers, since, according to what has been said, their uncompressed expression, i.e. the development of their figures, has a high degree of redundancy (meaningless information).
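As an illustration, the digits of π can be generated by a very short program, so the complexity of an n-digit prefix is bounded by the program length plus the bits needed to encode n. The sketch below uses Gibbons' unbounded spigot algorithm, one possible choice among the many known series for π:

```python
def pi_digits(n):
    """Return the first n decimal digits of pi (Gibbons' spigot algorithm)."""
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)  # the next digit is now settled
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x,
                                k + 1, (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

print(pi_digits(10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

A few hundred bytes of code, plus the encoding of n, thus describe any prefix of π, which is exactly why K(x) remains small however many digits are printed.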

What in theory is a surprising and questionable step is in practice irrefutable, since the complexity of δx will always remain hidden: it is undecidable and therefore non-computable.

Another important conclusion is that this provides a criterion for classifying irrational numbers into two groups: representable and non-representable. The former correspond to irrational numbers that can be represented by mathematical expressions, which constitute the compressed expression of those numbers. Non-representable numbers, on the other hand, are irrational numbers that can only be expressed by themselves and are therefore undecidable. It follows that the cardinality of the representable irrational numbers is that of the natural numbers. It should be noted that this classification criterion is applicable to any mathematical object.

On the other hand, it is evident that mathematics, and calculus in particular, de facto accepts the criteria established to define the complexity K(x). This may go unnoticed because, traditionally in this context, numbers are analyzed from the perspective of positional coding, so that the non-representable residue is filtered out through the concept of limit, δx → 0. However, when evaluating the informational complexity of a mathematical object, it may be necessary to apply a renormalization procedure.

A macroscopic view of the Schrödinger cat

From the analysis carried out in the previous post, it can be concluded that, in general, it is not possible to identify the macroscopic states of a complex system with its quantum states. Thus, the macroscopic states corresponding to the dead cat (DC) or to the living cat (AC) cannot be considered quantum states, since according to quantum theory the system could then be expressed as a superposition of these states. Consequently, as has been justified, for macroscopic systems it is not possible to define quantum states such as |DC⟩ and |AC⟩. On the other hand, the states (DC) and (AC) are an observable reality, indicating that the system presents two realities: a quantum reality and an emerging reality that can be defined as classical reality.

Quantum reality will be defined by its wave function, formed by the superposition of the quantum subsystems that make up the system and which will evolve according to the existing interaction between all the quantum elements that make up the system and the environment. For simplicity, if the CAT system is considered isolated from the environment, the succession of its quantum state can be expressed as:

            |CAT[n]⟩ = |SC1[n]⟩ ⊗|SC2[n]⟩ ⊗…⊗|SCi[n]⟩ ⊗…⊗|SCk[n][n]⟩.

This expression takes into account that the number k of non-entangled quantum subsystems also varies with time, so it is written as a function of the step n, time being treated as a discrete variable.

The observable classical reality can be described by the state of the system that, if for the object “cat” is defined as (CAT[n]), from the previous reasoning it is concluded that (CAT[n]) ≢ |CAT[n]⟩. In other words, the quantum and classical states of a complex object are not equivalent. 

The question that remains to be justified is the irreducibility of the observable classical state (CAT) from the underlying quantum reality, represented by the quantum state |CAT⟩. This can be argued by noting that the functional relationship between the states |CAT⟩ and (CAT) is extraordinarily complex, being subject to the mathematical ingredients on which complex systems are based, namely:

  • The complexity of the space of quantum states (Hilbert space).
  • The random behavior of observable information emerging from quantum reality.
  • The enormous number of quantum entities involved in a macroscopic system.
  • The non-linearity of the laws of classical physics.

Based on Kolmogorov complexity [1], it is possible to prove that the behavior of systems with these characteristics does not support, in most cases, an analytical solution that determines the evolution of the system from its initial state. This also implies that, in practice, the process of evolution of a complex object can only be represented by itself, both on a quantum and a classical level.

According to algorithmic information theory [1], this process is equivalent to a mathematical object composed of an ordered set of bits processed according to axiomatic rules, in such a way that the information of the object is given by its Kolmogorov complexity, which remains constant over time as long as the process is an isolated system. It should be pointed out that Kolmogorov complexity makes it possible to determine the information contained in an object without previously having an alphabet for the determination of its entropy, as is the case in information theory [2], although both concepts coincide in the limit.

From this point of view, two fundamental questions arise. The first is the evolution of the entropy of the system and the second is the apparent loss of information in the observation process, through which classical reality emerges from quantum reality. This opens a possible line of analysis that will be addressed later.

But going back to the analysis of what is the relationship between classic and quantum states, it is possible to have an intuitive view of how the state (CAT) ends up being disconnected from the state |CAT⟩, analyzing the system qualitatively.

First, it should be noted that virtually 100% of the quantum information contained in the state |CAT⟩ remains hidden within the elementary particles that make up the system. This is a consequence of the fact that the physical-chemical structure [3] of molecules is determined exclusively by the electrons that support their covalent bonds. Next, it must be considered that molecular interaction, on which molecular biology is based, is performed by van der Waals forces and hydrogen bonds, creating a new level of functional disconnection from the underlying layer.

Supported by this functional level appears a new functional structure formed by cellular biology [4], from which living organisms emerge, from unicellular beings to complex beings formed by multicellular organs. It is in this layer that the concept of living being emerges, establishing a new border between the strictly physical and the concept of perception. At this level the nervous tissue [5] appears, allowing the complex interaction between individuals and sustaining new structures and concepts, such as consciousness, culture and social organization, which are not reserved only to human beings, although it is in the latter where the functionality is most complex.

But to the complexity of the functional layers must be added the non-linearity of the laws to which they are subject, a necessary condition for deterministic chaos [6] and one which, as previously justified, can be grounded in algorithmic information theory [1]. This means that any variation in the initial conditions will produce a different dynamic, so that any emulation will end up diverging from the original; this behavior is the justification of free will. In this sense, Heisenberg’s uncertainty principle [7] prevents knowing exactly the initial conditions of the classical system, in any of the functional layers described above. Consequently, all of them will have an irreducible nature and an unpredictable dynamic, determined exclusively by the system itself.

At this point and in view of this complex functional structure, we must ask what the state (CAT) refers to, since in this context the existence of a classical state has been implicitly assumed. The complex functional structure of the object “cat” allows a description at different levels. Thus, the cat object can be described in different ways:

  • As atoms and molecules subject to the laws of physical chemistry.
  • As molecules that interact according to molecular biology.
  • As complex sets of molecules that give rise to cell biology.
  • As sets of cells to form organs and living organisms.
  • As structures of information processing, that give rise to the mechanisms of perception and interaction with the environment that allow the development of individual and social behavior.

As a result, each of these functional layers can be expressed by means of a certain state. Strictly speaking, then, the definition of a unique macroscopic state (CAT) is not correct. Each of these states describes the object according to different functional rules, so it is worth asking what relationship exists between these descriptions and what their complexity is. By arguments analogous to those used to show that the states |CAT⟩ and (CAT) are not equivalent and are uncorrelated with each other, the states that describe the “cat” object at different functional levels will not be equivalent and may to some extent be disconnected from each other.

This behavior is a proof of how reality is structured in irreducible functional layers, in such a way that each one of the layers can be modeled independently and irreducibly, by means of an ordered set of bits processed according to axiomatic rules.

References

[1] P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002 [cs.IT], 2008.
[2] C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 1948.
[3] P. Atkins and J. de Paula, Physical Chemistry, Oxford University Press, 2006.
[4] A. Bray, J. Hopkin, R. Lewis and W. Roberts, Essential Cell Biology, Garland Science, 2014.
[5] D. Purves and G. J. Augustine, Neuroscience, Oxford University Press, 2018.
[6] J. Gleick, Chaos: Making a New Science, Penguin Books, 1988.
[7] W. Heisenberg, “The Actual Content of Quantum Theoretical Kinematics and Mechanics,” Zeitschrift für Physik (translation: NASA TM-77379), vol. 43, no. 3-4, pp. 172-198, 1927.

Reality as an irreducible layered structure

Note: This post is the first in a series in which macroscopic objects will be analyzed from a quantum and classical point of view, as well as the nature of the observation. Finally, all of them will be integrated into a single article.

Introduction

Quantum theory establishes the fundamentals of the behavior of particles and their interaction with each other. In general, these fundamentals are applied to microscopic systems formed by a very limited number of particles. However, nothing indicates that quantum theory cannot be applied to macroscopic objects, since the emerging properties of such objects must be based on the underlying quantum reality. Obviously, there is a practical limitation established by the increase in complexity, which grows exponentially as the number of elementary particles increases.

The initial reference to this approach was made by Schrödinger [1], who indicated that the quantum superposition of states did not represent any contradiction at the macroscopic level. To do this, he used what is known as Schrödinger’s cat paradox, in which the cat could be in a superposition of states, one in which the cat was alive and another in which it was dead. Schrödinger’s original motivation was to raise a discussion about the EPR paradox [2], which claimed to reveal the incompleteness of quantum theory. This question was finally settled by Bell’s theorem [3] and its experimental verification by Aspect [4], making it clear that the entanglement of quantum particles is a reality, on which quantum computation is based [5]. A summary of the aspects related to the realization of a quantum system that emulates Schrödinger’s cat has been made by Auletta [6], although these are restricted to non-macroscopic quantum systems.

But the question that remains is whether quantum theory can be used to describe macroscopic objects and whether the concept of quantum entanglement applies to these objects as well. Contrary to Schrödinger’s position, Wigner argued, through the friend paradox, that quantum mechanics could not have unlimited validity [7]. Recently, Frauchiger and Renner [8] have proposed a virtual experiment (Gedankenexperiment) that shows that quantum mechanics is not consistent when applied to complex objects. 

The Schrödinger cat paradigm will be used to analyze these results from two points of view, with no loss of generality: one as a quantum object and the other as a macroscopic object (in a later post). This will allow their consistency and functional relationship to be determined, leading to the establishment of an irreducible functional structure. As a consequence, it will also be necessary to analyze the nature of the observer within this functional structure (also in later posts).

Schrödinger’s cat as a quantum reality

In the Schrödinger cat experiment there are several entities [1], the radioactive particle, the radiation monitor, the poison flask and the cat. For simplicity, the experiment can be reduced to two quantum variables: the cat, which we will identify as CAT, and the system formed by the radioactive particle, the radiation monitor and the poison flask, which we will define as the poison system PS. 


Schrödinger Cat. (Source: Doug Hatfield https://commons.wikimedia.org/wiki/File:Schrodingers_cat.svg)

These quantum variables can be expressed as [9]: 

            |CAT⟩ = α1|DC⟩ + β1|LC⟩. Quantum state of the cat: dead cat |DC⟩, live cat |LC⟩.

            |PS⟩ = α2|PD⟩ + β2|PA⟩. Quantum state of the poison system: poison deactivated |PD⟩, poison activated |PA⟩.

The quantum state of the Schrödinger cat experiment SCE as a whole can be expressed as: 
               |SCE⟩ = |CAT⟩⊗|PS⟩= α1α2|DC⟩|PD⟩+α1β2|DC⟩|PA⟩+β1α2|LC⟩|PD⟩+β1β2|LC⟩|PA⟩.

Since, for a classical observer, the states |DC⟩|PD⟩ and |LC⟩|PA⟩ are not compatible with observation, the experiment must be prepared in such a way that the quantum states |CAT⟩ and |PS⟩ are entangled [10] [11], so that the wave function of the experiment must be:

               |SCE⟩ = α|DC⟩|PA⟩ + β|LC⟩|PD⟩. 

As a consequence, the observation of the experiment [12] will result in a state:

            |SCE⟩ = |DC⟩|PA⟩, with probability |α|², (poison activated, dead cat).

or:

            |SCE⟩ = |LC⟩|PD⟩, with probability |β|², (poison deactivated, live cat).
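The difference between the product state |CAT⟩⊗|PS⟩ and the entangled state α|DC⟩|PA⟩ + β|LC⟩|PD⟩ can be checked numerically via the Schmidt rank, the number of non-zero singular values of the reshaped state vector. This is a toy sketch; the basis encoding and the equal amplitudes are arbitrary choices:

```python
import numpy as np

# assumed encoding of the two-level bases
DC, LC = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # dead / live cat
PD, PA = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # poison deactivated / activated

a1 = b1 = a2 = b2 = 1 / np.sqrt(2)  # arbitrary normalized amplitudes

cat = a1 * DC + b1 * LC
ps = a2 * PD + b2 * PA

sce_product = np.kron(cat, ps)  # separable state |CAT> ⊗ |PS>

alpha = beta = 1 / np.sqrt(2)
sce_entangled = alpha * np.kron(DC, PA) + beta * np.kron(LC, PD)  # entangled state

def schmidt_rank(state):
    # reshape the 4-component state into a 2x2 matrix; the number of non-zero
    # singular values is 1 for separable states, 2 for entangled ones
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > 1e-12))

print(schmidt_rank(sce_product))    # 1 -> separable
print(schmidt_rank(sce_entangled))  # 2 -> entangled
```

The rank-2 result is what makes the entangled preparation special: no choice of individual |CAT⟩ and |PS⟩ states can reproduce it as a simple product.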

Although from the formal point of view of quantum theory the formulation of the experiment is correct, for a classical observer it presents several objections. One of them is that the experiment requires establishing a priori that the PS and CAT systems are entangled. This is contradictory, since from the point of view of the preparation of the quantum experiment there is no such restriction, so results with quantum states |DC⟩|PD⟩ or |LC⟩|PA⟩ could exist, something totally impossible for a classical observer, assuming in any case that the poison is effective, which is taken for granted in the experiment. Therefore, the SCE experiment is inconsistent, and it is necessary to analyze the root of the incongruence between the SCE quantum system and the result of the observation.

Another objection, which may seem trivial, is that for the SCE experiment to collapse into one of its states the observer OBS must become entangled with the experiment, since the experiment must interact with it. Otherwise, the operation performed by the observer would have no consequence on the experiment. For this reason, this aspect will require more detailed analysis.

Returning to the first objection, from the perspective of quantum theory it may seem possible to prepare the PS and CAT systems in an entangled superposition of states. However, it should be noted that both systems are composed of a huge number of non-entangled quantum subsystems Si, subject to continuous decoherence [13] [14], even though each subsystem Si will internally have an entangled structure. Thus, the CAT and PS systems can be expressed as:

            |CAT⟩ = |SC1⟩ ⊗ |SC2⟩ ⊗…⊗ |SCi⟩ ⊗…⊗ |SCk⟩,

            |PS⟩= |SP1⟩⊗|SP2⟩⊗…⊗|SPi⟩⊗…⊗|SPl⟩, 

in such a way that the observation of a certain subsystem causes its state to collapse, producing no influence on the rest of the subsystems, which will develop an independent quantum dynamics. This makes it unfeasible for the states |LC⟩ and |DC⟩ to be simultaneous, and as a consequence the CAT system cannot be in a superposition of these states. An analogous reasoning can be made for the PS system, although it may seem obvious that functionally it is much simpler.
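The claim that collapsing one non-entangled subsystem leaves the rest untouched can be sketched numerically: in a product state, projecting one factor does not alter the others. The random states and the projection onto |0⟩ below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_qubit():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

# a product state of k non-entangled subsystems
k = 5
subsystems = [random_qubit() for _ in range(k)]
state = subsystems[0]
for s in subsystems[1:]:
    state = np.kron(state, s)

# collapse subsystem 0 onto |0>: apply the projector |0><0| ⊗ I ⊗ ... ⊗ I
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2 ** (k - 1)))
collapsed = P0 @ state
collapsed /= np.linalg.norm(collapsed)

# the remaining k-1 subsystems are unchanged: collapsed = |0> ⊗ (rest)
rest = subsystems[1]
for s in subsystems[2:]:
    rest = np.kron(rest, s)
expected = np.kron(np.array([1.0, 0.0]), rest)

overlap = abs(np.vdot(expected, collapsed))
print(np.isclose(overlap, 1.0))  # True: the other subsystems are untouched
```

If the initial state were entangled across the subsystems instead of a product, this factorization would fail, which is precisely the distinction the text draws between the idealized SCE and a realistic macroscopic system.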

In short, from a theoretical point of view it is possible to have a quantum system equivalent to the SCE, for which all the subsystems must be fully entangled with each other, and in addition the system will require an a priori preparation of its state. However, the emerging reality differs radically from this scenario, so the experiment seems unfeasible in practice. But the most striking fact is that, if the SCE experiment is generalized, the predicted observable reality would be radically different from the reality actually observed.

To better understand the consequences of the quantum state of the SCE system having to be prepared a priori, imagine that the supplier of the poison has replaced its contents with a harmless liquid. Even so, the prepared experiment could still produce a dead cat, killing it without cause.

From these conclusions the question can be raised as to whether quantum theory can explain in a general and consistent way the observable reality at the macroscopic level. But perhaps the question is also whether the assumptions on which the SCE experiment has been conducted are correct. Thus, for example: Is it correct to use the concepts of live cat or dead cat in the domain of quantum physics? Which in turn raises other kinds of questions, such as: Is it generally correct to establish a strong link between observable reality and the underlying quantum reality? 

The conclusion that can be drawn from the contradictions of the SCE experiment is that the scenario of a complex quantum system cannot be treated in the same terms as a simple system. In terms of quantum computation, these correspond, respectively, to systems made up of an enormous number and a limited number of qubits [5]. As a consequence, classical reality will be an irreducible fact which, although based on quantum reality, ends up being disconnected from it. This leads to defining reality in two independent and irreducible functional layers: a quantum reality layer and a classical reality layer. This would justify the criterion established by the Copenhagen interpretation [15] and its statistical nature as a means of functionally disconnecting both realities. Thus, quantum theory would be nothing more than a description of the information that can emerge from an underlying reality, but not a description of that reality itself. At this point, it is important to emphasize that statistical behavior is the means by which the functional correlation between processes can be reduced or eliminated [16], and that it would be the cause of irreducibility.

References

[1] E. Schrödinger, “Die gegenwärtige Situation in der Quantenmechanik,” Naturwissenschaften, vol. 23, pp. 844-849, 1935.
[2] A. Einstein, B. Podolsky and N. Rosen, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?,” Physical Review, vol. 47, pp. 777-780, 1935.
[3] J. S. Bell, “On the Einstein Podolsky Rosen Paradox,” Physics, vol. 1, no. 3, pp. 195-200, 1964.
[4] A. Aspect, P. Grangier and G. Roger, “Experimental Tests of Realistic Local Theories via Bell’s Theorem,” Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.
[5] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2011.
[6] G. Auletta, Foundations and Interpretation of Quantum Mechanics, World Scientific, 2001.
[7] E. P. Wigner, “Remarks on the Mind-Body Question,” in Symmetries and Reflections, Indiana University Press, 1967, pp. 171-184.
[8] D. Frauchiger and R. Renner, “Quantum Theory Cannot Consistently Describe the Use of Itself,” Nature Commun., vol. 9, no. 3711, 2018.
[9] P. Dirac, The Principles of Quantum Mechanics, Oxford University Press, 1958.
[10] E. Schrödinger, “Discussion of Probability Relations between Separated Systems,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 31, no. 4, pp. 555-563, 1935.
[11] E. Schrödinger, “Probability Relations between Separated Systems,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 32, no. 3, pp. 446-452, 1936.
[12] M. Born, “On the Quantum Mechanics of Collision Processes,” Zeit. Phys. (D. H. Delphenich translation), vol. 37, pp. 863-867, 1926.
[13] H. D. Zeh, “On the Interpretation of Measurement in Quantum Theory,” Found. Phys., vol. 1, no. 1, pp. 69-76, 1970.
[14] W. H. Zurek, “Decoherence, einselection, and the quantum origins of the classical,” Rev. Mod. Phys., vol. 75, no. 3, pp. 715-775, 2003.
[15] W. Heisenberg, Physics and Philosophy: The Revolution in Modern Science, Harper, 1958.
[16] E. W. Weisstein, “MathWorld,” [Online]. Available: http://mathworld.wolfram.com/Covariance.html.

Why does the rainbow have 7 colors?

Published on OPENMIND August 8, 2018

Color as a physical concept

Visible light, heat, radio waves and other types of radiation all have the same physical nature and consist of a flow of particles called photons. The photon or “light quantum” was proposed by Einstein, work for which he was awarded the Nobel Prize in 1921, and it is one of the elementary particles of the standard model, belonging to the boson family. The fundamental characteristic of a photon is its capacity to transfer energy in quantized form, which is determined by its frequency according to the expression E = h∙ν, where h is the Planck constant and ν the frequency of the photon.

Electromagnetic spectrum

Thus, we can find photons of very low frequencies, located in the band of radio waves, up to photons of very high energy called gamma rays, as shown in the following figure, forming a continuous range of frequencies that constitutes the electromagnetic spectrum. Since the photon can be modeled as a sinusoid traveling at the speed of light c, the length of a complete cycle is called the photon wavelength λ, so the photon can be characterized either by its frequency or by its wavelength, since λ = c/ν. It is common to use the term color as a synonym for frequency, since the color of light perceived by humans is a function of frequency. However, as we are going to see, color is not strictly physical but a consequence of the process of measuring and interpreting information, which makes it an emerging reality built on the physical reality of electromagnetic radiation.

Structure of an electromagnetic wave

But before addressing this issue, it should be considered that to detect photons efficiently it is necessary to have a detector called an antenna, whose size must be similar to the wavelength of the photons.
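The relations E = h∙ν and λ = c/ν can be combined in a few lines to obtain the frequency and energy of a photon from its wavelength, here for two illustrative visible wavelengths (the constants are standard CODATA values):

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon(wavelength_nm):
    lam = wavelength_nm * 1e-9
    nu = C / lam             # frequency, from lambda = c / nu
    energy_ev = H * nu / EV  # E = h * nu, expressed in eV
    return nu, energy_ev

for wl in (700, 400):  # deep red and violet
    nu, e = photon(wl)
    print(f"{wl} nm: nu = {nu:.2e} Hz, E = {e:.2f} eV")
```

The wavelength itself (hundreds of nanometers for visible light) is what sets the size scale of the molecular “antennas” discussed above.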

Color perception by humans

The human eye is sensitive to wavelengths ranging from deep red (700 nm; a nanometer is 10^−9 meters) to violet (400 nm). This requires receiving antennas on the order of hundreds of nanometers in size! But for nature this is not a big problem, as complex molecules can easily be this size. In fact, for color vision the human eye is endowed with three types of photoreceptor proteins, which produce a response as shown in the following figure.

Response of photoreceptor cells of the human retina

Each of these types configures a type of photoreceptor cell in the retina, which due to its morphology are called cones. The photoreceptor proteins are located in the cell membrane, so that when they absorb a photon they change shape, opening up channels in the cell membrane that generate a flow of ions. After a complex biochemical process, a flow of nerve impulses is produced that is preprocessed by several layers of neurons in the retina that finally reach the visual cortex through the optic nerve, where the information is finally processed.

But in this context, the point is that the retinal cells do not measure the wavelength of the photons of the stimulus. What they do instead is convert a stimulus of a certain wavelength into three parameters called L, M, S, which are the responses of each of the types of photoreceptor cells to the stimulus. This has very interesting implications, as it allows us to explain aspects such as:

  • The reason why the rainbow has 7 colors.
  • The possibility of synthesizing the color by means of additive and subtractive mixing.
  • The existence of non-physical colors, such as white and magenta.
  • The existence of different ways of interpreting color according to the species.

To understand this, let us imagine that we are given the response of a measurement system relating L, M, S to wavelength, and that we are asked to establish a correlation between them. The first thing we can see is that there are 7 distinct zones along the wavelength axis: 3 ridges and 4 valleys. 7 patterns! This explains why we perceive the rainbow as composed of 7 colors, an emergent reality resulting from information processing that transcends physical reality.

But what answer would a bird give us if we asked it about the number of colors of the rainbow? If it could answer at all, it would tell us nine! This is because birds have a fourth type of photoreceptor positioned in the ultraviolet, so their perception system establishes 9 regions in the light perception band. And this leads us to ask: what would be the chromatic range perceived by our hypothetical bird, or by species that have only a single type of photoreceptor? The result is a simple exercise in combinatorics!
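This combinatorics can be sketched with a toy calculation. Under the simplifying assumption that n overlapping response curves produce n ridges and n+1 valleys, the number of perceptible bands is 2n+1 (an illustrative rule of thumb, not a physiological law):

```python
def perceived_bands(n_photoreceptor_types: int) -> int:
    """Toy model: n overlapping response curves yield n ridges
    plus n + 1 valleys, i.e. 2n + 1 distinguishable bands."""
    ridges = n_photoreceptor_types
    valleys = n_photoreceptor_types + 1
    return ridges + valleys

print(perceived_bands(3))  # humans, three cone types: 7 rainbow colors
print(perceived_bands(4))  # birds, four cone types: 9
```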

On the other hand, the existence of three types of photoreceptors in the human retina makes it possible to synthesize the chromatic range quite precisely by the additive combination of three colors, red, green and blue, as is done in video screens. In this way, it is possible to produce at each point of the retina an L, M, S response similar to that produced by a real stimulus, by applying a weighted mixture of photons of red, green and blue wavelengths.
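The idea of matching the L, M, S response with three primaries can be sketched numerically. The Gaussian sensitivity curves, peak wavelengths and primary wavelengths below are illustrative assumptions (real cone fundamentals have different shapes), but the linear-algebra argument is the same: three primaries suffice because the retina reports only three numbers.

```python
import numpy as np

def cone_response(wavelength_nm, peak_nm, width_nm=45.0):
    """Hypothetical Gaussian sensitivity curve (illustrative only)."""
    return np.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

PEAKS = (560.0, 530.0, 420.0)  # assumed L, M, S peak wavelengths

def lms(wavelength_nm):
    """L, M, S response to a monochromatic stimulus."""
    return np.array([cone_response(wavelength_nm, p) for p in PEAKS])

# LMS responses of three assumed display primaries (columns: R, G, B)
primaries = np.column_stack([lms(620.0), lms(530.0), lms(450.0)])

target = lms(580.0)                           # a pure "yellow" stimulus
weights = np.linalg.solve(primaries, target)  # primary intensities that mimic it

# The weighted mix of primaries produces the same L, M, S response,
# even though no 580 nm photon is present in the mix.
assert np.allclose(primaries @ weights, target)
print(weights)
```

Note that for some target wavelengths the solved weights can be negative, which corresponds to colors outside the gamut of the chosen primaries.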

Similarly, it is possible to synthesize color by the subtractive or pigmentary mixing of three colors, magenta, cyan and yellow, as in oil paint or printers. And this is where the virtuality of color shows clearly: there are no magenta photons, since this stimulus is a mixture of blue and red photons. The same happens with white, as no individual photon produces this stimulus; white is the perception of a mixture of photons distributed across the visible band, and in particular of a mixture of red, green and blue photons.

In short, the perception of color is a clear example of how reality emerges as a result of information processing. Thus, we can see how a given interpretation of the physical information of the visible electromagnetic spectrum produces an emergent reality, based on a much more complex underlying reality.

In this sense, we could ask ourselves what an android with a precise wavelength measurement system would think of the images we synthesize in painting or on video screens. It would surely answer that they do not correspond to the original images, something that for us is practically imperceptible. And this connects with a seemingly unrelated subject: the concept of beauty and aesthetics. The truth is that when we are not able to establish patterns or categories in the information, we perceive it as noise or disorder. Something unpleasant or unsightly!

Information and knowledge

What is information? 

If we stick to its definition, as found in dictionaries, we can see that it always refers to a set of data, often adding that these are sorted and processed. But we are going to see that such definitions are imprecise, and even erroneous in that they assimilate information to the concept of knowledge.

One of the things that information theory has taught us is that any object (news, profile, image, etc.) can be expressed precisely by a set of bits. Therefore, the formal definition of information is the ordered set of symbols that represent the object and that in their basic form constitute an ordered set of bits. However, information theory itself surprisingly reveals that information has no meaning, which is technically known as “information without meaning”.

This seems totally contradictory, especially if we take into account the conventional idea of what is considered information. However, it is easy to understand. Let us imagine that we find a book written in symbols that are totally unknown to us. We will immediately assume that it is a text written in a language unknown to us since, in our culture, that is what book-shaped objects usually contain. Thus, we begin to investigate and conclude that it is an unknown language, without a Rosetta stone relating it to any known language. Therefore, we have information but we do not know its message nor, as a result, the knowledge contained in the text. We can even classify the symbols that appear in the text and assign them a binary code, as we do in digitization processes, converting the text into an ordered set of bits.

However, to know the content of the message we must analyze the information through a process that must include the keys that allow extracting the content of the message. It is exactly the same as if the message were encrypted, so the message will remain hidden if the decryption key is not available, as shown by the one-time pad encryption technique.
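The one-time pad mentioned above can be sketched in a few lines of Python. The message and the XOR construction are illustrative; the essential point is that, without the key, the ciphertext bits alone carry no recoverable meaning, since every plaintext of the same length corresponds to some key.

```python
import secrets

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # random key, used only once

# Encryption and decryption are the same XOR operation.
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == message  # with the key, the content is retrieved exactly
```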

Ray Solomonoff, co-founder of Algorithmic Information Theory together with Andrey Kolmogorov. 

What is knowledge?

This clearly shows the difference between information and knowledge: information is the set of data (bits) that describe an object, while knowledge is the result of a process applied to this information and materialized in reality. In fact, reality is always subject to this scheme.

For example, suppose we are told a certain story. From the sound pressure applied to our eardrums we will end up extracting the content of the story, and we will also be able to experience subjective sensations, such as pleasure or sadness. There is no doubt that the original stimulus can be represented as a set of bits, considering that audio information can be digital content, e.g. MP3.

But for knowledge to emerge, information needs to be processed. In fact, in the previous case it is necessary to involve several different processes, among which we must highlight:

  • Biological processes responsible for the transduction of information into nerve stimuli.
  • Extraction processes of linguistic information, established by the rules of language in our brain by learning.
  • Extraction processes of subjective information, established by cultural rules in our brain by learning.

In short, knowledge is established by means of information processing. And here the debate may arise as a consequence of the diversity of processes, of their structuring, but above all because of the nature of the ultimate source from which they emerge. Countless examples can be given. But, since doubts can surely arise that this is the way reality emerges, we can try to look for a single counterexample!

A fundamental question is: can we measure knowledge? The answer is yes, and it is provided by algorithmic information theory (AIT) which, based on information theory and computability theory, allows us to establish the complexity of an object by means of the Kolmogorov complexity K(x), defined as follows:

For a finite object x, K(x) is defined as the length of the shortest effective binary description of x.

Without going into complex theoretical details, it is important to mention that K(x) is, up to an additive constant, an intrinsic property of the object and not of the evaluation process. But don’t panic! In practice, we are familiar with this idea.

Let’s imagine audio, video, or general bitstream content. We know that these can be compressed, which significantly reduces their size. This means that the complexity of these objects is not determined by the number of bits of the original sequence, but by the result of the compression since through an inverse decompression process we can recover the original content. But be careful! The effective description of the object must include the result of the compression process and the description of the decompression process, needed to retrieve the message.
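This idea can be illustrated with a standard compressor (here zlib, as a practical stand-in; note that a compressor only gives an upper bound on K(x), which is itself uncomputable):

```python
import zlib

# A highly regular sequence: low complexity, so it compresses very well.
original = b"0123456789" * 1000

compressed = zlib.compress(original, level=9)
assert len(compressed) < len(original)

# The effective description must include the compressed data *and* the
# decompression procedure, since the original content must be recoverable.
assert zlib.decompress(compressed) == original

print(len(original), "->", len(compressed))
```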

Complexity of digital content, equivalent to a compression process

A similar scenario is the modeling of reality, where physical processes stand out. Thus, a model is a compact definition of a reality. For example, Newton’s universal gravitation model is the most compact definition of the behavior of a gravitational system in a non-relativistic context. In this way, the model, together with the rules of calculus and the information that defines the physical scenario, will be the most compact description of the system and constitutes what we call an algorithm. It is interesting to note that this is the formal definition of algorithm, and that until these mathematical concepts were developed in the first half of the 20th century by Kleene, Church and Turing, the concept was not fully established.

Alan Turing, one of the fathers of computing

It must be considered that the physical machine that supports the process is also part of the description of the object, providing the basic functions. These are axiomatically defined and in the case of the Turing machine correspond to an extremely small number of axiomatic rules.

Structure of the models, equivalent to a decompression process

In summary, we can say that knowledge is the result of information processing. Therefore, information processing is the source of reality. But this raises the question: Since there are non-computable problems, to what depth is it possible to explore reality? 

What is the nature of the information?

Published on OPENMIND May 7, 2018

A historical perspective

Classically, information was considered a matter of human-to-human transactions. However, throughout history this concept has been expanded, not so much by the development of mathematical logic as by technological development. A substantial change occurred with the arrival of the telegraph in the first half of the 19th century. Thus, “sending” went from being strictly material to a broader concept, as many anecdotes make clear. Among the most frequent were the intention of many people to send material things by telegram, or the anger of certain customers who argued that the telegraph operator had not sent the message because he returned the message note to them.

Currently, “information” is an abstract concept based on the theory of information, created by Claude Shannon in the mid-twentieth century. However, computer technology is what has contributed most to the concept of “bit” being something totally familiar. Moreover, concepts such as virtual reality, based on the processing of information, have become everyday terms.

The point is that information is ubiquitous in all natural processes, physics, biology, economics, etc., in such a way that these processes can be described by mathematical models and ultimately by information processing. This makes us wonder: What is the relationship between information and reality? 

Information as a physical entity

It is evident that information emerges from physical reality, as computer technology demonstrates. The question is whether information is fundamental to physical reality or simply a product of it. In this sense, there is evidence of the strict relationship between information and energy.

Claude Elwood Shannon was a mathematician, electrical engineer and American cryptographer remembered as «the father of information theory» / Image: DobriZheglov

Thus, the Shannon-Hartley theorem of information theory establishes the minimum amount of energy required to transmit a bit, known as the Bekenstein bound. Following a different path, and in order to determine the energy consumption of the computation process, Rolf Landauer established the minimum amount of energy needed to erase a bit, kT·ln 2, a result known as the Landauer principle; its value coincides exactly with the Bekenstein bound, and it is a function of the absolute temperature of the medium.
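The limit kT·ln 2 is easy to evaluate numerically (room temperature T = 300 K is an illustrative choice):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy in joules needed to erase one bit: k*T*ln 2."""
    return k_B * temperature_kelvin * math.log(2)

energy = landauer_limit(300.0)  # room temperature
print(energy)                   # about 2.9e-21 J per bit
```

Current electronics dissipate many orders of magnitude more energy than this per bit operation.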

These results allow determining the maximum capacity of a communication channel and the minimum energy required by a computer to perform a given task. In both cases, the inefficiency of current systems is evidenced, whose performance is extremely far from theoretical limits. But in this context, the really important thing is that Shannon-Hartley’s theorem is a strictly mathematical development, in which the information is finally coded on physical variables, leading us to think that information is something fundamental in what we define as reality.

Both cases show the relationship between energy and information, but are not conclusive in determining the nature of information. What is clear is that for a bit to emerge and be observed on the scale of classical physics requires a minimum amount of energy determined by the Bekenstein bound. So, the observation of information is something related to the absolute temperature of the environment.

This behavior is fundamental in the process of observation, as it becomes evident in the experimentation of physical phenomena. A representative example is the measurement of the microwave background radiation produced by the big bang, which requires that the detector located in the satellite be cooled by liquid helium. The same is true for night vision sensors, which must be cooled by a Peltier cell. On the contrary, this is not necessary in a conventional camera since the radiation emitted by the scene is much higher than the thermal noise level of the image sensor.

Cosmic Microwave Background (CMB). NASA’s WMAP satellite

This proves that information emerges from physical reality. But we can go further, as information is the basis for describing natural processes. Therefore, something that cannot be observed cannot be described. In short, every observable is based on information, something that is clearly evident in the mechanisms of perception.

From the emerging information it is possible to establish mathematical models that hide the underlying reality, suggesting a functional structure in irreducible layers. A paradigmatic example is the theory of electromagnetism, which accurately describes electromagnetic phenomena without relying on the photon’s existence; indeed, the existence of photons cannot be inferred from it. This is generally extendable to all physical models.

Another indication that information is a fundamental entity of what we call reality is the impossibility of transferring information faster than light. This would make reality a non-causal and inconsistent system. Therefore, from this point of view information is subject to the same physical laws as energy. And considering a behavior such as particle entanglement, we can ask: How does information flow at the quantum level?

Is information the essence of reality?

Based on these clues, we could hypothesize that information is the essence of reality in each of the functional layers in which it is manifested. Thus, for example, if we think of space-time, its observation is always indirect, through the properties of matter-energy, so we could consider it to be nothing more than the emergent information of a more complex underlying reality. This gives an idea of why the vacuum remains one of the great enigmas of physics. This kind of argument leads us to ask: What is it and what do we mean by reality?

Space-Time perception

From this perspective, we can ask what conclusions we could reach if we analyze what we define as reality from the point of view of information theory and, in particular, from  the algorithmic information theory and the theory of computability. All this without losing sight of the knowledge provided by the different areas that study reality, especially physics.

 

Reality as emerging information

What is reality?

The idea that reality may be nothing more than the result of emerging information is not at all novel. Plato, in what is known as the allegory of the cave, describes how reality is perceived by a group of humans chained in a cave who, from birth, observe reality through the shadows projected on a wall.

Modern version of the allegory of the cave

It is interesting to note that when we refer to perception, anthropic vision plays an important role, which can create some confusion by associating perception with human consciousness. To clarify this point, let’s imagine an automaton of artificial vision. In the simplest case, it will be equipped with image sensors, processes for image processing and a database of patterns to be recognized. Therefore, the system is reduced to information encoded as a sequence of bits and to a set of processes, defined axiomatically, that convert information into knowledge.

Therefore, the acquisition of information always takes place by physical processes, which in the case of the automaton are materialized by means of an image sensor based on electronic technology and in the case of living beings by means of molecular photoreceptors. As algorithmic information theory shows us, this information has no meaning until it is processed, extracting patterns contained in it.

As a result, we can draw general conclusions about the process of perception. Thus, the information can be obtained and analyzed with different degrees of detail, giving rise to different layers of reality. This is what gives humans a limited, and sometimes distorted, view of reality.

But in the case of physics, the scientific procedure aims to solve this problem by rigorously contrasting theory and experimentation. This leads to the definition of physical models such as the theory of electromagnetism or Newton’s theory of universal gravitation that condense the behavior of nature to a certain functional level, hiding a more complex underlying reality, which is why they are irreducible models of reality. Thus, Newton’s theory of gravitation models the gravitational behavior of massive bodies without giving a justification for it.

Today we know that the theory of general relativity gives an explanation to this behavior, through the deformation of space-time by the effect of mass, which in turn determines the movement of massive bodies. However, the model is again a description limited to a certain level of detail, proposing a space-time structure that may be static, expansive or recessive, but without giving justification for it. Neither does it establish a link with the quantum behavior of matter, which is one of the objectives of the unification theories. What we can say is that all these models are a description of reality at a certain functional level.

Universal Gravitation vs. Relativistic Mechanics

Reality as information processing

But the question is: What does this have to do with perception? As we have described, perception is the result of information processing, but this is a term generally reserved for human behavior, which entails a certain degree of subjectivity or virtuality. In short, perception is a mechanism to establish reality as the result of an interpretation process of information. For this reason, we handle concepts such as virtual reality, something that computers have boosted but that is nothing new and that we can experiment through daydreaming or simply by reading a book.

Leaving aside a controversial issue such as the concept of consciousness: What is the difference between the interaction of two atoms, two complex molecules or two individuals? Let’s look at the similarities first. In all these cases, the two entities exchange and process information, in each particular case making a decision to form a molecule, synthesize a new molecule or decide to go to the cinema. The difference is the information exchanged and the functionality of each entity. Can we make any other difference? Our anthropic vision tells us that we humans are superior beings, which makes a fundamental difference. But let’s think of biology: This is nothing more than a complex interaction between molecules, to which we owe our existence!

We could argue that in the case where human intelligence intervenes the situation is different. However, the structure of the three cases is the same, so the information transferred between the entities, which as far as we know have a quantum nature, is processed with a certain functionality. The difference that can be seen is that in the case of human intervention we say that functionality is intelligent. But we must consider that it is very easy to cheat with natural language, as it becomes clear when analyzing its nature.

In short, one could say that reality is the result of emerging information and its subsequent interpretation by means of processes, whose definition is always axiomatic, at least as far as knowledge reaches.

Perhaps, all this is very abstract so a simple example, which we find in advertising techniques, can give us a more intuitive idea. Let’s suppose an image whose pixels are actually images that appear when we zoom in, as shown in the figure.

Perception of a structure in functional layers

For an observer with limited visual capacity, only a reality showing a specific scene of a city will emerge. But an observer with much greater visual acuity, or with an appropriate measuring instrument, will observe a much more complex reality. This example shows that the process of observing a mathematical object formed by a sequence of bits can be structured into irreducible functional layers, depending on the processes used to interpret the information. Since everything observable in our Universe seems to follow this pattern, we can ask ourselves: is this functional structure the foundation of our Universe?

Biology as an axiomatic process

The replication mechanisms of living beings can be compared with the self-replication of automatons in the context of computability theory. In particular, DNA replication, analyzed from the perspective of the recursion theorem, indicates that its replication structure goes beyond biology and the quantum mechanisms that support it, as it is analyzed in the article Biology as an Axiomatic Process.

Physical chemistry establishes the principles by which atoms interact with each other to form molecules. In the inorganic world the resulting molecules are relatively simple and do not allow a complex functional structure to be established. In the organic world, on the other hand, molecules can be made up of thousands or even millions of atoms and have complex functionality. Particularly noteworthy is what is known as molecular recognition, through which molecules interact with each other selectively, and which is the basis of biology.

Molecular recognition plays a fundamental role in the structure of DNA, in the translation of the genetic code of DNA into proteins and in the biochemical interaction of proteins, which ultimately form the basis on which living beings are based.

The detailed study of these molecular interactions makes it possible to describe the functionality of the processes, in such a way that it is possible to establish formal models, to such an extent that they can be used as a computing technology, as is the case of DNA-based computing.

From this perspective, this allows us to ask if the process of information is something deeper and if in reality it is the foundation of biology itself, according to what is established by the principle of reality.

For this purpose, this section aims to analyze the basic processes on which biology is based, in order to establish a link with axiomatic processing and thus investigate the nature of biological processes. For this, it is not necessary to describe in detail the biological mechanisms covered in the literature. We will simply describe their functionality, so that they can be identified with the theoretical foundations of information processing. To this end, we will explain the mechanisms on which DNA replication and protein synthesis are based.

DNA and RNA molecules are polymers formed from deoxyribose and ribose nucleotides, respectively, bound by phosphates. To this nucleotide chain, one of four possible nucleobases can be linked. There are five different nucleobases: adenine (A), guanine (G), cytosine (C), thymine (T) and uracil (U). In the case of DNA, the bases that can be coupled by covalent bonds to the nucleotides are A, G, C and T, whereas in the case of RNA they are A, G, C and U. As a consequence, the molecules are structured in a helix shape, the bases fitting together in a precise and compact way due to the shape of their electronic clouds.

The helix structure allows the bases of two different strands to be bound together by hydrogen bonds, forming the pairs A-T and G-C in the case of DNA, and A-U and G-C in the case of RNA, as shown in the following figure.

Base-pairing of nucleic acids in DNA

As a result, the DNA molecule is formed by a double helix, in which two chains of nucleotide polymers wind around one another, held together by the hydrogen bonds between the bases. Thus, each strand of the DNA molecule contains the same genetic code, one being the negative of the other.

Double helix structure of DNA molecule

The genetic information of an organism, called its genome, is not contained in a single DNA molecule, but is organized into chromosomes. These are made up of DNA strands bound together by proteins. In the case of humans, the genome is formed by 46 chromosomes, and the number of bases in the DNA molecules that compose it is about 3×10⁹. Since each base can be encoded with 2 bits, the human genome, considered as an information object, amounts to 6×10⁹ bits.
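The arithmetic behind this figure is straightforward (the genome size used is the approximate value from the text):

```python
bases = 3e9        # approximate number of bases in the human genome
bits_per_base = 2  # four possible bases -> log2(4) = 2 bits each

total_bits = bases * bits_per_base
total_bytes = total_bits / 8

print(total_bits)   # 6e9 bits
print(total_bytes)  # 7.5e8 bytes, roughly 750 MB uncompressed
```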

The information contained in the genes is the basis for the synthesis of proteins, which are responsible for executing and controlling the biochemistry of living beings. Proteins are formed by the bonding of amino acids through covalent bonds, following the sequences of bases contained in the DNA. There are 20 amino acids and, since each base encodes 2 bits, 3 bases (6 bits, 64 combinations) are needed to encode each of them. This means there is some redundancy in the assignment of base sequences to amino acids, in addition to control codes for the synthesis process (Stop), as shown in the following table.

Translation of nucleic acids (Codons) to amino acids
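The counting argument behind the codon length can be checked directly: two bases give too few combinations to encode 20 amino acids plus the Stop signal, while three bases give 64, which explains the redundancy of the table above.

```python
n_bases = 4          # A, G, C, U
symbols_needed = 21  # 20 amino acids plus the Stop signal

# Two bases: 16 codes, not enough. Three bases: 64 codes, enough,
# leaving 64 - 21 = 43 spare codes, hence the redundancy.
assert n_bases ** 2 < symbols_needed <= n_bases ** 3
print(n_bases ** 3)  # 64
```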

However, protein synthesis is not done directly from DNA, since it requires the intermediation of RNA. This involves two different types of RNA molecules: messenger RNA (mRNA) and transfer RNA (tRNA). The first step is the synthesis of mRNA from DNA. This process is called transcription: the information corresponding to a gene is copied into the mRNA molecule through a process of recognition between the base molecules, carried out by the hydrogen bonds, as shown in the following figure.

DNA transcription

Once the mRNA molecule is synthesized, the tRNA molecule is responsible for mediating between mRNA and amino acids to synthesize proteins, for which it has two specific molecular mechanisms. On one end, tRNA has a sequence of three bases called the anticodon. On the opposite end, tRNA binds to a specific amino acid, according to the table translating base sequences into amino acids. In this way, tRNA is able to translate mRNA into a protein, as shown in the figure below.

Protein synthesis (mRNA translation)

But undoubtedly the most complex process is DNA replication, in which each molecule produces two identical replicas. Replication is performed by unwinding the two strands of the molecule and inserting the complementary bases on each of them, in a way similar to that shown for mRNA synthesis. DNA replication is controlled by enzymatic processes supported by proteins. Without going into detail, and in order to show its complexity, the table below lists the proteins involved in the replication process and their roles.

The role of proteins in the DNA replication process

The processes described above are known as the central dogma of molecular biology and are usually represented schematically as shown in the following figure. It also depicts the reverse transcription that occurs in retroviruses, which synthesizes a DNA molecule from RNA.

Central dogma of molecular biology

The biological process from the perspective of computability theory

Molecular processes supported by DNA, RNA and proteins can be considered, from an abstract point of view, as information processes: input sentences corresponding to a language are processed, resulting in new output sentences. Thus, the following languages can be identified:

  • DNA molecule. A sentence consisting of a sequence of characters from a 4-symbol alphabet.
  • RNA molecule (protein synthesis). A sentence consisting of a sequence of characters from a 21-symbol alphabet (20 amino acids plus the Stop signal).
  • RNA molecule (reverse transcription). A sentence consisting of a sequence of characters from a 4-symbol alphabet.
  • Protein molecule. A sentence consisting of a sequence of characters from a 20-symbol alphabet.

This information is processed by the machinery established by the physicochemical properties of control molecules. To better understand this functional structure, it is advisable to modify the scheme corresponding to the central dogma of biology. To do this, we must represent the processes involved and the information that flows between them, as shown in the following block diagram.

Functional structure of DNA replication

This structure highlights the flow of information between processes, such as DNA and RNA sentences, where the functional blocks of information processing are the following:

  • PDNA. Replication process. The functionality of this process is determined by the proteins involved in DNA synthesis, producing two replicas of DNA from a single molecule.
  • PRNA. Transcription process. It synthesizes an RNA molecule from a gene encoded in DNA.
  • PProt. Translation process. It synthesizes a protein from an RNA molecule.
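The processes PRNA and PProt above can be sketched as plain string processing. The codon fragment and the template strand below are illustrative (a full codon table has 64 entries); the point is that transcription and translation behave like well-defined mappings between languages.

```python
# DNA template base -> mRNA base (transcription pairing rules)
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

# Fragment of the standard genetic code (illustrative subset)
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "Stop"}

def transcribe(dna_template: str) -> str:
    """P_RNA: synthesize an mRNA sentence from a DNA template strand."""
    return "".join(COMPLEMENT[base] for base in dna_template)

def translate(mrna: str) -> list:
    """P_Prot: read the mRNA three bases at a time until a Stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "Stop":
            break
        protein.append(residue)
    return protein

mrna = transcribe("TACAAACCGATT")  # hypothetical template strand
print(mrna)                        # AUGUUUGGCUAA
print(translate(mrna))             # ['Met', 'Phe', 'Gly']
```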

This structure clearly shows how information emerges from biological processes, something that seems to be ubiquitous in all natural models and that allows the implementation of computer systems. In all cases this capacity is ultimately supported by quantum physics; in the particular case of biology, it arises from the physicochemical properties of molecules, which are themselves determined by quantum physics. Therefore, the information process is something that emerges from an underlying reality and ultimately from quantum physics. This is true as far as knowledge goes.

This means that, although there is a strong link between reality and information, information is simply an emerging product of reality. But biology provides a clue to the intimate relationship between reality and information, which are ultimately indistinguishable concepts. If we look at the DNA replication process, we see that DNA is produced in several stages of processing:

DNA → RNA → Proteins → DNA.

We could consider this to be a specific feature of the biological process. However, computability theory indicates that the replication process is subject to logical rules deeper than the physical processes that support replication. In computability theory, the recursion theorem determines that the replication of information requires the intervention of at least two independent processes.
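The flavor of the recursion theorem can be glimpsed in a classic quine: a program that prints its own source code. As an analogy (not a proof), the program splits into a passive description, the string s, and an active process that interprets it, mirroring the data/machinery split that the theorem requires.

```python
# Passive description ("genome"): a template that contains itself via %r.
s = 's = %r\nprint(s %% s)'

# Active process ("machinery"): interpreting the template reproduces
# the complete source code of the program.
print(s % s)
```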

This shows that DNA replication is subject to abstract rules that must be satisfied not only by biology, but by every natural process. Therefore, the physical foundations that support biological processes must satisfy this requirement. Consequently, this shows that information processing is essential to what we understand by reality.

Natural language: A paradigm of axiomatic processing

The Theory of Computation (TC) aims to establish computational models and determine the limits of what is computable and the complexity of a problem when it is computable. The formal models established by TC are based on abstract systems ranging from simple models, such as automatons, to the general computer model established by the Turing Machine (TM).

Formally, the concept of algorithm is based on the TM, so that each possible implementation performs a specific function that we call an algorithm. TC demonstrates that it is possible to build an idealized machine, called the Universal Turing Machine (UTM), capable of executing every computable algorithm. Commercial computers are equivalent to a UTM, with the difference that their memory and runtime are limited, whereas in the UTM these resources are unlimited.
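A minimal sketch of this universality in Python (the routine `run_tm` and the machine `flip` are illustrative inventions, not a standard API): one fixed routine executes any Turing machine supplied as data, and a `max_steps` bound plays the role of the limited resources of a commercial computer.

```python
def run_tm(delta, tape, state="q0", halt="H", max_steps=None):
    # Universal routine: executes any machine given by its transition
    # table delta[(state, symbol)] = (new_state, write, move).
    tape = dict(enumerate(tape))   # sparse tape, "_" is the blank symbol
    head, steps = 0, 0
    while state != halt:
        if max_steps is not None and steps >= max_steps:
            return None            # resource-limited, like a real computer
        symbol = tape.get(head, "_")
        state, tape[head], move = delta[(state, symbol)]
        head += 1 if move == "R" else -1
        steps += 1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A specific machine, encoded as data: flip every bit, halt on blank.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("H", "_", "R"),
}
```

For example, `run_tm(flip, "0110")` halts and returns `"1001"`, while `run_tm(flip, "0110", max_steps=2)` gives up, as a bounded machine must.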

But the question we can ask is: what does this have to do with language? The answer is simple. In TC, a language L(TM) is defined as the set of bit sequences that a given TM “accepts”, where “accept” means that the TM analyzes the input sequence and reaches the Halt state. Consequently, a language is the set of mathematical objects accepted by a given TM.

Without going into details that can be consulted in the specialized literature, TC classifies languages into two basic types, as shown in the figure. A language is Turing-decidable (DEC) when the TM accepts the sequences belonging to the language and rejects the rest, reaching the Halt state in both cases. In contrast, a language is Turing-recognizable (RE) if it is the language of some TM; for languages belonging to RE but not to DEC, the TM does not reach the Halt state when the input sequence does not belong to the language.
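The difference can be sketched in Python (both functions are toy illustrations chosen here): a decider always halts, while a recognizer written as an unbounded search halts only on members of its language. The second example accepts n when its Collatz trajectory reaches 1; on a hypothetical number whose trajectory never did, it would loop forever.

```python
def decider_even_ones(w):
    # Decider for the regular language {w : w contains an even number
    # of 1s}: halts on every input, accepting or rejecting.
    return w.count("1") % 2 == 0

def collatz_recognizer(n):
    # Recognizer-style unbounded search: halts (accepts) if the Collatz
    # trajectory of n reaches 1. If some n never reached 1 (none is
    # known), the loop would run forever -- the behaviour of a
    # recognizer on a sequence outside its language.
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return True
```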

It is necessary to emphasize that there are sequences that are not recognized by any TM. Taking the formal definition of language into account, they should therefore not be considered languages at all, although in general they are defined as non-RE languages. It is important to note that this concept is closely related to Gödel’s incompleteness theorem. These correspond to the set of undecidable or unsolvable problems, whose cardinality is greater than that of the natural numbers.

Within DEC languages, two types can be identified: regular and context-free (CFL). Regular languages are those composed of a set of sequences on which the TM can decide individually, so they do not have a nested grammatical structure. Examples are the languages of the automatons we handle every day, such as elevators, device controls, etc. CFLs are those that have a formal structure (grammar) in which language elements can be nested recursively. In general, we can liken programming languages, such as Java or C++, to CFLs. This is not strictly true, but it will make the exposition of certain concepts easier.
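The gap between the two types can be illustrated with a short Python sketch (both deciders are toy examples): a two-state automaton suffices for the regular language (ab)*, whereas balanced parentheses, a canonical CFL, need a counter, i.e., unbounded memory that no finite automaton has.

```python
def is_ab_star(w):
    # Regular language (ab)*: decidable by a two-state finite automaton.
    state = 0
    for c in w:
        if state == 0 and c == "a":
            state = 1
        elif state == 1 and c == "b":
            state = 0
        else:
            return False
    return state == 0

def is_balanced(w):
    # Context-free language of balanced parentheses: requires a counter
    # that can grow with the nesting depth.
    depth = 0
    for c in w:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0
```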

But the question is: what does this have to do with natural language? The answer is easy again. Natural language is, in principle, a Turing-decidable language, and the proof is trivial. Maybe a few decades ago this was not so clear, but nowadays information technology shows it to us plainly, without the need for theoretical knowledge. On the one hand, natural language is a sequence of bits, since both spoken and written language are coded as bit sequences in audio and text files, respectively. On the other hand, humans do not loop forever when we get a message, at least not permanently ;-).

However, it can be argued that we do not reach the Halt state either. But in this context this does not mean that we literally end our existence, although there are messages that kill! It means that information processing concludes and that, as a result, we can make a decision and tackle a new task.

Therefore, from an operational or practical point of view, natural language is Turing-decidable. But we can find arguments in conflict with this, which materialize in the form of contradictions. Although it may seem surprising, this also happens with programming languages, since their grammar may be context-sensitive (CSG). For now, we will leave this aspect aside in order to keep the reasoning simple.

What can intuitively be seen is a clear parallel between the TM model and the human communication model, as shown in the figure. This can be extended to other communication models, such as body language, physicochemical language between molecules, etc.

In the case of TC, the objects input to and output by the TM are language elements, which is very suitable since the practical objective is human-to-machine or machine-to-machine communication. But this terminology varies with the context: from an abstract point of view, objects have a purely mathematical nature, whereas in other contexts, such as physics, we talk about concepts such as space-time, energy, momentum, etc.

What seems clear, from the observable models, is that a model of reality is equivalent to bit sequences processed by a TM. In short, a model of reality is equivalent to an axiomatic processing of information, where the axioms are embedded in the TM. It should be clear that an axiom is not a self-evident truth requiring no proof; on the contrary, an axiom is a proposition assumed within a theoretical body. Possibly this misunderstanding originates in the apparent simplicity of some axiomatic systems, produced by our perception of reality. This is obvious, for example, in Euclidean geometry, based on five postulates or axioms that seem evident to us because of our perception of space. We will continue to insist on this point, since axiomatic processing is surely one of the great mysteries that nature encloses.

Returning to natural language, it should be possible to establish a parallelism between it and the axiomatic processing determined by the TM, as suggested in the figure. As with programming languages, the structure of natural language is defined by a grammar, which establishes a set of axiomatic rules that determine the categories (verb, predicate) of the elements of language (the lexicon) and how they are combined to form expressions (sentences). Both the elements of language and the resulting expressions have a meaning, known as the semantics of the language. The pertinent question is: what is the axiomatic structure of a natural language?

To answer, let us reorient the question: how is the semantics of natural language defined? We can begin by analyzing the definition of the lexicon of a language, as collected in the dictionary. There we find the meaning of each word in different contexts. But we soon run into a formal problem, since the definitions rest on one another in a circular fashion. In other words, the term being defined is part of its own definition, so it is not possible to establish the semantics of a language from linguistic information alone.

For example, according to the Oxford dictionary:

  • Word: A single distinct meaningful element of speech or writing, used with others (or sometimes alone) to form a sentence and typically shown with a space on either side when written or printed.
  • Write: Mark (letters, words, or other symbols) on a surface, typically paper, with a pen, pencil, or similar implement. 
  • Sentence: A set of words that is complete in itself, typically containing a subject and predicate, conveying a statement, question, exclamation, or command, and consisting of a main clause and sometimes one or more subordinate clauses. 
  • Statement: A definite or clear expression of something in speech or writing
  • Expression: A word or phrase, especially an idiomatic one, used to convey an idea. 
  • Phrase: A small group of words standing together as a conceptual unit, typically forming a component of a clause

Therefore:

  • Word: A single distinct … or marks (letters, words, or other symbols) on … to form a set of words that … conveying a definite or clear word or a small group of words standing together … or marking (letters, words, …. ) …

In this way, we could continue recursively replacing the meaning of each component within the definition, arriving at the conclusion that natural language as an isolated entity has no meaning. It is therefore necessary to establish an axiomatic basis external to the language itself. By the way: what would happen if we kept replacing each component of the sentence?
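The circularity can be made concrete with a small sketch (the toy dictionary below is invented for illustration; as in the Oxford example, every definition uses only words that are themselves entries):

```python
# Hypothetical toy dictionary: each word is defined only through other
# words of the same dictionary, so the semantics never bottoms out.
toy_dictionary = {
    "word": ["sentence", "write"],
    "write": ["word"],
    "sentence": ["word", "statement"],
    "statement": ["expression", "write"],
    "expression": ["word", "phrase"],
    "phrase": ["word"],
}

def find_cycle(dictionary, start):
    # Follow the first word used in each definition; since the lexicon
    # is finite, the chain must eventually revisit a word, exposing a
    # circular definition.
    path = [start]
    while True:
        nxt = dictionary[path[-1]][0]
        if nxt in path:
            return path[path.index(nxt):] + [nxt]
        path.append(nxt)
```

Starting from any entry, `find_cycle` returns a chain of definitions that begins and ends at the same word: the meaning cannot be grounded inside the language itself.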

Consequently, we can ask what would be the result of an experiment in which an artificial intelligence entity, disconnected from all reality except the information on which the written language is based, analyzes that information. That is, the entity has access to the grammar, the dictionary, written works, etc. What will be the result of the experiment? What conclusions will the entity arrive at?

If we perform this experiment mentally, we will see that the entity can come to understand the reality of the language, and all the stories based on it, provided that it has an axiomatic basis. Otherwise, the entity will experience what in information theory is known as “information without meaning”. This explains the impossibility of deciphering archaic scripts without cross-references to other languages or other forms of expression. In the case of humans, the axiomatic basis is acquired from cognitive experiences external to the language itself.

To clarify the idea of what axiomatic processing means, we can use simple examples related to programming languages. Let us analyze the semantics of the “if… then” statement. If we consult the programming manual we can determine its semantics, since our brain has implemented rules or axioms to decipher the written message. This is equivalent to what happens in the execution of program sentences, in which it is the TM that executes those expressions axiomatically. In the case of the brain and of the TM, the axioms are defined in the fields of biochemistry and physics, respectively, and therefore outside the realm of language.
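A minimal sketch, assuming an invented one-line syntax: the string “if … then …” acquires meaning only when a processor applies rules hard-wired outside the text itself. The function below is that external processor in miniature.

```python
def run_if_then(sentence, environment):
    # Toy interpreter for the hypothetical form
    # "if <condition> then <variable> = <value>".
    # The semantics of "if" lives here, in the processor's rules,
    # not in the sentence being interpreted.
    _, cond, _, target, _, value = sentence.split()
    if environment.get(cond):
        environment[target] = int(value)
    return environment

env = run_if_then("if raining then umbrella = 1", {"raining": True})
```

The same sentence given to a processor with different rules would mean something else entirely; the text alone fixes nothing.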

This shows once again how reality is structured in functional layers, which can be seen as independent entities by means of axiomatic processing, as has been analyzed in previous posts.

But this issue, as well as the analysis of the existence of linguistic contradictions, will be addressed in later posts.

A classic example of axiomatic processing

The article “Reality and information: Is information a physical entity?” analyzes what we mean by information. It is a very general review of the theoretical and practical developments that occurred throughout the twentieth century up to the present day and that have led to the current vision of what information is.

The article “Reality and information: What is the nature of information?” goes deeper into this analysis, from a more theoretical perspective based on computation theory, information theory (IT) and algorithmic information theory (AIT).

But in this post we will leave mathematical formalism aside and present some examples that give a more intuitive view of what information is and its relation to reality, and above all try to show what the axiomatic processing of information means. This should help in understanding the concept of information beyond what is generally understood as a mere set of bits, which I consider one of the obstacles to establishing a strong link between information and reality.

Nowadays, information and computer technology offer countless examples of how what we observe as reality can be represented by a set of bits. Thus, videos, images, audio and written information can be encoded, compressed, stored and reproduced as sets of bits. This is possible since they are all mathematical objects, which can be represented by numbers subject to axiomatic rules and can therefore be represented by bits. However, the number of bits needed to encode an object depends on the coding procedure (the axiomatic rules), and AIT determines its minimum value, defined as the entropy of the object. AIT does not provide any criteria for implementing the compression process, so in practice implementations rely on practical criteria, for example statistical or psychophysical ones.

AIT establishes a formal definition of the complexity of mathematical objects, called the Kolmogorov complexity K(x). For a finite object x, K(x) is defined as the length of the shortest effective binary description of x; it is an intrinsic property of the object and not a property of the evaluation process. Without entering into theoretical details, AIT determines that only a small fraction of n-bit mathematical objects can be compressed and encoded in m bits, with m < n, which means that most of them have great complexity and can only be represented by themselves.
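This claim rests on a simple counting argument, which can be sketched numerically (the function name is ours, for illustration): descriptions shorter than n − k bits can name fewer than 2^(n−k) objects, so at most a 2^−k fraction of all n-bit strings is compressible by k bits or more.

```python
def fraction_compressible(n, k):
    # Upper bound on the fraction of n-bit strings that admit a
    # description shorter than n - k bits.
    shorter_descriptions = 2 ** (n - k) - 1   # strings of length < n - k
    all_strings = 2 ** n
    return shorter_descriptions / all_strings
```

For instance, fewer than one in a thousand 32-bit strings can be compressed by 10 or more bits, whatever the compressor.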

The compression and decompression of video, images, audio, etc., are a clear example of axiomatic processing. Imagine a video content x which, by means of a compression process C, has generated a content y=C(x), so that by means of a decompression process D we can retrieve the original content x=D(y). In this context, both C and D are axiomatic processes, understanding an axiom as a proposition assumed within a theoretical body. This may clash with the idea that an axiom is an obvious proposition, accepted without requiring demonstration. To clarify this point I will develop the idea in another post, using the structure of natural languages as an example.

In this context, the term axiomatic is fully justified theoretically, since AIT does not establish any criteria for the implementation of the compression process and, as already indicated, most mathematical objects are not compressible.

This example reveals an astonishing result of IT, known as “information without meaning”: a bit string has no meaning unless a process is applied that interprets the information and transforms it into knowledge. Thus, when we say that x is a video content, we are assuming that it responds to a video coding system designed according to the visual perception capabilities of humans.

And here we come to a transcendental conclusion regarding the nexus between information and reality. Historically, the development of IT has created the tendency to establish this nexus by considering information exclusively as a sequence of bits. But AIT shows us that we must understand information as a broader concept, made up of axiomatic processes together with bit strings. For this, we must define it in a formal way.

Thus, both C and D are mathematical objects that in practice are embodied in a processor together with programs that encode the compression and decompression functions. If we define a processor as T(), and c and d as the bit strings that encode the compression and decompression algorithms, we can write:

         y=T(<c,x>)

         x=T(<d,y>)

where <,> is the concatenation of bit sequences.
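A hedged Python sketch of these two expressions, with the processor T() modelled as a routine that applies a program to data, and zlib's compressor and decompressor standing in for the bit strings c and d:

```python
import zlib

def T(pair):
    # Toy "processor": takes the concatenation <program, data> and
    # executes the program on the data. Python callables stand in for
    # the bit strings c and d that encode the algorithms.
    program, data = pair
    return program(data)

x = b"a" * 1000                        # a highly regular, compressible object
y = T((zlib.compress, x))              # y = T(<c, x>)
assert T((zlib.decompress, y)) == x    # x = T(<d, y>)
assert len(y) < len(x)                 # C actually compressed x
```

Note that y is meaningless on its own: only together with the decompression process d does it yield x back.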

Therefore, the axiomatic processing is determined by the processor T(). And if we examine any of the implementations of the universal Turing machine, we will see that the number of axiomatic rules is very small. This may seem surprising considering that the above extends to the definition of any mathematical model of reality.

Thus, any mathematical model that describes an element of reality can be formalized by means of a Turing machine. The result of the model can be enumerable, or Turing-computable, in which case the Halt state will be reached, concluding the process. Otherwise, the problem is undecidable, or non-computable, so that the Halt state is never reached and the process runs forever.

For example, let us consider Newtonian mechanics, determined by the laws of dynamics and the attraction exerted by the masses. In this case, the system dynamics will be determined by the recursive process w=T(<x,y,z>), where x is the bit string encoding the laws of calculus, y the bit sequence encoding the laws of Newtonian mechanics, and z the initial conditions of the masses constituting the system.

As a consequence of numerical calculus, it is common to think that the processes are nothing more than numerical simulations of the models. However, in the above example, both x and y can be the analytic expressions of the model and w=T(<x,y,z>) the analytical expression of the solution. Thus, if z specifies that the model is composed of only two massive bodies, w=T(<x,y,z>) will produce an analytical expression of the two ellipses corresponding to the ephemerides of both bodies. However, if z specifies more than two massive bodies, in general the process will not be able to produce any result, never reaching the Halt state. This is because the Newtonian model has no analytical solution for three or more orbiting bodies, except for very particular cases; this is known as the three-body problem.

But we can make x and y encode the functions of numerical calculus, corresponding respectively to mathematical calculus and to the computational functions of the Newtonian model. In this case, w=T(<x,y,z>) will recursively produce the numerical description of the ephemerides of the massive bodies. However, the process will not reach the Halt state, except in very particular cases in which the process can decide that the ephemeris is a closed trajectory.
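A hedged numerical sketch of w=T(<x,y,z>) for this case, in Python: the integration rule plays the role of x, Newton's law of gravitation that of y, and the initial conditions that of z. The loop produces the ephemeris step by step and has no natural Halt state (units are chosen so that G = 1; the function names are ours):

```python
def step(bodies, dt=1e-3, G=1.0):
    # One semi-implicit Euler step for point masses in the plane.
    # bodies is a list of (mass, [x, y], [vx, vy]).
    for i, (_, p1, v1) in enumerate(bodies):
        ax = ay = 0.0
        for j, (m2, p2, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = p2[0] - p1[0], p2[1] - p1[1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * m2 * dx / r3     # Newton's law of gravitation
            ay += G * m2 * dy / r3
        v1[0] += ax * dt
        v1[1] += ay * dt
    for _, p, v in bodies:
        p[0] += v[0] * dt
        p[1] += v[1] * dt

# z: a heavy body at the origin and a light one on a near-circular orbit.
bodies = [
    (1.0, [0.0, 0.0], [0.0, 0.0]),
    (0.001, [1.0, 0.0], [0.0, 1.0]),
]
for _ in range(1000):
    step(bodies)
```

Each iteration produces one more point of the ephemeris, but nothing inside the computation tells it when to stop: deciding whether the trajectory ever closes is precisely what is not computable in general.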

This behaviour shows that the Newtonian model is non-computable, or undecidable. This extends to all models of nature established by physics, since they are all non-linear models. If we consider the complexity of the sequence y corresponding to the Newtonian model, in either the analytical or the numerical version, it is evident that K(y) is small. However, the complexity of w=T(<x,y,z>) is, in general, non-computable, which justifies that it cannot be expressed analytically. If this were possible, it would mean that w is an enumerable expression, in contradiction with the fact that it is non-computable.

What is surprising is that from an enumerable expression <x, y, z> we can get a non-computable result. But this will be addressed in another post.