
Perception of complexity

In previous posts, the nature of reality and its complexity has been approached from the point of view of Information Theory. However, it is interesting to make this analysis from the point of view of human perception and thus obtain a more intuitive view.

Obviously, making an exhaustive analysis of reality from this perspective is complex due to the diversity of the organs of perception and the physiological and neurological processes built on them. In this sense, we could explain how the perceived information is processed by each of the organs of perception, especially the auditory and visual systems, as these are the most culturally relevant. Thus, in the post dedicated to color perception it has been described how the physical parameters of light are encoded by the photoreceptor cells of the retina.

However, in this post the approach will consist of analyzing in an abstract way how knowledge influences the interpretation of information, in such a way that previous experience can lead the analysis in a certain direction. This behavior establishes a priori assumptions or conditions that limit the analysis of information in all its extension and that, as a consequence, prevent us from obtaining certain answers or solutions. Overcoming these obstacles, despite the conditioning posed by previous experience, is what is known as lateral thinking.

To begin with, let’s consider the case of series puzzles in which a sequence of numbers, characters, or graphics is presented, asking how the sequence continues. For example, given the sequence “IIIIIIIVVV”, we are asked to determine what the next character is. If Roman culture had not developed, one could answer that the next character is “V”, or even that the sequence is mere scribbling. But this is not the case, so the brain gets to work, determining that the characters may be Roman numerals and that the sequence is that of the numbers “1, 2, 3, …” written one after another. Consequently, the next character must be “I”, completing the numeral VI.

In this way, it can be seen how acquired knowledge conditions the interpretation of the information perceived by the senses. But from this example another conclusion can be drawn: the ordering of information as a sign of intelligence. To expose this idea formally, let’s consider a numerical sequence, for example the Fibonacci series “0, 1, 1, 2, 3, 5, 8, …”. Similarly to the previous case, the following number should be 13, so that the general term can be expressed as f(n) = f(n-1) + f(n-2). However, we can define another discrete mathematical function that takes the values “0, 1, 1, 2, 3, 5, 8” for n = 0, 1, 2, 3, 4, 5, 6 but differs for the rest of the natural values of n, as shown in the following figure. In fact, it is possible to define an infinite number of functions that meet this criterion.

The question, therefore, is: What is so special about the Fibonacci series in relation to the set of functions that meet the condition defined above?

Here we can repeat the argument used for the Roman numeral series: mathematical training leads to identifying the sequence as the Fibonacci series. But this poses a contradiction, since any of the functions that meet the same criterion could equally have been identified. To clear up this contradiction, Algorithmic Information Theory (AIT) should be used again.

Firstly, it should be stressed that, culturally, the game of riddles implicitly involves following logical rules and that, therefore, the answer is free from arbitrariness. Thus, in the case of number series the game consists of determining a rule that justifies the result. If we now try to identify a simple mathematical rule that determines the sequence “0, 1, 1, 2, 3, 5, 8, …” we see that the expression f(n) = f(n-1) + f(n-2) fulfills these requirements. In fact, it may well be the simplest expression of its kind. The rest are either complex, arbitrary, or simple expressions that follow rules different from the implicit rules of the puzzle.

From the AIT point of view, the solution that contains the minimum information, and that can therefore be expressed most compactly, will be the most likely response the brain gives when identifying the pattern produced by a stimulus. In the example above, the description of the predictable solution will be the one composed of:

  • A Turing machine.
  • The information to code the calculus rules.
  • The information to code the analytical expression of the simplest solution. In the example shown it corresponds to the expression of the Fibonacci series.
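
The point can be made concrete: the Fibonacci rule admits an extremely short program, so its algorithmic description is tiny compared with the sequence it can generate. A minimal sketch (function name and length chosen here for illustration):

```python
def fibonacci(count):
    """Generate the first `count` Fibonacci numbers from the rule
    f(n) = f(n-1) + f(n-2), with f(0) = 0 and f(1) = 1."""
    a, b = 0, 1
    out = []
    for _ in range(count):
        out.append(a)
        a, b = b, a + b
    return out

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]: the rule predicts 13
```

The few bytes of this program, plus the desired length, describe the sequence completely, which is exactly what makes it the low-complexity answer.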

Obviously, there are solutions of similar or even lower complexity, such as a Turing machine that periodically generates the sequence “0, 1, 1, 2, 3, 5, 8”. But in most cases the solutions will have a more complex description, so that, according to AIT, their most compact description will usually be the sequence itself, which cannot be compressed or expressed analytically.

For example, it is easy to check that the function:

generates for integer values of n the sequence “0, 1, 1, 2, 3, 5, 8, 0, -62, -279, …”, so it could be said that the quantities following the proposed series are “…, 0, -62, -279, …”. Obviously, the complexity of this sequence is higher than that of the Fibonacci series, as a consequence of the complexity of the description of the function and of the operations to be performed.
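
The expression of this function appears in the original as a figure; one function that reproduces exactly the quoted values (a reconstruction, not necessarily the author’s exact formula) is the unique degree-6 polynomial through the seven Fibonacci points (n, fₙ) for n = 0…6, written below in Newton forward-difference form:

```python
from math import comb

# Forward differences of the values 0, 1, 1, 2, 3, 5, 8 at n = 0..6
DIFFS = [0, 1, -1, 2, -3, 5, -8]

def p(n):
    """Degree-6 polynomial through (0,0), (1,1), (2,1), (3,2),
    (4,3), (5,5), (6,8); it diverges from Fibonacci for n >= 7."""
    return sum(d * comb(n, k) for k, d in enumerate(DIFFS))

print([p(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 0, -62, -279]
```

Running it shows agreement with the Fibonacci values up to n = 6 and then the divergent continuation 0, -62, -279 quoted in the text.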

Similarly, we can try to define other algorithms that generate the proposed sequence, which will grow in complexity. This shows the possibility of interpreting the information from different points of view that go beyond the obvious solutions, which are conditioned by previous experiences.

If, in addition to all the above, we consider that, according to Landauer’s principle, greater information complexity is associated with greater energy consumption, then the resolution of complex problems requires not only a greater computational effort, but also a greater energy effort.
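
Landauer’s principle puts a concrete floor under this energy cost: erasing one bit of information dissipates at least k·T·ln 2. At room temperature the bound is tiny but non-zero, as a quick computation shows:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)
T = 300.0           # approximate room temperature in kelvin

# Minimum energy dissipated when one bit of information is erased
landauer_limit = K_B * T * math.log(2)
print(f"{landauer_limit:.3e} J per bit")  # about 2.87e-21 J
```

Real processors dissipate many orders of magnitude more than this per logical operation, which is the gap the text alludes to.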

This may explain the feeling of satisfaction produced when a certain problem is solved, and the tendency to engage in relaxing activities that are characterized by simplicity or monotony. Conversely, the lack of response to a problem produces frustration and restlessness.

This contrasts with the idea generally held about intelligence. The ability to solve problems such as those described above is considered a sign of intelligence, whereas the search for more complex interpretations does not seem to enjoy this status. Something similar occurs with the concept of entropy, which is generally interpreted as disorder or chaos and yet, from the point of view of information theory, is a measure of the amount of information.

Another aspect that should be highlighted is the fact that the cognitive process is supported by the processing of information and, therefore, subject to the rules of mathematical logic, whose nature is irrefutable. This nuance is important, since emphasis is generally placed on the physical and biological mechanisms that support the cognitive processes, which may eventually be assigned a spiritual or esoteric nature.

Therefore, it can be concluded that the cognitive process is subject to the nature and structure of information processing and that, from the formal point of view of the Theory of Computability, it corresponds to a Turing machine. Nature has thus created a processing structure based on the physics of emerging reality – classical reality –, materialized in a neural network, which interprets the information encoded by the senses according to the algorithms established by previous experience. As a consequence, the system performs two fundamental functions, as shown in the figure:

  • Interact with the environment, producing a response to the input stimuli.
  • Enhance the ability to interpret, acquiring new skills (algorithms) as a result of the learning capacity provided by the neural network.

But the truth is that the input stimuli are conditioned by the sensory organs, which constitute a first filter of information and therefore condition the perception of reality. The question that can be raised is: what impact does this filtering have on the perception of reality?

Reality as an information process

The purpose of physics is the description and interpretation of physical reality based on observation. To this end, mathematics has been a fundamental tool for formalizing this reality through models, which in turn have allowed predictions to be made that have subsequently been verified experimentally. This creates an astonishing connection between reality and abstract logic that suggests the existence of a deep relationship beyond their conceptual definitions. In fact, the ability of mathematics to accurately describe physical processes can lead us to think that reality is nothing more than a manifestation of a mathematical world.

But perhaps it is necessary to define in greater detail what we mean by this. Usually, when we refer to mathematics we think of concepts such as theorems or equations. However, we can have another view of mathematics as an information processing system, in which the above concepts can be interpreted as a compact expression of the behavior of the system, as shown by the algorithmic information theory [1].

In this way, physical laws determine how the information that describes the system is processed, establishing a space-time dynamic. As a consequence, a parallelism is established between the physical system and the computational system that, from an abstract point of view, are equivalent. This equivalence is somewhat astonishing, since in principle we assume that both systems belong to totally different fields of knowledge.

But apart from this fact, we can ask what consequences can be drawn from this equivalence. Computability theory [2] and information theory [3] [1] provide criteria for determining the computational reversibility and the complexity of a system [4]. In particular:

  • In a reversible computing system (RCS) the amount of information remains constant throughout the dynamics of the system.
  • In a non-reversible computational system (NRCS) the amount of information never increases along the dynamics of the system.
  • The complexity of the system corresponds to its most compact expression, called the Kolmogorov complexity, and is an absolute measure.

It is important to note that in an NRCS information is not lost, but explicitly discarded. This means that there is no fundamental reason why such information could not be maintained, since the complexity of an RCS remains constant. In practice, computer systems are implemented non-reversibly in order to optimize resources, a consequence of current technological limitations. In fact, the energy they require is far higher than the bound established by the Landauer principle [5].
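
The contrast between the two kinds of system can be illustrated with logic gates: an AND gate maps four input states onto two outputs, discarding information, while its reversible embedding, the Toffoli gate (which keeps both inputs and writes the result into an ancilla bit), is a bijection, so nothing is lost. A sketch of this standard construction:

```python
from itertools import product

def and_gate(a, b):
    """Irreversible: two input bits collapse to one output bit."""
    return a & b

def toffoli(a, b, c):
    """Reversible embedding of AND: flips c when a = b = 1.
    With the ancilla c = 0, the third output carries a AND b."""
    return (a, b, c ^ (a & b))

and_outputs = {and_gate(a, b) for a, b in product((0, 1), repeat=2)}
toffoli_outputs = {toffoli(a, b, c) for a, b, c in product((0, 1), repeat=3)}

print(len(and_outputs))      # 2 outputs for 4 inputs: information discarded
print(len(toffoli_outputs))  # 8 outputs for 8 inputs: a bijection
```

Because the Toffoli map is one-to-one, the input can always be recovered from the output, which is exactly the property that keeps the amount of information constant in an RCS.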

If we focus on the analysis of reversible physical systems, such as quantum mechanics, relativity, Newtonian mechanics or electromagnetism, we can observe invariant physical magnitudes that are a consequence of computational reversibility. These are determined by unitary mathematical processes, which imply that every process has an inverse [6]. But the difficulties in understanding reality from the point of view of mathematical logic seem to arise immediately, with thermodynamics and quantum measurement being paradigmatic examples.

In the case of quantum measurement, before the measurement is made the state of the system is a superposition of states, and when the measurement is made the state collapses into one of the possible states in which the system was [7]. This means that the quantum measurement scenario corresponds to that of a non-reversible computational system, in which the information in the system decreases when the superposition of states disappears, making the process non-reversible as a consequence of the loss of information.

This implies that physical reality systematically loses information, which poses two fundamental contradictions. The first is the fact that quantum mechanics is a reversible theory and that observable reality is based on it. The second is that this loss of information contradicts the systematic increase of classical entropy, which in turn poses a deeper contradiction, since in classical reality there is a spontaneous increase of information, as a consequence of the increase of entropy.

The solution to the first contradiction is relatively simple if we eliminate the anthropic vision of reality. In general, the process of quantum measurement introduces the concept of observer, which creates a degree of subjectivity that is important to clarify, as it can lead to misinterpretations. In this process there are two clearly separated layers of reality, the quantum layer and the classical layer, which have already been addressed in previous posts. Quantum measurement involves two quantum systems: the system to be measured and the measurement system, which can be considered a quantum observer, both of a quantum nature. As a result of this interaction, classical information emerges; this is where the classical observer is located, who can be identified, for example, with a physicist in a laboratory.

Now consider that the measurement is structured in two blocks: the quantum system under observation, and the measurement system that includes both the quantum observer and the classical observer. In this case the interpretation is that the quantum system under measurement is an open quantum system that loses quantum information in the measurement process, and that as a result a lesser amount of classical information emerges. In short, this scenario offers a negative balance of information.

On the contrary, in the quantum reality layer the interaction of two quantum systems takes place which, it can be said, mutually observe each other according to unitary operators, so that the system is closed and the exchange of information has a null balance. As a result of this interaction, the classical layer emerges. There then seems to be a positive balance of information, since classical information emerges from this process. But what really happens is that the emerging information, which constitutes the classical layer, is simply a simplified view of the quantum layer. For this reason we can say that the classical layer is an emerging reality.

So, it can be said that the quantum layer is formed by subsystems that interact with each other in a unitary way, constituting a closed system in which the information and, therefore, the complexity of the system is invariant. As a consequence of these interactions, the classical layer emerges as an irreducible reality of the quantum layer.

As for the contradiction produced by the increase in entropy, the reasons justifying this behavior seem more subtle. However, a first clue may lie in the fact that this increase occurs only in the classical layer. It must also be considered that, according to the algorithmic information theory, the complexity of a system, and therefore the amount of information that describes the system, is the set formed by the processed information and the information necessary to describe the processor itself. 

A physical scenario that can illustrate this situation is that of the big bang [8], in which it is considered that the entropy of the system at its beginning was small or even null. This is so because the microwave background radiation shows a fairly homogeneous pattern, so the amount of information needed for its description, and therefore its entropy, is small. But if we create a computational model of this scenario, it is evident that the complexity of the system has since increased formidably, which is incompatible from the logical point of view. This indicates that in the model not only the information but also the description of the processes that govern it is incomplete. But what physical evidence do we have that this is so?

Perhaps the clearest evidence of this is cosmic inflation [9]: the space-time metric changes with time, so that the spatial dimensions grow with time. To explain this behavior, the existence of dark energy has been postulated as the engine of this process [10], giving physical form to the gaps revealed by mathematical logic. One aspect that is not usually given attention is the interaction between the vacuum and photons, which causes photons to lose energy as space-time expands. This loss implies a decrease of information that must necessarily be transferred to space-time.

This situation causes the vacuum, which in the context of classical physics is nothing more than an abstract metric, to become a fundamental physical piece of enormous complexity. Aspects that contribute to this conception of vacuum are the entanglement of quantum particles [11], decoherence and zero point energy [12].  

From all of the above, a hypothesis can be made as to the structure of reality from a computational point of view, as shown in the following figure. If we assume that the quantum layer is a unitary and closed structure, its complexity will remain constant. But its functionality and complexity remain hidden from observation, and it is only possible to model it through an inductive process based on experimentation, which has led to the definition of physical models that allow us to describe classical reality. As a consequence, the quantum layer shows a reality that constitutes the classical layer: a partial and, according to the theoretical and experimental results, extremely reduced view of the underlying reality, which makes classical reality an irreducible reality.

The fundamental question that can be raised in this model is whether the complexity of the classical layer is constant or whether it can vary over time, since it is only bound by the laws of the underlying layer and is a partial and irreducible view of that functional layer. But for the classical layer to be invariant, it must be closed and therefore its computational description must be closed, which is not verified since it is subject to the quantum layer. Consequently, the complexity of the classical layer may change over time.

Consequently, the question arises as to whether there is any mechanism in the quantum layer that justifies the fluctuation of the complexity of the classical layer. Obviously one of the causes is quantum decoherence, which makes information observable in the classical layer. Similarly, cosmic inflation produces an increase in complexity, as space-time grows. On the contrary, attractive forces tend to reduce complexity, so gravity would be the most prominent factor.

From the observation of classical reality we can answer that currently its entropy tends to grow, since decoherence and inflation are the predominant causes. However, one can imagine recession scenarios, such as a big crunch, in which entropy would decrease. Therefore, the entropy trend may be a consequence of the dynamic state of the system.

In summary, it can be said that the amount of information in the quantum layer remains constant, as a consequence of its unitary nature. On the contrary, the amount of information in the classical layer is determined by the amount of information that emerges from the quantum layer. Therefore, the challenge is to determine precisely the mechanisms that determine the dynamics of this process. Additionally, it is possible to analyze specific scenarios that generally correspond to the field of thermodynamics. Other interesting scenarios may be quantum in nature, such as the one proposed by Hugh Everett on the Many-Worlds Interpretation (MWI).  

Bibliography

[1] P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002 [cs.IT], 2004.
[2] M. Sipser, Introduction to the Theory of Computation, Course Technology, 2012.
[3] C. E. Shannon, “A Mathematical Theory of Communication,” Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, 1948.
[4] M. A. Nielsen and I. L. Chuang, Quantum computation and Quantum Information, Cambridge University Press, 2011.
[5] R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” IBM J. Res. Dev., vol. 5, pp. 183-191, 1961.
[6] J. J. Sakurai and J. Napolitano, Modern Quantum Mechanics, Cambridge University Press, 2017.
[7] G. Auletta, Foundations and Interpretation of Quantum Mechanics, World Scientific, 2001.
[8] A. H. Guth, The Inflationary Universe, Perseus, 1997.
[9] A. Liddle, An Introduction to Modern Cosmology, Wiley, 2003.
[10] P. J. E. Peebles and Bharat Ratra, “The cosmological constant and dark energy,” arXiv:astro-ph/0207347, 2003.
[11] A. Aspect, P. Grangier and G. Roger, “Experimental Tests of Realistic Local Theories via Bell’s Theorem,” Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.
[12] H. B. G. Casimir and D. Polder, “The Influence of Retardation on the London-van der Waals Forces,” Phys. Rev., vol. 73, no. 4, pp. 360-372, 1948.

On the complexity of PI (π)

Introduction

There is no doubt that since the origins of geometry humans have been seduced by the number π. Thus, one of its fundamental characteristics is that it determines the relationship between the length of a circumference and its radius. But this does not stop here, since this constant appears systematically in mathematical and scientific models that describe the behavior of nature. In fact, it is so popular that it is the only number that has its own commemorative day. The great fascination around π has raised speculations about the information encoded in its figures and above all has unleashed an endless race for its determination, having calculated several tens of billions of figures to date.

Formally, the classification of real numbers is done according to the rules of calculus. Cantor showed that the set of real numbers is uncountable, while subsets such as the rationals are countable; real numbers are thus classified as rational and irrational. Rational numbers are those that can be expressed as a quotient of two integers, while irrational numbers cannot be expressed this way. The irrationals, in turn, are classified as algebraic numbers and transcendental numbers. The former correspond to the non-rational roots of algebraic equations, that is, roots of polynomials with rational coefficients. Transcendental numbers, on the contrary, are not roots of any such polynomial; they typically arise as solutions of transcendental (non-polynomial) equations involving, for example, exponential and trigonometric functions.

Georg Cantor. Co-creator of Set Theory

Without going into greater detail, what should catch our attention is that this classification of numbers is based on positional rules, in which each figure has a hierarchical value. But what happens if numbers are treated as ordered sequences of bits, in which position is not a value attribute? In this case, Algorithmic Information Theory (AIT) makes it possible to establish a measure of the information contained in a finite sequence of bits, and in general in any mathematical object, a measure that is therefore defined over the domain of natural numbers.

What does the AIT tell us?

This measure is based on the concept of Kolmogorov complexity (KC): the Kolmogorov complexity K(x) of a finite object x is defined as the length of the shortest effective binary description of x. The term “effective description” connects Kolmogorov complexity with the Theory of Computation, so that K(x) corresponds to the length of the shortest program that prints x and then enters the halt state. To be precise, the formal definition of K(x) is:

K(x) = min_{p,i} { K(i) + l(p) : T_i(p) = x } + O(1)

Where T_i(p) is the Turing machine (TM) i that executes p and prints x, l(p) is the length of p, and K(i) is the complexity of T_i. Object p is thus a compressed representation of x relative to T_i, since x can be retrieved from p by the decoding process defined by T_i; p is therefore defined as meaningful information, and the rest is considered meaningless, redundant, accidental or noise (meaningless information). The term O(1) indicates that the definition is machine-independent up to an additive constant, so the result has the same order of magnitude in each implementation. K(x) itself is in general non-computable; in this sense, Gödel’s incompleteness theorems, the Turing machine and Kolmogorov complexity lead to the same conclusion about undecidability, revealing the existence of non-computable functions.

KC shows that information can be compressed, but it does not establish any general procedure for doing so, which is only possible for certain sequences. Indeed, from the definition of KC it can be shown that incompressibility is an intrinsic property of bit streams, in such a way that there are sequences that cannot be compressed. The number of n-bit sequences that can be encoded by m bits is less than 2^m, so the fraction of n-bit sequences with K(x) < n-k is less than 2^-k. If all possible n-bit sequences are considered, each with probability of occurrence 2^-n, the probability that a sequence has complexity K(x) ≥ n-k is equal to or greater than (1-2^-k). In short, most bit sequences cannot be compressed beyond their own size, showing a high complexity as they present no pattern of any kind. Applied to the field of physics, this behavior justifies the ergodic hypothesis. As a consequence, most problems cannot be solved analytically, since they can only be represented by themselves and therefore cannot be described compactly by formal rules.
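
This incompressibility is easy to observe with a general-purpose compressor, used here merely as an upper bound on K(x): a patterned sequence shrinks dramatically, while a pseudorandom one does not (the sizes and seed below are arbitrary choices for the demonstration):

```python
import random
import zlib

N = 4096
patterned = b"01" * (N // 2)                         # obvious repetition
rng = random.Random(0)                               # fixed seed for repeatability
noisy = bytes(rng.getrandbits(8) for _ in range(N))  # no usable pattern

c_pat = zlib.compress(patterned, 9)
c_noise = zlib.compress(noisy, 9)

print(len(c_pat))    # far smaller than 4096
print(len(c_noise))  # roughly 4096 or more: effectively incompressible
```

The compressor finds the short description of the patterned stream at once, while the noisy stream can only be represented, in essence, by itself.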

It could be thought that the complexity of a sequence can be reduced at will, by applying a coding criterion that transforms it into a less complex sequence. In general, this only increases the total complexity, since in the calculation of K(x) the complexity of the coding algorithm must be added, which makes the total grow. Finally, KC is applicable to any mathematical object (integers, sets, functions), and it can be shown that, as the complexity of the mathematical object grows, K(x) becomes equivalent to the entropy H defined in the context of Information Theory. The advantage of AIT is that it performs a semantic, axiomatic treatment of information, so it does not require an a priori alphabet in order to measure information.

What can be said about the complexity of π?

According to its definition, KC cannot be applied to irrational numbers, since in that case the Turing machine never reaches the halt state: these numbers have an infinite number of digits. In other words, to be formally correct, the Turing machine is defined only over the natural numbers (whose cardinality, it must be noted, is the same as that of the rationals), while the irrationals have a cardinality greater than that of the rationals. This means that the KC, and the equivalent entropy H, of irrational numbers is undecidable and therefore non-computable.

To overcome this difficulty, an irrational number X can be considered as the concatenation of a rational number x and a residue δx, so that in numerical terms X = x + δx, but in terms of information X = {x, δx}. The residue δx → 0 is itself an irrational number, and therefore a sequence of bits with an undecidable and hence non-computable KC. In this way, it can be expressed:

K(X) = K(x)+K(δx)

The complexity of X can thus be assimilated to the complexity of x. A priori this approach may seem surprising, even inadmissible, since the term K(δx) is neglected although its complexity is undecidable. But this is similar to the approximation made in calculating the entropy of a continuous variable, or to the renormalization process used in physics in order to circumvent the complexity of underlying processes that remain hidden from observable reality.

Consequently, the sequence p that the Turing machine i runs to obtain x will be composed of the concatenation of:

  • The sequence of bits that encode the rules of calculus in the Turing machine i.
  • The bitstream that encodes the compressed expression of x, for example a given numerical series of x.
  • The length of the sequence x that is to be decoded and that determines when the Turing machine should reach the halt state, for example a googol (10^100).

In short, it can be concluded that the complexity K(x) of the known irrational numbers, e.g. √2, π, e, …, is limited. For this reason, the challenge must be to obtain the optimal expression of K(x), and not the figures that encode these numbers, since, according to the above, their uncompressed expansion in digits has a high degree of redundancy (meaningless information).
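
This bounded complexity is tangible: a program of a few lines generates as many digits of π as desired, so the description (the program plus the requested length) is minute compared with the expansion it produces. A sketch using Machin’s formula with integer arithmetic (the ten guard digits are an implementation choice to absorb truncation error):

```python
def arctan_recip(x, one):
    """one * atan(1/x) by its Taylor series, in integer arithmetic."""
    power = one // x          # one / x^(2n+1), starting at n = 0
    total = power
    x2 = x * x
    n = 1
    while power:
        power //= x2
        term = power // (2 * n + 1)
        total += -term if n % 2 else term  # alternating series
        n += 1
    return total

def pi_digits(digits):
    """First `digits` decimal digits of pi via Machin's formula:
    pi = 16*atan(1/5) - 4*atan(1/239)."""
    one = 10 ** (digits + 10)  # guard digits absorb truncation error
    pi = 16 * arctan_recip(5, one) - 4 * arctan_recip(239, one)
    return str(pi)[:digits]

print(pi_digits(20))  # 31415926535897932384
```

The same short text describes the first hundred digits or the first million; only the stated length changes, which is precisely why the digit expansion itself is redundant.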

What in theory is a surprising and questionable step is in practice irrefutable, since the complexity of δx will always remain hidden: it is undecidable and therefore non-computable.

Another important conclusion is that this analysis provides a criterion for classifying irrational numbers into two groups: representable and non-representable. The former correspond to irrational numbers that can be represented by mathematical expressions, which constitute the compressed expression of these numbers. Non-representable numbers, on the other hand, could only be expressed by themselves and are therefore undecidable. In short, the cardinality of the representable irrational numbers is that of the natural numbers. It should be noted that this classification criterion is applicable to any mathematical object.

On the other hand, it is evident that mathematics, and calculus in particular, accepts de facto the criteria established to define the complexity K(x). This may go unnoticed because, traditionally in this context, numbers are analyzed from the perspective of positional coding, in which the non-representable residue is filtered out through the concept of limit, so that δx→0. However, when evaluating the informational complexity of a mathematical object, it may be necessary to apply a renormalization procedure.

A macroscopic view of the Schrödinger cat

From the analysis carried out in the previous post, it can be concluded that, in general, it is not possible to identify the macroscopic states of a complex system with its quantum states. Thus, the macroscopic states corresponding to the dead cat (DC) or to the living cat (AC) cannot be considered quantum states, since according to quantum theory the system could then be expressed as a superposition of these states. Consequently, as has been justified, for macroscopic systems it is not possible to define quantum states such as |DC⟩ and |AC⟩. On the other hand, the states (DC) and (AC) are an observable reality, indicating that the system presents two realities: a quantum reality and an emerging reality that can be defined as classical reality.

Quantum reality will be defined by its wave function, formed by the superposition of the quantum subsystems that make up the system and which will evolve according to the existing interaction between all the quantum elements that make up the system and the environment. For simplicity, if the CAT system is considered isolated from the environment, the succession of its quantum state can be expressed as:

            |CAT[n]⟩ = |SC1[n]⟩ ⊗|SC2[n]⟩ ⊗…⊗|SCi[n]⟩ ⊗…⊗|SCk[n][n]⟩.

This expression takes into account that the number k of non-entangled quantum subsystems also varies with time, so it is a function of the sequence index n, time being treated as a discrete variable.

The observable classical reality can be described by the state of the system which, if defined as (CAT[n]) for the object “cat”, leads from the previous reasoning to the conclusion that (CAT[n]) ≢ |CAT[n]⟩. In other words, the quantum and classical states of a complex object are not equivalent.

The question that remains to be justified is the irreducibility of the observable classical state (CAT) from the underlying quantum reality, represented by the quantum state |CAT⟩. This can be done by considering that the functional relationship between the states |CAT⟩ and (CAT) is extraordinarily complex, being subject to the mathematical concepts on which complex systems are based, namely:

  • The complexity of the space of quantum states (Hilbert space).
  • The random behavior of observable information emerging from quantum reality.
  • The enormous number of quantum entities involved in a macroscopic system.
  • The non-linearity of the laws of classical physics.

Based on Kolmogorov complexity [1], it is possible to prove that the behavior of systems with these characteristics does not support, in most cases, an analytical solution that determines the evolution of the system from its initial state. This also implies that, in practice, the process of evolution of a complex object can only be represented by itself, both on a quantum and a classical level.

According to algorithmic information theory [1], this process is equivalent to a mathematical object composed of an ordered set of bits processed according to axiomatic rules, in such a way that the information of the object is defined by its Kolmogorov complexity, which remains constant throughout time as long as the process is an isolated system. It should be pointed out that the Kolmogorov complexity makes it possible to determine the information contained in an object without previously having an alphabet for the determination of its entropy, as is the case in information theory [2], although both concepts coincide in the limit.

From this point of view, two fundamental questions arise. The first is the evolution of the entropy of the system and the second is the apparent loss of information in the observation process, through which classical reality emerges from quantum reality. This opens a possible line of analysis that will be addressed later.

But returning to the analysis of the relationship between classical and quantum states, it is possible to gain an intuitive view of how the state (CAT) ends up being disconnected from the state |CAT⟩ by analyzing the system qualitatively.

First, it should be noted that virtually 100% of the quantum information contained in the state |CAT⟩ remains hidden within the elementary particles that make up the system. This is a consequence of the fact that the physical-chemical structure [3] of the molecules is determined exclusively by the electrons that support their covalent bonds. Next, it must be considered that the molecular interaction on which molecular biology is based is performed by van der Waals forces and hydrogen bonds, creating a new level of functional disconnection with the underlying layer.

Supported by this functional level, a new structure appears, formed by cellular biology [4], from which living organisms arise, from unicellular beings to complex beings formed by multicellular organs. It is in this layer that the concept of living being emerges, establishing a new border between the strictly physical and the concept of perception. At this level the nervous tissue [5] emerges, allowing complex interaction between individuals and sustaining new structures and concepts, such as consciousness, culture and social organization, which are not reserved exclusively to human beings, although it is in the latter where this functionality is most complex.

But to the complexity of the functional layers must be added the non-linearity of the laws to which they are subject, which is a necessary and sufficient condition for deterministic chaos [6] and which, as previously justified, is grounded in algorithmic information theory [1]. This means that any variation in the initial conditions will produce a different dynamic, so that any emulation will end up diverging from the original, a behavior that justifies free will. In this sense, Heisenberg’s uncertainty principle [7] prevents knowing exactly the initial conditions of the classical system in any of the functional layers described above. Consequently, all of them will have an irreducible nature and an unpredictable dynamic, determined exclusively by the system itself.
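This divergence of any emulation from the original can be illustrated with the logistic map, a standard toy model of deterministic chaos (the map itself is not part of the argument above, just a minimal sketch):

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4): a non-linear,
# fully deterministic rule whose orbits depend critically on initial conditions.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-10)  # perturbation far below any measurable precision

# After a few dozen iterations the two "emulations" have completely decorrelated.
print(abs(a[-1] - b[-1]))
```

With a Lyapunov exponent of ln 2, the initial 1e-10 discrepancy roughly doubles at every step, so after a few dozen iterations it saturates to the size of the attractor itself.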

At this point and in view of this complex functional structure, we must ask what the state (CAT) refers to, since in this context the existence of a classical state has been implicitly assumed. The complex functional structure of the object “cat” allows a description at different levels. Thus, the cat object can be described in different ways:

  • As atoms and molecules subject to the laws of physical chemistry.
  • As molecules that interact according to molecular biology.
  • As complex sets of molecules that give rise to cell biology.
  • As sets of cells to form organs and living organisms.
  • As structures of information processing, that give rise to the mechanisms of perception and interaction with the environment that allow the development of individual and social behavior.

As a result, each of these functional layers can be expressed by means of a certain state. Strictly speaking, then, the definition of a unique macroscopic state (CAT) is not correct. Each of these states will describe the object according to different functional rules, so it is worth asking what relationship exists between these descriptions and what their complexity is. Analogously to the arguments used to demonstrate that the states |CAT⟩ and (CAT) are not equivalent and are uncorrelated with each other, the states that describe the “cat” object at different functional levels will not be equivalent and may, to some extent, be disconnected from each other.

This behavior is a proof of how reality is structured in irreducible functional layers, in such a way that each one of the layers can be modeled independently and irreducibly, by means of an ordered set of bits processed according to axiomatic rules.

References

[1] P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002 [cs.IT], 2008.
[2] C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 1948.
[3] P. Atkins and J. de Paula, Physical Chemistry, Oxford University Press, 2006.
[4] A. Bray, J. Hopkin, R. Lewis and W. Roberts, Essential Cell Biology, Garland Science, 2014.
[5] D. Purves and G. J. Augustine, Neuroscience, Oxford University Press, 2018.
[6] J. Gleick, Chaos: Making a New Science, Penguin Books, 1988.
[7] W. Heisenberg, “The Actual Content of Quantum Theoretical Kinematics and Mechanics,” Zeitschrift für Physik, vol. 43, no. 3-4, pp. 172-198, 1927. Translation: NASA TM-77379.

Information and knowledge

What is information? 

If we stick to its definition, as found in dictionaries, we can see that it always refers to a set of data, often adding that these are sorted and processed. But we will see that such definitions are imprecise and even erroneous in that they assimilate information to the concept of knowledge.

One of the things that information theory has taught us is that any object (news, profile, image, etc.) can be expressed precisely by a set of bits. Therefore, the formal definition of information is the ordered set of symbols that represent the object, which in their basic form constitute an ordered set of bits. However, information theory itself surprisingly reveals that information has no intrinsic meaning, which is technically known as “information without meaning”.

This seems totally contradictory, especially if we take into account the conventional idea of what is considered information. However, it is easy to understand. Let us imagine that we find a book written in symbols that are totally unknown to us. We will immediately assume that it is a text written in an unknown language since, in our culture, that is what book-shaped objects usually contain. Thus, we begin to investigate and conclude that it is an unknown language with no reference or Rosetta stone linking it to any known language. Therefore, we have information but we do not know its message and, as a result, the knowledge contained in the text. We can even classify the symbols that appear in the text and assign them a binary code, as we do in digitization processes, converting the text into an ordered set of bits.

However, to know the content of the message we must analyze the information through a process that includes the keys for extracting the content of the message. It is exactly the same as if the message were encrypted: the message will remain hidden if the decryption key is not available, as the one-time pad encryption technique shows.
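The one-time pad makes this point in a particularly stark way: the same ciphertext is compatible with any plaintext of the same length, depending on the key. A minimal sketch (the messages are of course illustrative):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # uniform key, used only once
cipher = xor(message, key)

# With the right key the message is recovered...
assert xor(cipher, key) == message

# ...but the very same bits "decrypt" to any other message under a suitable key,
# so the ciphertext alone carries no meaning:
fake = b"RETREAT AT TEN"
fake_key = xor(cipher, fake)
assert xor(cipher, fake_key) == fake
```

Without the process that holds the key, the bit string is pure information without meaning.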

Ray Solomonoff, co-founder of Algorithmic Information Theory together with Andrey Kolmogorov. 

What is knowledge?

This clearly shows the difference between information and knowledge: information is the set of data (bits) that describe an object, while knowledge is the result of a process applied to this information and materialized in reality. In fact, reality is always subject to this scheme.

For example, suppose we are told a certain story. From the sound pressure applied to our eardrums we will end up extracting the content of the story, and we will also be able to experience subjective sensations, such as pleasure or sadness. There is no doubt that the original stimulus can be represented as a set of bits, considering that audio information can be digital content, e.g. MP3.

But for knowledge to emerge, information needs to be processed. In fact, in the previous case it is necessary to involve several different processes, among which we must highlight:

  • Biological processes responsible for the transduction of information into nerve stimuli.
  • Extraction processes of linguistic information, established by the rules of language in our brain by learning.
  • Extraction processes of subjective information, established by cultural rules in our brain by learning.

In short, knowledge is established by means of information processing. And here the debate may arise as a consequence of the diversity of processes, of their structuring, but above all because of the nature of the ultimate source from which they emerge. Countless examples can be given. But, since doubts can surely arise that this is the way reality emerges, we can try to look for a single counterexample!

A fundamental question is: can we measure knowledge? The answer is yes, and it is provided by algorithmic information theory (AIT) which, based on information theory and the theory of computation, allows us to establish the complexity of an object by means of the Kolmogorov complexity K(x), defined as follows:

For a finite object x, K(x) is defined as the length of the shortest effective binary description of x.

Without going into complex theoretical details, it is important to mention that K(x) is an intrinsic property of the object and not a property of the evaluation process. But don’t panic! Since, in practice, we are familiar with this idea.

Let’s imagine audio, video, or general bitstream content. We know that it can be compressed, significantly reducing its size. This means that the complexity of these objects is not determined by the number of bits of the original sequence but by the result of the compression, since an inverse decompression process recovers the original content. But be careful! The effective description of the object must include both the result of the compression process and the description of the decompression process needed to retrieve the message.
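In practice this means that any lossless compressor gives an upper bound on K(x): the compressed stream plus the (fixed, small) decompressor is an effective description of the object. A quick sketch with Python's zlib (the byte strings are illustrative):

```python
import random
import zlib

# A highly regular object compresses enormously; its effective description is
# the compressed bits plus the decompression procedure that restores it.
regular = b"abc" * 10_000
packed = zlib.compress(regular, level=9)
assert zlib.decompress(packed) == regular  # lossless round trip
print(len(regular), len(packed))           # 30000 bytes vs. a few dozen

# A typical (pseudo-random) object, in contrast, barely compresses at all:
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(30_000))
print(len(zlib.compress(noisy, level=9)))  # essentially 30000 bytes
```

The compressed length is only an upper bound: K(x) itself is not computable, but every working compressor witnesses how far below the raw bit count the shortest description can lie.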

Complexity of digital content, equivalent to a compression process

A similar scenario is the modeling of reality, where physical processes stand out. A model is a compact definition of a reality. For example, Newton’s universal gravitation model is the most compact definition of the behavior of a gravitational system in a non-relativistic context. In this way, the model, together with the rules of calculus and the information that defines the physical scenario, will be the most compact description of the system and constitutes what we call an algorithm. It is interesting to note that this is the formal definition of algorithm, and that the concept was not fully established until these mathematical foundations were developed in the first half of the 20th century by Kleene, Church and Turing.

Alan Turing, one of the fathers of computing

It must be considered that the physical machine that supports the process is also part of the description of the object, providing the basic functions. These are axiomatically defined and in the case of the Turing machine correspond to an extremely small number of axiomatic rules.

Structure of the models, equivalent to a decompression process

In summary, we can say that knowledge is the result of information processing. Therefore, information processing is the source of reality. But this raises the question: Since there are non-computable problems, to what depth is it possible to explore reality? 

What is the nature of the information?

Published on OPENMIND May 7, 2018

A historical perspective

Classically, information was considered the content of human-to-human transactions. However, throughout history this concept has expanded, not so much through the development of mathematical logic as through technological development. A substantial change occurred with the arrival of the telegraph in the first half of the 19th century. Thus, “sending” went from being something strictly material to a broader concept, as many anecdotes make clear. Among the most frequent were the intention of many people to send material things by telegram, or the anger of certain customers who argued that the telegraph operator had not sent the message because he returned the message note to them.

Currently, “information” is an abstract concept grounded in information theory, created by Claude Shannon in the mid-twentieth century. However, it is computer technology that has contributed most to making the concept of “bit” totally familiar. Moreover, concepts such as virtual reality, based on information processing, have become everyday terms.

The point is that information is ubiquitous in natural processes (physics, biology, economics, etc.), in such a way that these processes can be described by mathematical models and, ultimately, by information processing. This makes us wonder: what is the relationship between information and reality?

Information as a physical entity

It is evident that information emerges from physical reality, as computer technology demonstrates. The question is whether information is fundamental to physical reality or simply a product of it. In this sense, there is evidence of the strict relationship between information and energy.

Claude Elwood Shannon was a mathematician, electrical engineer and American cryptographer remembered as «the father of information theory» / Image: DobriZheglov

Thus, the Shannon-Hartley theorem of information theory establishes the minimum amount of energy required to transmit a bit, known as the Bekenstein bound. Along a different line, and in order to determine the energy consumption of the computation process, Rolf Landauer established the minimum amount of energy needed to erase a bit, a result known as the Landauer principle; its value, a function of the absolute temperature of the medium, exactly coincides with the Bekenstein bound.

These results make it possible to determine the maximum capacity of a communication channel and the minimum energy required by a computer to perform a given task. In both cases, the inefficiency of current systems is evident, their performance being extremely far from the theoretical limits. But in this context, the really important point is that the Shannon-Hartley theorem is a strictly mathematical development in which the information is ultimately coded onto physical variables, leading us to think that information is something fundamental in what we define as reality.
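The limit itself reduces to the one-line formula k_B·T·ln 2 per bit, which can be evaluated directly (a minimal sketch; the temperatures chosen are illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

# Landauer's principle: minimum energy to erase one bit at temperature T.
def landauer_limit(temperature_kelvin: float) -> float:
    return K_B * temperature_kelvin * math.log(2)

print(landauer_limit(300.0))  # ~2.87e-21 J per bit at room temperature
print(landauer_limit(4.0))    # 75x smaller at liquid-helium temperature
```

The linear dependence on T is what makes cooling pay off: lowering the detector temperature lowers the energy scale at which a bit can be distinguished from thermal noise.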

Both cases show the relationship between energy and information, but they are not conclusive in determining the nature of information. What is clear is that for a bit to emerge and be observed at the scale of classical physics, a minimum amount of energy determined by the Bekenstein bound is required. Thus, the observation of information is tied to the absolute temperature of the environment.

This behavior is fundamental in the process of observation, as becomes evident in the experimental study of physical phenomena. A representative example is the measurement of the cosmic microwave background radiation produced by the big bang, which requires the detector on board the satellite to be cooled with liquid helium. The same is true for night vision sensors, which must be cooled by a Peltier cell. By contrast, this is not necessary in a conventional camera, since the radiation emitted by the scene is much higher than the thermal noise level of the image sensor.

Cosmic Microwave Background (CMB). NASA’s WMAP satellite

This proves that information emerges from physical reality. But we can go further, since information is the basis for describing natural processes: something that cannot be observed cannot be described. In short, every observable is based on information, as is clearly evident in the mechanisms of perception.

From the emerging information it is possible to establish mathematical models that hide the underlying reality, suggesting a functional structure in irreducible layers. A paradigmatic example is the theory of electromagnetism, which accurately describes electromagnetic phenomena without relying on the photon’s existence; indeed, the existence of photons cannot be inferred from it. This is generally extendable to all physical models.

Another indication that information is a fundamental entity of what we call reality is the impossibility of transferring information faster than light; otherwise, reality would be a non-causal and inconsistent system. Therefore, from this point of view, information is subject to the same physical laws as energy. And considering a behavior such as particle entanglement, we can ask: how does information flow at the quantum level?

Is information the essence of reality?

Based on these clues, we could hypothesize that information is the essence of reality in each of the functional layers in which it is manifested. Thus, for example, if we think of space-time, its observation is always indirect through the properties of matter-energy, so we could consider it to be nothing more than the emergent information of a more complex underlying reality. This gives an idea of why the vacuum remains one of the great enigmas of physics. This kind of argument leads us to ask: What is it and what do we mean by reality?

Space-Time perception

From this perspective, we can ask what conclusions we would reach if we analyzed what we define as reality from the point of view of information theory and, in particular, of algorithmic information theory and the theory of computability. All this without losing sight of the knowledge provided by the different areas that study reality, especially physics.

 

A classic example of axiomatic processing

In the article “Reality and information: Is information a physical entity?” what we mean by information is analyzed. This is a very general review of the development of the theoretical and practical aspects that occurred throughout the twentieth century to the present day and which have led to the current vision of what information is.

The article “Reality and information: What is the nature of information?” goes deeper into this analysis. This is made from a more theoretical perspective based on the computation theory, information theory (IT) and algorithmic information theory (AIT).

But in this post we will leave aside the mathematical formalism and present some examples that give a more intuitive view of what information is and its relation to reality; above all, we will try to expose what the axiomatic processing of information means. This should help in understanding the concept of information beyond what is generally understood as a set of bits, which I consider one of the obstacles to establishing a strong link between information and reality.

Nowadays, information and computer technology offer countless examples of how what we observe as reality can be represented by a set of bits. Thus, videos, images, audio and written information can be encoded, compressed, stored and reproduced as a set of bits. This is possible because they are all mathematical objects, which can be represented by numbers subject to axiomatic rules and, therefore, by a set of bits. However, the number of bits needed to encode the object depends on the coding procedure (axiomatic rules), and the AIT determines its minimum value, defined as the entropy of the object. The AIT does not, however, provide any criteria for the implementation of the compression process, so in general implementations rely on practical criteria, for example statistical or psychophysical ones.

The AIT establishes a formal definition of the complexity of mathematical objects, called the Kolmogorov complexity K(x). For a finite object x, K(x) is defined as the length of the shortest effective binary description of x, and it is an intrinsic property of the object, not a property of the evaluation process. Without entering into theoretical details, the AIT determines that only a small part of n-bit mathematical objects can be compressed and encoded in m bits (m < n), which means that most of them have great complexity and can only be represented by themselves.
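The reason is a simple counting argument, easy to check numerically: there are 2^n strings of n bits but fewer than 2^m descriptions shorter than m bits, so at most a fraction of roughly 2^(m−n) of them is compressible.

```python
# Fraction of n-bit strings that admit a description of fewer than m bits.
def compressible_fraction(n: int, m: int) -> float:
    descriptions = sum(2 ** i for i in range(m))  # all strings shorter than m bits
    return descriptions / 2 ** n

# Fewer than 0.2% of all 100-bit strings can be described in 90 bits or fewer:
print(compressible_fraction(100, 91))
```

The fraction shrinks exponentially with every bit of compression demanded, which is why a typical object can only be represented by itself.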

The compression and decompression of video, images, audio, etc. are a clear example of axiomatic processing. Imagine a video content x which, by means of a compression process C, has generated a content y = C(x), so that by means of a decompression process D we can retrieve the original content x = D(y). In this context, both C and D are axiomatic processes, understanding as axiom a proposition assumed within a theoretical body. This may clash with the idea of an axiom as an obvious proposition accepted without proof. To clarify this point I will develop this idea in another post, using the structure of natural languages as an example.

In this context, the term axiomatic is totally justified theoretically, since the AIT does not establish any criteria for the implementation of the compression process. And, as already indicated, most mathematical objects are not compressible.

This example reveals an astonishing result of IT, known as “information without meaning”: a bit string has no meaning unless a process is applied that interprets the information and transforms it into knowledge. Thus, when we say that x is a video content, we are assuming that it responds to a video coding system matched to the visual perception capabilities of humans.

And here we come to a transcendental conclusion regarding the nexus between information and reality. Historically, the development of IT has created the tendency to establish this nexus by considering information exclusively as a sequence of bits. But AIT shows us that we must understand information as a broader concept, made up of axiomatic processes together with bit strings. For this, we must define it formally.

Thus, both C and D are mathematical objects that in practice are embodied in a set consisting of a processor and programs that encode the compression and decompression functions. If we define a processor as T(), and c and d as the bit strings that encode the compression and decompression algorithms, we can write:

         y = T(<c, x>)

         x = T(<d, y>)

where < , > denotes the concatenation of bit sequences.

Therefore, the axiomatic processing is determined by the processor T(). And if we use any of the implementations of the universal Turing machine, we will see that the number of axiomatic rules is very small. This may seem surprising considering that the above extends to the definition of any mathematical model of reality.
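As a loose sketch of this scheme (not a real Turing machine: here "T" delegates to an ordinary Python interpreter and the tape is a JSON string, all purely illustrative), the processor supplies only a small fixed rule set, while the strings c and d carry all the specific behavior:

```python
import base64
import json
import zlib

# A toy processor T: its only input is the concatenation <program, data>,
# serialized here as a JSON pair. All specific behavior lives in the program.
def T(tape: str):
    program, data = json.loads(tape)
    env = {"zlib": zlib, "base64": base64}
    exec(program, env)       # the program string must define run(data)
    return env["run"](data)

# c: a compression algorithm encoded as a character string.
c = "def run(x): return base64.b64encode(zlib.compress(x.encode())).decode()"
# d: the matching decompression algorithm.
d = "def run(y): return zlib.decompress(base64.b64decode(y)).decode()"

x = "to be or not to be " * 20
y = T(json.dumps([c, x]))            # y = T(<c, x>)
assert T(json.dumps([d, y])) == x    # x = T(<d, y>)
```

The same fixed T executes both c and d; swapping the program string changes the process without touching the processor, which is the essence of universality.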

Thus, any mathematical model that describes an element of reality can be formalized by means of a Turing machine. The result of the model can be enumerable, or Turing computable, in which case the Halt state will be reached, concluding the process. On the contrary, the problem can be undecidable, or non-computable, so that the Halt state is never reached and the process runs forever.

For example, let us consider Newtonian mechanics, determined by the laws of dynamics and the attraction exerted by the masses. In this case, the system dynamics will be determined by the recursive process w = T(<x, y, z>), where x is the bit string encoding the laws of calculus, y the bit string encoding the laws of Newtonian mechanics, and z the initial conditions of the masses constituting the system.

It is frequent, as a consequence of numerical calculus, to think that these processes are nothing more than numerical simulations of the models. However, in the above example both x and y can be the analytic expressions of the model and w = T(<x, y, z>) the analytical expression of the solution. Thus, if z specifies that the model is composed of only two massive bodies, w = T(<x, y, z>) will produce an analytical expression of the two ellipses corresponding to the ephemerides of both bodies. However, if z specifies more than two massive bodies, in general the process will not be able to produce any result, never reaching the Halt state. This is because the Newtonian model has no analytical solution for three or more orbiting bodies, except in very particular cases, a fact known as the three-body problem.

But we can instead make x and y encode the functions of numerical calculation, corresponding respectively to mathematical calculus and to the computational functions of the Newtonian model. In this case, w = T(<x, y, z>) will recursively produce the numerical description of the ephemerides of the massive bodies. However, the process will not reach the Halt state, except in very particular cases in which the process may decide that the trajectory is closed.
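A minimal numerical sketch (unit masses, G = 1, naive Euler steps; every choice here is illustrative) shows both faces of this recursive process: it can always advance one more step, yet two runs whose initial conditions differ imperceptibly end up apart, so the only way to follow either is step by step.

```python
# One Euler step for three planar bodies with unit masses and G = 1.
def step(pos, vel, dt=1e-3):
    acc = []
    for i in range(3):
        ax = ay = 0.0
        for j in range(3):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += dx / r3
            ay += dy / r3
        acc.append((ax, ay))
    vel = [(vx + a[0] * dt, vy + a[1] * dt) for (vx, vy), a in zip(vel, acc)]
    pos = [(x + v[0] * dt, y + v[1] * dt) for (x, y), v in zip(pos, vel)]
    return pos, vel

# Advance the system recursively; the fixed loop bound stands in for the fact
# that the process, in general, never reaches a Halt state on its own.
def run(perturb, steps=5000):
    pos = [(0.0, 0.0), (1.0, 0.0), (0.5 + perturb, 0.8)]
    vel = [(0.0, -0.3), (0.0, 0.3), (0.3, 0.0)]
    for _ in range(steps):
        pos, vel = step(pos, vel)
    return pos

a, b = run(0.0), run(1e-9)
gap = max(abs(pa[0] - pb[0]) + abs(pa[1] - pb[1]) for pa, pb in zip(a, b))
print(gap)  # the runs no longer coincide; only step-by-step emulation tracks them
```

The process is perfectly deterministic (two identical runs agree bit for bit), but no closed-form shortcut predicts its output.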

This behavior shows that the Newtonian model is non-computable, or undecidable. This extends to all models of nature established by physics, since they are all non-linear models. If we consider the complexity of the sequence y corresponding to the Newtonian model, in either the analytical or the numerical version, it is evident that its complexity K(y) is small. However, the complexity of w = T(<x, y, z>) is, in general, non-computable, which justifies that it cannot be expressed analytically. If this were possible, it would mean that w is an enumerable expression, in contradiction with the fact that it is non-computable.

What is surprising is that from an enumerable expression <x, y, z> we can obtain a non-computable result. But this will be addressed in another post.