
What is the nature of information?

Published on OPENMIND May 7, 2018

A historical perspective

Classically, information was understood as something exchanged between humans. Throughout history, however, this concept has been expanded, not so much by the development of mathematical logic as by technological development. A substantial change occurred with the arrival of the telegraph at the beginning of the 19th century. Thus, “to send” went from being something strictly material to a broader concept, as many anecdotes make clear. Among the most frequent are the attempts of many people to send material objects by telegram, or the anger of certain customers who argued that the operator had not sent their message because he handed the message note back to them.

Currently, “information” is an abstract concept grounded in information theory, created by Claude Shannon in the mid-twentieth century. However, it is computer technology that has contributed most to making the concept of the “bit” thoroughly familiar. Moreover, concepts such as virtual reality, based on the processing of information, have become everyday terms.

The point is that information is ubiquitous in natural processes of all kinds (physics, biology, economics, etc.), in such a way that these processes can be described by mathematical models and, ultimately, by information processing. This makes us wonder: what is the relationship between information and reality?

Information as a physical entity

It is evident that information emerges from physical reality, as computer technology demonstrates. The question is whether information is fundamental to physical reality or simply a product of it. In this sense, there is evidence of a strict relationship between information and energy.

Claude Elwood Shannon was an American mathematician, electrical engineer and cryptographer remembered as «the father of information theory» / Image: DobriZheglov

Thus, the Shannon-Hartley theorem of information theory establishes the minimum amount of energy required to transmit a bit, known as the Bekenstein bound. Separately, in order to determine the energy consumption of the computation process, Rolf Landauer established the minimum amount of energy needed to erase a bit, a result known as the Landauer principle; its value coincides exactly with the Bekenstein bound, which is a function of the absolute temperature of the medium.
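
As a minimal sketch, the following Python snippet evaluates this limit, E = k·T·ln 2; the 300 K temperature is an illustrative assumption.

    import math

    K_B = 1.380649e-23  # Boltzmann constant, in J/K

    def landauer_limit(temperature_kelvin):
        """Minimum energy, in joules, needed to erase one bit: E = k*T*ln(2)."""
        return K_B * temperature_kelvin * math.log(2)

    print(landauer_limit(300.0))  # ~2.87e-21 J at room temperature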

These results make it possible to determine the maximum capacity of a communication channel and the minimum energy a computer requires to perform a given task. In both cases, the inefficiency of current systems becomes evident, as their performance is extremely far from the theoretical limits. But in this context, the really important thing is that the Shannon-Hartley theorem is a strictly mathematical development in which the information is ultimately encoded on physical variables, leading us to think that information is something fundamental in what we define as reality.
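
As an illustration of the first limit, here is a small Python sketch of the Shannon-Hartley capacity, C = B·log2(1 + S/N); the bandwidth and signal-to-noise figures are hypothetical, chosen to resemble a telephone channel.

    import math

    def channel_capacity(bandwidth_hz, snr_linear):
        """Shannon-Hartley capacity in bits per second: C = B * log2(1 + S/N)."""
        return bandwidth_hz * math.log2(1.0 + snr_linear)

    # Hypothetical figures: 3 kHz bandwidth, 30 dB SNR (a linear ratio of 1000).
    print(channel_capacity(3000.0, 1000.0))  # ~29,900 bits per second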

Both cases show the relationship between energy and information, but they are not conclusive in determining the nature of information. What is clear is that for a bit to emerge and be observed at the scale of classical physics, a minimum amount of energy is required, determined by the Bekenstein bound. The observation of information is therefore tied to the absolute temperature of the environment.

This behavior is fundamental in the process of observation, as becomes evident in the experimental study of physical phenomena. A representative example is the measurement of the microwave background radiation produced by the big bang, which requires that the detector aboard the satellite be cooled by liquid helium. The same is true of night vision sensors, which must be cooled by a Peltier cell. By contrast, this is not necessary in a conventional camera, since the radiation emitted by the scene is much higher than the thermal noise level of the image sensor.

Cosmic Microwave Background (CMB). NASA’s WMAP satellite

This shows that information emerges from physical reality. But we can go further, since information is the basis for describing natural processes. Therefore, something that cannot be observed cannot be described. In short, every observable is based on information, something clearly evident in the mechanisms of perception.

From the emerging information it is possible to establish mathematical models that hide the underlying reality, suggesting a functional structure in irreducible layers. A paradigmatic example is the theory of electromagnetism, which accurately describes electromagnetic phenomena without relying on the existence of the photon; indeed, the existence of photons cannot be inferred from it. This is generally extendable to all physical models.

Another indication that information is a fundamental entity of what we call reality is the impossibility of transferring information faster than light; otherwise, reality would be a non-causal and inconsistent system. Therefore, from this point of view, information is subject to the same physical laws as energy. And considering a behavior such as particle entanglement, we can ask: how does information flow at the quantum level?

Is information the essence of reality?

Based on these clues, we could hypothesize that information is the essence of reality in each of the functional layers in which it is manifested. Thus, for example, if we think of space-time, its observation is always indirect, through the properties of matter-energy, so we could consider it to be nothing more than the emergent information of a more complex underlying reality. This gives an idea of why the vacuum remains one of the great enigmas of physics. This kind of argument leads us to ask: what is reality, and what do we mean by it?

Space-Time perception

From this perspective, we can ask what conclusions we could reach if we analyzed what we define as reality from the point of view of information theory and, in particular, of algorithmic information theory and the theory of computability. All this without losing sight of the knowledge provided by the different areas that study reality, especially physics.


A classic example of axiomatic processing

In the article “Reality and information: Is information a physical entity?” we analyze what we mean by information. It is a very general review of the theoretical and practical developments, from the twentieth century to the present day, that have led to the current vision of what information is.

The article “Reality and information: What is the nature of information?” goes deeper into this analysis, from a more theoretical perspective based on computability theory, information theory (IT) and algorithmic information theory (AIT).

But in this post we will leave aside the mathematical formalism and present some examples that give a more intuitive view of what information is and how it relates to reality. Above all, we will try to explain what the axiomatic processing of information means. This should help us understand the concept of information beyond what is generally understood as a mere set of bits, which I consider one of the obstacles to establishing a strong link between information and reality.

Nowadays, information and computer technology offer countless examples of how what we observe as reality can be represented by a set of bits. Thus, video, images, audio and written information can be encoded, compressed, stored and reproduced as sets of bits. This is possible because they are all mathematical objects, which can be represented by numbers subject to axiomatic rules and can, therefore, be represented by a set of bits. However, the number of bits needed to encode an object depends on the coding procedure (the axiomatic rules); AIT determines the minimum value, defined as the entropy of the object. AIT does not provide any criteria for implementing the compression process, though, so implementations are generally based on practical criteria, for example statistical or psychophysical ones.
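
A quick illustration of this dependence, using Python’s zlib as one particular set of axiomatic rules (other codecs would give different sizes): a regular object compresses far below its raw length, while a random one barely compresses at all.

    import os
    import zlib

    structured = b"abc" * 10000      # highly regular content: low entropy
    random_like = os.urandom(30000)  # random bytes: incompressible with high probability

    print(len(zlib.compress(structured)))   # a few dozen bytes
    print(len(zlib.compress(random_like)))  # close to the original 30,000 bytes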

AIT establishes a formal definition of the complexity of mathematical objects, called the Kolmogorov complexity K(x). For a finite object x, K(x) is defined as the length of the shortest effective binary description of x, and it is an intrinsic property of the object, not a property of the evaluation process. Without entering into theoretical details, AIT determines that only a small fraction of n-bit mathematical objects can be compressed and encoded in m bits, with m < n; this follows from a simple counting argument, since there are 2^n strings of n bits but fewer than 2^m descriptions shorter than m bits. Most objects therefore have great complexity and can only be represented by themselves.
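
The counting argument can be tallied in a few lines of Python; the choice of n = 20 is illustrative.

    # There are 2**n strings of n bits, but only 2**m - 1 binary
    # descriptions shorter than m bits, so most strings cannot be
    # compressed by more than a few bits.
    n = 20
    for m in (n - 1, n - 5, n - 10):
        shorter = 2**m - 1           # descriptions of length < m
        fraction = shorter / 2**n
        print(f"strings compressible below {m} bits: at most {fraction:.2%}")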

The compression and decompression of video, images, audio, etc., are a clear example of axiomatic processing. Imagine a video content x which, by means of a compression process C, has generated a content y=C(x), so that by means of a decompression process D we can retrieve the original content x=D(y). In this context, both C and D are axiomatic processes, understanding by axiom a proposition assumed within a theoretical body. This may clash with the idea that an axiom is an obvious proposition accepted without proof. To clarify this point I will develop the idea in another post, using the structure of natural languages as an example.

In this context, the term axiomatic is fully justified theoretically, since AIT does not establish any criteria for the implementation of the compression process. And, as already indicated, most mathematical objects are not compressible.

This example reveals an astonishing result of IT, described as “information without meaning”: a bit string has no meaning unless a process is applied that interprets the information and transforms it into knowledge. Thus, when we say that x is a video content, we are assuming that it responds to a video coding system matched to the visual perception capabilities of humans.
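
A tiny illustration: the same four bytes acquire a meaning only through the process that interprets them (the byte values here are arbitrary).

    raw = bytes([72, 105, 33, 10])

    print(raw.decode("ascii"))         # interpreted as text: "Hi!" plus a newline
    print(int.from_bytes(raw, "big"))  # interpreted as an integer: 1214849290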

And here we come to a transcendental conclusion regarding the nexus between information and reality. Historically, the development of IT has created the tendency to establish this nexus by considering information exclusively as a sequence of bits. But AIT shows us that we must understand information as a broader concept, made up of axiomatic processes together with bit strings. For this, we must define it formally.

Thus, both C and D are mathematical objects that in practice are embodied in a set consisting of a processor and programs encoding the compression and decompression functions. If we define a processor as T(), and c and d as the bit strings that encode the compression and decompression algorithms, we can express:

         y=T(<c,x>)

         x=T(<d,y>)

where <,> is the concatenation of bit sequences.
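
A minimal sketch of these two expressions in Python, with a small function standing in for the processor T() and zlib standing in for the compression and decompression axioms; all names and encodings here are illustrative assumptions, and the pair <c,x> is modeled simply as two arguments.

    import zlib

    def T(program, data):
        """A toy 'processor': executes a program (Python source defining f) on data."""
        env = {"zlib": zlib}
        exec(program, env)     # load the program's definition of f
        return env["f"](data)

    c = "def f(x): return zlib.compress(x)"    # bit string encoding compression
    d = "def f(y): return zlib.decompress(y)"  # bit string encoding decompression

    x = b"some video content stands here " * 200
    y = T(c, x)           # y = T(<c,x>)
    assert T(d, y) == x   # x = T(<d,y>)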

Therefore, the axiomatic processing would be determined by the processor T(). And if we examine any of the implementations of the universal Turing machine, we will see that the number of axiomatic rules is very small. This may seem surprising, considering that the above is extendable to the definition of any mathematical model of reality.

Thus, any mathematical model that describes an element of reality can be formalized by means of a Turing machine. The result of the model can be enumerable, or Turing computable, in which case the Halt state is reached and the process concludes. Otherwise, the problem is undecidable, or non-computable, so the Halt state is never reached and the process runs forever.

For example, let us consider Newtonian mechanics, determined by the laws of dynamics and the attraction exerted by the masses. In this case, the system dynamics will be determined by the recursive process w=T(<x,y,z>), where x is the bit string encoding the laws of calculus, y the bit string encoding the laws of Newtonian mechanics, and z the initial conditions of the masses constituting the system.

It is frequent, as a consequence of numerical calculus, to think that these processes are nothing more than numerical simulations of the models. However, in the above example, both x and y can be the analytic expressions of the model, and w=T(<x,y,z>) the analytical expression of the solution. Thus, if z specifies that the model is composed of only two massive bodies, w=T(<x,y,z>) will produce an analytical expression of the two ellipses corresponding to the ephemerides of both bodies. However, if z specifies more than two massive bodies, the process will in general not be able to produce any result, never reaching the Halt state. This is because the Newtonian model has no analytical solution for three or more orbiting bodies, except in very particular cases, a result known as the three-body problem.

But we can make x and y encode the functions of numerical calculus, corresponding respectively to the mathematical calculus and to the computational functions of the Newtonian model. In this case, w=T(<x,y,z>) will recursively produce the numerical description of the ephemerides of the massive bodies. However, the process will not reach the Halt state, except in very particular cases in which the process can decide that the ephemeris is a closed trajectory.
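
A minimal numerical sketch of this version of w=T(<x,y,z>): a naive Euler integrator for point masses under Newtonian gravity. Units, step size and the two-body initial conditions z are illustrative assumptions; note that the loop enumerates the ephemerides step by step and would never halt on its own.

    G = 1.0    # gravitational constant, in arbitrary units
    DT = 1e-3  # integration step

    def step(bodies):
        """One Euler step for bodies given as (mass, (px, py), (vx, vy)) tuples."""
        new = []
        for i, (m, (px, py), (vx, vy)) in enumerate(bodies):
            ax = ay = 0.0
            for j, (mj, (qx, qy), _) in enumerate(bodies):
                if i == j:
                    continue
                dx, dy = qx - px, qy - py
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += G * mj * dx / r3
                ay += G * mj * dy / r3
            new.append((m, (px + vx * DT, py + vy * DT),
                           (vx + ax * DT, vy + ay * DT)))
        return new

    # z: two equal masses on a bound mutual orbit
    bodies = [(1.0, (0.0, 0.0), (0.0, -0.5)),
              (1.0, (1.0, 0.0), (0.0, 0.5))]
    for _ in range(10000):
        bodies = step(bodies)
    print(bodies[0][1], bodies[1][1])  # positions after 10,000 steps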

This behaviour shows that the Newtonian model is non-computable, or undecidable. This extends to all the models of nature established by physics, since they are all non-linear models. If we consider the complexity of the sequence y corresponding to the Newtonian model, whether in the analytical or in the numerical version, it is evident that its complexity K(y) is small. However, the complexity of w=T(<x,y,z>) is, in general, non-computable, which explains why it cannot be expressed analytically. If that were possible, it would mean that w is an enumerable expression, in contradiction with the fact that it is non-computable.

What is surprising is that from an enumerable expression <x,y,z> we can get a non-computable result. But this will be addressed in another post.

What do we mean by reality?

In the article “Reality and information: Is information a physical entity?” we analyze what we mean by reality, taking the models established by physics as a reference, since they have reached a level of formal definition not yet attained in other areas of knowledge.

One of the conclusions of this analysis is that physical models are axiomatic mathematical structures that describe an emerging layer of reality without needing any connection to the underlying reality. This means that models describe reality at a given functional level, which makes reality closely linked to observation and justifies a view of reality determined by our perception capabilities.

Consequently, reality can be structured into irreducible functional layers, and only when one looks at the edges or boundaries of the models describing the functionality of each emergent layer do signs of a more complex underlying reality appear.

In this sense, physics aims to reveal the ultimate foundation of reality, an aim that has materialized in the development of quantum physics and, in particular, of the standard model of particles, although the questions these raise suggest a still more complex reality. Moreover, the structure of layers could have no end and, according to Gödel’s incompleteness theorem, be an undecidable problem, that is, an unsolvable one.

All this is very abstract, but an example can make it clearer. Consider the human color perception system, based on three types of photoreceptors tuned to the red, green and blue bands. Due to Heisenberg’s uncertainty principle, these photoreceptors also respond to stimuli at nearby frequencies (something we could discuss in detail in the future), as shown in the figure. As a consequence, the photoreceptors do not directly measure the frequency of color stimuli; instead, they translate it into three parameters (L, M, S) corresponding to the excitation level of each type of photoreceptor.
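
As a rough sketch, assuming Gaussian response curves (the real cone sensitivities are not Gaussian, and the peak wavelengths and width below are only approximate), the translation of a monochromatic stimulus into the (L, M, S) triple could be modeled like this:

    import math

    PEAKS = {"L": 565.0, "M": 535.0, "S": 445.0}  # approximate peak wavelengths, nm
    WIDTH = 40.0                                  # assumed response width, nm

    def cone_response(wavelength_nm):
        """Translate a monochromatic stimulus into three excitation levels (L, M, S)."""
        return {name: math.exp(-((wavelength_nm - peak) ** 2) / (2 * WIDTH ** 2))
                for name, peak in PEAKS.items()}

    # The eye measures three numbers, not the full spectrum, so different
    # spectra can produce the same (L, M, S) triple.
    print(cone_response(580.0))  # a yellowish stimulus: strong L and M, almost no S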

This makes it possible to synthesize color from three components: red, green and blue in additive synthesis, and yellow, cyan and magenta in subtractive synthesis. Consequently, if a synthesized image were analyzed by spectroscopy, its spectrum would have very little to do with that of the original scene, even though the perceived colors match. In the case of birds, the rainbow should hypothetically have nine colors, since they are equipped with a fourth type of photoreceptor sensitive to ultraviolet.

One of the consequences of this measurement system, designed by natural evolution, is that the rainbow is composed of seven colors, determined by the three peaks and the four valleys produced by the superposition of the photoreceptor responses. In addition, the system creates the perception of additional virtual colors, such as magenta and white. Magenta is the result of the simultaneous stimulation of the bands above the blue and below the red, while white is the result of the simultaneous stimulation of the red, green and blue bands.

From the physical point of view, this color structure does not exist, since the physical parameter that characterizes a photon is its frequency f (or its wavelength λ = c/f). Therefore, it can be concluded that color perception is an emergent structure of a more complex underlying one, determined by an axiomatic observational system. But for the moment, the analysis of the term “axiomatic” will be left for later!

This is an example of how reality emerges from more complex underlying structures, so we can say that reality and observation are inseparable terms. And make no mistake: although the example refers to the perception of color by humans, it materializes in a mathematical model of information processing.

Now the question is: how far can we look into this layered structure? In the above case, physics shows, by means of electromagnetism, that the spectrum is continuous and includes radio waves, microwaves, infrared (heat), visible light, ultraviolet, etc. But electromagnetism is nothing more than an emergent model of a more complex underlying reality, as quantum physics shows us: electromagnetic waves are a manifestation of a flow of quantum particles, photons.

And here a much more complex reality appears, in which a photon seems to follow multiple paths simultaneously, or to have multiple frequencies simultaneously, even infinitely many, until it is observed, at which point its position, energy, trajectory, etc., are determined with a precision established by Heisenberg’s uncertainty principle. And all this is described by an abstract mathematical model contrasted with observation.

The search for the ultimate reasons behind things has led physics to delve, with remarkable success, into natural processes hidden from our systems of perception. For this purpose, experiments have been designed and detectors developed that expand our capacity for perception and that have resulted in models such as the standard model of particles.

The point is that, despite having increased our capacity for perception and, as a result, our knowledge, it seems that we find ourselves in the same situation again. We now have new, much more complex, underlying abstract models of reality, described in mathematical language. This is a clear sign that we cannot find an elementary entity that explains the foundation of reality, since these models presuppose the existence of complex entities. Thus, everything seems to indicate that we have entered an endless loop, in which a greater perception of reality leads us to define a new abstract model, which in turn opens a new horizon of reality and, therefore, the need to go deeper into it.

As we can see, we keep referring to abstract models to describe reality. For this reason, the second part of the article is dedicated to them. But we will discuss this later!