A classic example of axiomatic processing

In the article “Reality and information: Is information a physical entity?” we analyze what we mean by information. It is a very general review of the theoretical and practical developments, from the twentieth century to the present day, that have led to the current vision of what information is.

The article “Reality and information: What is the nature of information?” goes deeper into this analysis, from a more theoretical perspective based on computation theory, information theory (IT) and algorithmic information theory (AIT).

But in this post, we will leave aside the mathematical formalism and present some examples that give a more intuitive view of what information is and its relation to reality. Above all, we will try to show what the axiomatic processing of information means. This should help us understand the concept of information beyond what is generally understood as a set of bits, which I consider one of the obstacles to establishing a strong link between information and reality.

Nowadays, information and computer technology offers countless examples of how what we observe as reality can be represented by a set of bits. Thus, videos, images, audio and written information can be encoded, compressed, stored and reproduced as sets of bits. This is possible because they are all mathematical objects: they can be represented by numbers subject to axiomatic rules and can, therefore, be represented by sets of bits. However, the number of bits needed to encode an object depends on the coding procedure (the axiomatic rules); the AIT determines its minimum value, defined as the entropy of the object. The AIT does not, however, provide any criteria for implementing the compression process, so in general implementations are based on practical criteria, for example statistical or psychophysical ones.
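As a minimal sketch of this idea (not an example from the article), the same object can require very different numbers of bits depending on the coding procedure. Here the standard `zlib` codec exploits statistical redundancy in a repetitive byte string:

```python
# The trivial encoding of this object takes 9000 bytes; a coding
# procedure that exploits its statistical redundancy takes far fewer.
import zlib

structured = b"abcabcabc" * 1000          # a highly redundant object
compressed = zlib.compress(structured, level=9)

assert len(structured) == 9000
assert len(compressed) < len(structured) // 10

# the inverse process recovers the original object exactly
assert zlib.decompress(compressed) == structured
```

The compressed length is still an upper bound on the object's entropy; no practical codec reaches the theoretical minimum that the AIT defines.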

The AIT establishes a formal definition of the complexity of mathematical objects, called the Kolmogorov complexity K(x). For a finite object x, K(x) is defined as the length of the shortest effective binary description of x, and is an intrinsic property of the object, not a property of the evaluation process. Without entering into theoretical details, the AIT determines that only a small fraction of n-bit mathematical objects can be compressed and encoded in m bits, with m < n, which means that most of them have great complexity and can only be represented by themselves.

The compression and decompression of video, images, audio, etc., are a clear example of axiomatic processing. Imagine a video content x which, by means of a compression process C, has generated a content y = C(x), so that by means of a decompression process D we can retrieve the original content x = D(y). In this context, both C and D are axiomatic processes, understanding an axiom as a proposition assumed within a theoretical body. This may clash with the idea that an axiom is an obvious proposition, accepted without requiring demonstration. To clarify this point I will develop this idea in another post, using the structure of natural languages as an example.

In this context, the term axiomatic is totally justified theoretically, since the AIT does not establish any criteria for the implementation of the compression process. And, as already indicated, most mathematical objects are not compressible.

This example reveals an astonishing result of IT, often described as “information without meaning”: a bit string has no meaning unless a process is applied that interprets the information and transforms it into knowledge. Thus, when we say that x is a video content, we are assuming that it responds to a video coding system matched to the visual perception capabilities of humans.

And here we come to a transcendental conclusion regarding the nexus between information and reality. Historically, the development of IT has created the tendency to establish this nexus by considering the information as a sequence of bits exclusively. But AIT shows us that we must understand information as a broader concept, made up of axiomatic processes and bit strings. But for this, we must define it in a formal way.

Thus, both C and D are mathematical objects that in practice are embodied in a set consisting of a processor and programs that encode the functions of compression and decompression. If we define a processor as T(), and c and d as the bit strings that encode the compression and decompression algorithms, we can express:

y = C(x) = T(<c,x>)
x = D(y) = T(<d,y>)
where <,> denotes the concatenation of bit sequences.
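This scheme can be sketched with a toy processor (the names c and d follow the text; modeling the concatenation <,> as a Python tuple and using zlib for the compression functions are my assumptions, purely for illustration):

```python
# A toy processor T: it takes <program, data> and applies the program
# encoded by the first element to the data in the second.
import zlib

def T(tape: tuple) -> bytes:
    """Minimal processor: <program, data> -> program(data)."""
    program, data = tape                  # <,> modeled as a tuple
    table = {b"c": zlib.compress, b"d": zlib.decompress}
    return table[program](data)

x = b"some video content, say " * 100
y = T((b"c", x))                          # y = C(x) = T(<c,x>)
assert T((b"d", y)) == x                  # x = D(y) = T(<d,y>)
```

The point of the sketch is that C and D are not primitives: they are bit strings c and d interpreted by the same small processor T.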

Therefore, the axiomatic processing would be determined by the processor T(). And if we use any of the implementations of the universal Turing machine, we will see that the number of axiomatic rules is very small. This may seem surprising considering that the above extends to the definition of any mathematical model of reality.
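Just how small the rule set can be is easy to illustrate. The following sketch (my example, not the article's) simulates the well-known 2-state, 2-symbol “busy beaver” Turing machine, which has only four rules and still shows non-trivial behaviour:

```python
# A Turing machine needs remarkably few axiomatic rules: the 2-state
# busy beaver below has only four, plus a Halt state H.
RULES = {  # (state, symbol) -> (write, move, next_state)
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}

def run(rules, state="A", max_steps=1000):
    tape, pos, steps = {}, 0, 0
    while state != "H" and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run(RULES)
assert (steps, ones) == (6, 4)    # halts after 6 steps, leaving four 1s
```

A universal machine needs a somewhat larger (but still tiny) table of rules; everything else is carried by the program on the tape.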

Thus, any mathematical model that describes an element of reality can be formalized by means of a Turing machine. The result of the model can be enumerable, i.e. Turing computable, in which case the Halt state is reached and the process concludes. On the contrary, the problem can be undecidable, i.e. non-computable, so that the Halt state is never reached and the process runs forever.

For example, let us consider Newtonian mechanics, determined by the laws of dynamics and the attraction exerted by the masses. In this case, the system dynamics will be determined by the recursive process w = T(<x,y,z>), where x is the bit string encoding the laws of calculus, y the bit string encoding the laws of Newtonian mechanics, and z the initial conditions of the masses constituting the system.

It is frequent, as a consequence of numerical calculus, to think that these processes are nothing more than numerical simulations of the models. However, in the above example, both x and y can be the analytic expressions of the model and w = T(<x,y,z>) the analytical expression of the solution. Thus, if z specifies that the model is composed of only two massive bodies, w = T(<x,y,z>) will produce an analytical expression of the two ellipses corresponding to the ephemerides of both bodies. However, if z specifies more than two massive bodies, in general the process will not be able to produce any result, never reaching the Halt state. This is because the Newtonian model has no analytical solution for three or more orbiting bodies, except in very particular cases; this is known as the three-body problem.

But we can make x and y encode the functions of numerical calculus, corresponding respectively to mathematical calculus and to the computational functions of the Newtonian model. In this case, w = T(<x,y,z>) will recursively produce the numerical description of the ephemerides of the massive bodies. However, the process will not reach the Halt state, except in very particular cases in which the process may decide that the ephemeris is a closed trajectory.
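The numerical version can be sketched as follows, for the two-body case in arbitrary units (G = m1 = m2 = 1; the leapfrog integrator and the initial conditions are my illustrative choices, standing in for the functions encoded by x, y and z):

```python
# A leapfrog integrator recursively producing the ephemerides of two
# unit masses under mutual Newtonian attraction, step by step.
import math

def leapfrog(r1, r2, v1, v2, dt, steps):
    def accel(a, b):                      # acceleration of a toward b
        dx, dy = b[0] - a[0], b[1] - a[1]
        d = math.hypot(dx, dy)
        return dx / d**3, dy / d**3
    for _ in range(steps):
        a1, a2 = accel(r1, r2), accel(r2, r1)
        v1 = (v1[0] + a1[0] * dt / 2, v1[1] + a1[1] * dt / 2)
        v2 = (v2[0] + a2[0] * dt / 2, v2[1] + a2[1] * dt / 2)
        r1 = (r1[0] + v1[0] * dt, r1[1] + v1[1] * dt)
        r2 = (r2[0] + v2[0] * dt, r2[1] + v2[1] * dt)
        a1, a2 = accel(r1, r2), accel(r2, r1)
        v1 = (v1[0] + a1[0] * dt / 2, v1[1] + a1[1] * dt / 2)
        v2 = (v2[0] + a2[0] * dt / 2, v2[1] + a2[1] * dt / 2)
    return r1, r2

# initial conditions z: two unit masses on a circular orbit, 1 apart
v = math.sqrt(0.5)
r1, r2 = leapfrog((-0.5, 0.0), (0.5, 0.0), (0.0, -v), (0.0, v), 0.001, 5000)
sep = math.hypot(r2[0] - r1[0], r2[1] - r1[1])
assert abs(sep - 1.0) < 0.05              # the separation stays ~constant
```

Note that the loop never decides on its own that the trajectory is closed: we imposed a finite step count from outside, which is exactly the point about the Halt state.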

This behaviour shows that the Newtonian model is non-computable, or undecidable. This extends to all models of nature established by physics, since they are all non-linear models. If we consider the complexity of the y sequence corresponding to the Newtonian model, in either the analytical or the numerical version, it is evident that the complexity K(y) is small. However, the complexity of w = T(<x,y,z>) is, in general, non-computable, which justifies that it cannot be expressed analytically. If this were possible, it would mean that w is an enumerable expression, contradicting the fact that it is non-computable.

What is surprising is that from an enumerable expression <x, y, z> we can get a non-computable result. But this will be addressed in another post.

What do we mean by reality?

In the article “Reality and information: Is information a physical entity?” we analyze what we mean by reality. The models established by physics are taken as a reference, since they have reached a level of formal definition not attained so far in other areas of knowledge.

One of the conclusions of this analysis is that physical models are axiomatic mathematical structures that describe an emergent reality layer without needing a connection to the underlying reality. This means that models describe reality at a given functional level. This makes reality closely linked to observation, which justifies a view of reality determined by our perception capabilities.

Consequently, reality can be structured into irreducible functional layers, and only when one looks at the edges or boundaries of the models describing the functionality of each emergent layer are there signs of another more complex underlying reality.

In this sense, physics aims to reveal the ultimate foundation of reality, an aim that has materialized in the development of quantum physics and, in particular, in the standard model of particles, although the questions raised by these models suggest a more complex reality. However, the structure of layers could have no end and, in the spirit of Gödel’s incompleteness theorem, be an undecidable problem, that is, an unsolvable one.

All this is very abstract, but an example will make it easier to understand. Consider the system of human color perception, based on three types of photoreceptors tuned to the bands of red, green and blue. Due to Heisenberg’s uncertainty principle, these photoreceptors also respond to stimuli of nearby frequencies (we could discuss this in detail in the future), as shown in the figure. As a consequence, the photoreceptors do not directly measure the frequency of color stimuli, but instead translate it into three parameters (L, M, S) corresponding to the excitation level of each type of photoreceptor.

This makes possible the synthesis of color from three components: red, green and blue in the case of additive synthesis, and yellow, cyan and magenta for subtractive synthesis. Thus, if a synthesized image is analyzed by means of spectroscopy, its spectrum would have very little to do with that of the original, even though the perceived color is the same. In the case of birds, the rainbow should hypothetically have 9 colors, since they are equipped with a fourth type of photoreceptor, sensitive to ultraviolet.

One of the consequences of this measurement system, designed by natural evolution, is that the rainbow is composed of seven colors, determined by the three peaks and the four valleys produced by the superposition of the photoreceptor responses. In addition, the system creates the perception of additional virtual colors, such as magenta and white. Magenta is the result of the simultaneous stimulation of the blue and red bands, with no green; white is the result of the simultaneous stimulation of the red, green and blue bands.
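A toy version of this measurement system can be sketched as follows. The Gaussian response curves and their peak wavelengths are illustrative assumptions of mine, not real cone data; the point is only that the magenta stimulus produces an (L, M, S) triple that no single wavelength can produce:

```python
# Toy (L, M, S) measurement: each band responds to a range of nearby
# wavelengths, modeled here as a Gaussian around an assumed peak.
import math

PEAKS = {"S": 450, "M": 540, "L": 610}    # nm, assumed band centers

def response(wavelengths, sigma=40.0):
    """(L, M, S) excitation produced by a set of monochromatic components."""
    return {k: sum(math.exp(-((w - p) / sigma) ** 2) for w in wavelengths)
            for k, p in PEAKS.items()}

magenta = response([450, 630])            # blue + red, no green
assert magenta["L"] > 0.5 and magenta["S"] > 0.5 and magenta["M"] < 0.1

# no single wavelength excites L and S this strongly at the same time
best_mono = max(min(response([w])["L"], response([w])["S"])
                for w in range(380, 701))
assert best_mono < min(magenta["L"], magenta["S"])
```

The measurement collapses an entire spectrum into three numbers, which is why distinct spectra can be perceptually identical and why “virtual” colors such as magenta exist at all.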

From the physical point of view, this color structure does not exist, since the physical parameter that characterizes a photon is its frequency (or its wavelength λ = c/f). Therefore, it can be concluded that color perception is an emergent structure of a more complex structure, determined by an axiomatic observational system. But for the moment, the analysis of the term “axiomatic” will be left for later!

This is an example of how reality emerges from more complex underlying structures, so we can say that reality and observation are inseparable. And make no mistake: although the example refers to the perception of color by humans, it is materialized in a mathematical model of information processing.

Now the question is: how far can we look into this layered structure? In the case above, physics shows by means of electromagnetism that the spectrum is continuous and includes radio waves, microwaves, infrared (heat), visible light, ultraviolet, etc. But electromagnetism is nothing more than an emergent model of a more complex underlying reality, as quantum physics shows us. Thus, electromagnetic waves are a manifestation of a flow of quantum particles: photons.

And here a much more complex reality appears, in which a photon seems to follow multiple paths simultaneously, or to have multiple frequencies, even infinitely many, until it is observed; only then are its position, energy, trajectory, etc., determined, with a precision established by Heisenberg’s uncertainty principle. And all this is described by an abstract mathematical model contrasted with observation…

The search for the ultimate reasons behind things has led physics to delve, with remarkable success, into the natural processes hidden from our systems of perception. For this purpose, experiments have been designed and detectors developed that expand our capacity for perception and have resulted in models such as the standard particle model.

The point is that, despite having increased our capacity for perception and, as a result, our knowledge, it seems that we are again in the same situation. We end up with new, much more complex, underlying abstract models of reality described in mathematical language. This is a clear sign that we cannot find an elementary entity that explains the foundation of reality, since these models presuppose the existence of complex entities. Thus, everything seems to indicate that we enter an endless loop, in which a greater perception of reality leads us to define a new abstract model, which in turn opens a new horizon of reality and, therefore, the need to go deeper into it.

As we can see, we are referring to abstract models to describe reality. For this reason, the second part of the article is dedicated to this. But we will discuss this later!

Welcome to the launch

After years of dealing with information and other subjects such as engineering, physics, and mathematics, I have decided to venture into the analysis of what we mean by reality and its relationship to information. Given the nature of the subject, I hope to stick to reason, trying to avoid any kind of speculation and not let myself be carried away by enthusiasm. Formally said, I hope that the analysis responds to “pure reason” (This is a very cool statement in the Kantian style!).

I think that when we deal with issues involving deep unknowns, we humans have a tendency to divert the analysis into other directions that can fill the void created by the lack of certainty and knowledge. I understand that those variants are the task of another department, so, barring error, they will not be treated in this context.

When writing this post I had already written several articles on the topic, which you can find in the menu bar. What I will do soon is comment on the results obtained. In the future, I hope to continue writing new articles, but given the complexity of the subject, I do not think the frequency can be more than two articles per year.

I thank you in advance for your comments and collaboration!