
Biology as an axiomatic process

The replication mechanisms of living beings can be compared with the self-replication of automata in the context of computability theory. In particular, DNA replication, analyzed from the perspective of the recursion theorem, indicates that its replication structure goes beyond biology and the quantum mechanisms that support it, as analyzed in the article Biology as an Axiomatic Process.

Physical chemistry establishes the principles by which atoms interact with each other to form molecules. In the inorganic world the resulting molecules are relatively simple, which does not allow a complex functional structure to be established. In the organic world, on the other hand, molecules can be made up of thousands or even millions of atoms and have complex functionality. Of particular note is what is known as molecular recognition, through which molecules interact with each other selectively and which is the basis of biology.

Molecular recognition plays a fundamental role in the structure of DNA, in the translation of the genetic code of DNA into proteins and in the biochemical interaction of proteins, which ultimately form the foundation of living beings.

The detailed study of these molecular interactions makes it possible to describe the functionality of the processes and to establish formal models, to the extent that they can be used as a computing technology, as is the case with DNA-based computing.

From this perspective, we can ask whether information processing is something deeper and whether it is in fact the foundation of biology itself, in accordance with what is established by the principle of reality.

For this purpose, this section aims to analyze the basic processes on which biology is based, in order to establish a link with axiomatic processing and thus investigate the nature of biological processes. It is not necessary to describe in detail the biological mechanisms covered in the literature; we will simply describe their functionality, so that they can be identified with the theoretical foundations of information processing. To this end, we will outline the mechanisms on which DNA replication and protein synthesis are based.

DNA and RNA molecules are polymers of nucleotides built on deoxyribose and ribose sugars, respectively, linked by phosphates. To each nucleotide of this chain one of four possible nitrogenous bases can be attached. There are five different bases: adenine (A), guanine (G), cytosine (C), thymine (T) and uracil (U). In the case of DNA, the bases that can be coupled by covalent bonds to the nucleotides are A, G, C and T, whereas in the case of RNA they are A, G, C and U. As a consequence, the molecules are structured in the shape of a helix, with the bases fitting together in a precise and compact way due to the shape of their electron clouds.

The helix structure allows the bases of two different strands to be bound together by hydrogen bonds, forming the pairs A-T and G-C in the case of DNA, and A-U and G-C in the case of RNA, as shown in the following figure.

Base-pairing of nucleic acids in DNA

As a result, the DNA molecule is formed by a double helix, in which two nucleotide polymer chains wind around each other, held together by the hydrogen bonds between their bases. Each strand of the DNA molecule thus contains the same genetic code, one strand being, so to speak, the negative of the other.

Double helix structure of DNA molecule

The genetic information of an organism, called its genome, is not contained in a single DNA molecule but is organized into chromosomes, which are made up of DNA strands bound together by proteins. In the case of humans, the genome is formed by 46 chromosomes, and the number of bases in the DNA molecules that compose it is about 3×10⁹. Since each base can be encoded by means of 2 bits, the human genome, considered as an object of information, is equivalent to 6×10⁹ bits.
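
As a quick back-of-the-envelope check of these figures, the following sketch in Python simply reproduces the arithmetic, using the approximate values quoted above:

    # Rough estimate of the human genome as an information object,
    # using the approximate figures quoted above.
    bases = 3e9                # ~3x10^9 bases in the human genome
    bits_per_base = 2          # 4 possible bases -> log2(4) = 2 bits per base
    total_bits = bases * bits_per_base
    print(f"{total_bits:.1e} bits (~{total_bits / 8 / 1e9:.2f} GB uncompressed)")
    # -> 6.0e+09 bits (~0.75 GB)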

The information contained in the genes is the basis for the synthesis of proteins, which are responsible for executing and controlling the biochemistry of living beings. Proteins are formed by the covalent bonding of amino acids, in an order determined by the sequences of bases contained in the DNA. There are 20 amino acids and, since each base encodes 2 bits, 3 bases (6 bits, 64 combinations) are necessary to encode each of them. This means that there is some redundancy in the assignment of base sequences to amino acids, in addition to control codes for the synthesis process (Stop), as shown in the following table.

Translation of base triplets (codons) into amino acids
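
The counting behind this redundancy can be checked with a few lines of Python; the figures (64 codons against 20 amino acids plus the Stop signal) are the ones given above:

    from itertools import product

    # 3 bases per codon, 4 possible bases -> 4**3 = 64 combinations,
    # which must cover 20 amino acids plus the Stop signal (21 meanings).
    codons = ["".join(p) for p in product("ACGU", repeat=3)]
    meanings = 20 + 1
    print(len(codons), meanings, len(codons) - meanings)   # -> 64 21 43 redundant codes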

However, protein synthesis is not done directly from the DNA, since it requires the intermediation of RNA. This involves two different types of RNA molecules: messenger RNA (mRNA) and transfer RNA (tRNA). The first step is the synthesis of mRNA from the DNA. This process is called transcription: the information corresponding to a gene is copied into the mRNA molecule through a process of recognition between the bases, carried out by the hydrogen bonds, as shown in the following figure.

DNA transcription

Once the mRNA molecule has been synthesized, the tRNA molecule is responsible for mediating between the mRNA and the amino acids in order to synthesize proteins, for which it has two specific molecular mechanisms. At one end, tRNA has a sequence of three bases called the anticodon. At the opposite end, tRNA binds to a specific amino acid, according to the table that translates base sequences into amino acids. In this way, tRNA is able to translate the mRNA into a protein, a process known as translation, as shown in the figure below.

Protein synthesis (mRNA translation)
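
Functionally, the two steps just described (transcription and translation) can be sketched as a pair of string-processing functions. The following Python sketch is purely illustrative: the base-pairing rule is the real one, but the codon table is reduced to a handful of entries (the actual standard code has 64):

    # Illustrative sketch of transcription (DNA template -> mRNA) and
    # translation (mRNA codons -> amino acids). Partial codon table only.
    PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}
    CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly",
                   "UAA": "Stop", "UAG": "Stop", "UGA": "Stop"}

    def transcribe(dna_template):
        # Molecular recognition modelled as a lookup: each base selects its partner.
        return "".join(PAIRING[b] for b in dna_template)

    def translate(mrna):
        # Read the mRNA three bases at a time, stopping at a Stop codon.
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i + 3], "?")
            if residue == "Stop":
                break
            protein.append(residue)
        return protein

    mrna = transcribe("TACAAACCG")        # -> "AUGUUUGGC"
    print(mrna, translate(mrna))          # -> AUGUUUGGC ['Met', 'Phe', 'Gly']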

But the most complex process is undoubtedly DNA replication, by which each molecule produces two identical replicas. Replication is performed by unwinding the two strands of the molecule and attaching nucleotides to each of them, in a manner similar to that shown for mRNA synthesis. DNA replication is controlled by enzymatic processes supported by proteins. Without going into detail, and in order to show its complexity, the table below lists the proteins involved in the replication process and their roles.

The role of proteins in the DNA replication process

The processes described above constitute what is known as the central dogma of molecular biology and are usually represented schematically as shown in the following figure. The figure also depicts the reverse transcription that occurs in retroviruses, which synthesizes a DNA molecule from RNA.

Central dogma of molecular biology

The biological process from the perspective of computability theory

Molecular processes supported by DNA, RNA and proteins can be considered, from an abstract point of view, as information processes: input sentences belonging to a language are processed, producing new output sentences. Thus, the following languages can be identified:

  • DNA molecule. A sentence consisting of a sequence of characters from a 4-symbol alphabet.
  • RNA molecule (protein synthesis). A sentence consisting of a sequence of characters from a 21-symbol alphabet, since the RNA is read as codons that map to the 20 amino acids plus the Stop signal, as illustrated in the sketch after this list.
  • RNA molecule (reverse transcription). A sentence consisting of a sequence of characters from a 4-symbol alphabet.
  • Protein molecule. A sentence consisting of a sequence of characters from a 20-symbol alphabet.
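
A minimal sketch of these four alphabets, written out in Python only to make the counts explicit (the amino acids use the standard three-letter abbreviations):

    # The four "languages" above, expressed as alphabets (illustrative only).
    DNA_ALPHABET = set("ACGT")                       # 4 symbols
    RNA_ALPHABET = set("ACGU")                       # 4 symbols (reverse-transcription view)
    AMINO_ACIDS = {"Ala", "Arg", "Asn", "Asp", "Cys", "Gln", "Glu", "Gly", "His", "Ile",
                   "Leu", "Lys", "Met", "Phe", "Pro", "Ser", "Thr", "Trp", "Tyr", "Val"}
    CODON_MEANINGS = AMINO_ACIDS | {"Stop"}          # 21 symbols when RNA is read as codons
    print(len(DNA_ALPHABET), len(RNA_ALPHABET), len(AMINO_ACIDS), len(CODON_MEANINGS))
    # -> 4 4 20 21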

This information is processed by the machinery established by the physicochemical properties of the control molecules. To better understand this functional structure, it is helpful to redraw the scheme corresponding to the central dogma of molecular biology, representing the processes involved and the information that flows between them, as shown in the following block diagram.

Functional structure of DNA replication

This structure highlights the flow of information between processes, in the form of DNA and RNA sentences, where the functional blocks of information processing are the following:

  • PDNA. Replication process. The functionality of this process is determined by the proteins involved in DNA synthesis, producing two replicas of DNA from a single molecule.
  • PRNA. Transcription process. It synthesizes an RNA molecule from a gene encoded in the DNA.
  • PProt. Translation process. It synthesizes a protein from an RNA molecule.

This structure clearly shows how information emerges from biological processes, something that seems to be ubiquitous in natural models and that makes the implementation of computing systems possible. In all cases this capacity is ultimately supported by quantum physics. In the particular case of biology, it arises from the physicochemical properties of molecules, which are determined by quantum physics. Therefore, information processing is something that emerges from an underlying reality and, ultimately, from quantum physics, at least as far as current knowledge reaches.

This means that, although there is a strong link between reality and information, information is simply an emergent product of reality. But biology provides a clue to the intimate relationship between reality and information, which are ultimately indistinguishable concepts. If we look at the DNA replication process, we see that DNA is produced in several stages of processing:

DNA → RNA → Proteins → DNA.

We could consider this to be a specific feature of the biological process. However, computability theory indicates that the replication process is subject to logical rules deeper than the physical processes that support it. In computability theory, the recursion theorem determines that the replication of information requires the intervention of at least two independent processes.
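
The classic computational illustration of this requirement is a self-replicating program, or quine: it needs a constructor part that operates on a passive description of the whole program. A minimal sketch in Python (the comments are for the reader and are not part of the self-copy):

    # Part B: a passive description of the program, treated as data.
    s = 's = %r\nprint(s %% s)'
    # Part A: a constructor that applies the description to itself,
    # printing an exact copy of the two executable lines of this program.
    print(s % s)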

This shows that DNA replication is subject to abstract rules that must be satisfied not only by biology but by every natural process. Therefore, the physical foundations that support biological processes must meet this requirement. Consequently, information processing is essential to what we understand by reality.

Natural language: A paradigm of axiomatic processing

The Theory of Computation (TC) aims to establish computational models and to determine the limits of what is computable and the complexity of a problem when it is computable. The formal models established by TC are based on abstract systems ranging from simple models, such as finite automata, to the general computing model established by the Turing Machine (TM).

Formally, the concept of algorithm is based on the TM, so that each of its possible implementations performs a specific function that we call an algorithm. TC demonstrates that it is possible to build an idealized machine, called the Universal Turing Machine (UTM), capable of executing every possible computable algorithm. Commercial computers are equivalent to a UTM, with the difference that their memory and run time are limited, whereas in the UTM these resources are unlimited.

But the question we can ask is: what does this have to do with language? The answer is simple. In TC, a language L(TM) is defined as the set of bit sequences that a given TM “accepts”, where the term “accept” means that the TM analyzes the input sequence and reaches the Halt state. Consequently, a language is the set of mathematical objects accepted by a given TM.

Without going into details that can be consulted in the specialized literature, TC classifies languages into two basic types, as shown in the figure. A language is Turing-decidable (DEC) when a TM accepts the sequences belonging to the language and rejects the rest, reaching the Halt state in both cases. In contrast, a language is Turing-recognizable (RE) if it is the language of some TM. This means that, for the languages belonging to RE but not to DEC, the TM does not reach the Halt state when the input sequence does not belong to the language.
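
A toy sketch of the difference, not with real Turing machines but with two Python functions for the same simple language (binary strings with an even number of 1s): the decider always halts, while the recognizer only halts on members of the language.

    def decider_even_ones(w):
        # Always halts: accept iff the number of 1s is even, reject otherwise.
        return w.count("1") % 2 == 0

    def recognizer_even_ones(w):
        # Halts (accepts) on members of the language; loops forever on the rest.
        if w.count("1") % 2 == 0:
            return True
        while True:
            pass                       # never reaches a Halt state

    print(decider_even_ones("1011"))   # False, and it halts
    # recognizer_even_ones("1011")     # would never return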

It should be emphasized that there are sets of sequences that are not recognized by any TM. Therefore, if the formal definition of language is taken into account, they should not be considered languages, although in general they are referred to as non-RE languages. It is important to note that this concept is equivalent to Gödel’s incompleteness theorem. As a consequence, they form the set of undecidable or unsolvable problems, whose cardinality is greater than that of the natural numbers.

Within DEC languages, two types can be identified: regular and context-free (CFL). Regular languages are those composed of a set of sequences on which the TM can decide individually, so they do not have a grammatical structure. Examples of these are the languages of the automata we handle every day, such as elevators, device controls, etc. CFLs are those that have a formal structure (grammar) in which language elements can be nested recursively. In general, we can treat programming languages, such as Java or C++, as CFLs. This is not strictly true, but it will make it easier to present certain concepts.

But the question is: what does this have to do with natural language? The answer is easy again. Natural language is, in principle, a Turing-decidable language, and the proof of this is trivial. Perhaps a few decades ago this was not so, but nowadays information technology shows it to us clearly, without the need for theoretical knowledge. On the one hand, natural language is a sequence of bits, since both spoken and written language are coded as bit sequences in audio and text files, respectively. On the other hand, humans do not loop when we receive a message, at least not permanently ;-).

However, it can be argued that we do not reach the Halt state either. In this context, that does not mean that we literally end our existence, although there are messages that kill! It means that the information processing concludes and that, as a result, we can make a decision and tackle a new task.

Therefore, from an operational or practical point of view, natural language is Turing-decidable. But we can find arguments that conflict with this and that materialize in the form of contradictions. Although it may seem surprising, this also happens with programming languages, since their grammar may be context-sensitive (CSG). For now, we will leave this aspect aside in order to make the reasoning easier.

What can intuitively be seen is a clear parallel between the TM model and the human communication model, as shown in the figure. This can be extended to other communication models, such as body language, physicochemical language between molecules, etc.

In the case of TC, the input and output objects of the TM are language elements, which is very appropriate because the practical objective is human-to-machine or machine-to-machine communication. But this terminology varies with the context. Thus, from an abstract point of view, the objects have a purely mathematical nature, whereas in other contexts, such as physics, we talk about concepts such as space-time, energy, momentum, etc.

What seems to be clear, from the observable models, is that a model of reality is equivalent to bit sequences processed by a TM. In short, a model of reality is equivalent to an axiomatic processing of information, where the axioms are embedded in the TM. It should be made clear that an axiom is not a self-evident truth that needs no proof; rather, an axiom is a proposition assumed within a theoretical body. Possibly this misunderstanding originates in the apparent simplicity of some axiomatic systems, produced by our perception of reality. This is obvious, for example, in Euclidean geometry, based on five postulates or axioms that seem evident to us because of our perception of space. We will continue to insist on this point, since axiomatic processing is surely one of the great mysteries that nature encloses.

Returning to natural language, it should be possible to establish a parallelism between it and the axiomatic processing performed by a TM, as suggested in the figure. As with programming languages, the structure of natural language is defined by a grammar, which establishes a set of axiomatic rules that determine the categories (verb, predicate, etc.) of the elements of the language (the lexicon) and how they are combined to form expressions (sentences). Both the elements of the language and the resulting expressions have a meaning, which is known as the semantics of the language. The pertinent question is: what is the axiomatic structure of a natural language?

To answer, let us reorient the question: how is the semantics of natural language defined? We can begin by analyzing the definitions of the lexicon of a language, as collected in a dictionary. There we can find the definition of the meaning of each word in different contexts. But we soon run into a formal problem, since the definitions are based on one another in a circular fashion. In other words, the term being defined is part of its own definition, so it is not possible to establish the semantics of a language from linguistic information alone.

For example, according to the Oxford dictionary:

  • Word: A single distinct meaningful element of speech or writing, used with others (or sometimes alone) to form a sentence and typically shown with a space on either side when written or printed.
  • Write: Mark (letters, words, or other symbols) on a surface, typically paper, with a pen, pencil, or similar implement. 
  • Sentence: A set of words that is complete in itself, typically containing a subject and predicate, conveying a statement, question, exclamation, or command, and consisting of a main clause and sometimes one or more subordinate clauses. 
  • Statement: A definite or clear expression of something in speech or writing
  • Expression: A word or phrase, especially an idiomatic one, used to convey an idea. 
  • Phrase: A small group of words standing together as a conceptual unit, typically forming a component of a clause

Therefore:

  • Word: A single distinct … or marks (letters, words, or other symbols) on … to form a set of words that … conveying a definite or clear word or a small group of words standing together … or marking (letters, words, …. ) …

In this way, we could continue recursively replacing each component of the definition with its own meaning, arriving at the conclusion that natural language, as an isolated entity, has no meaning. It is therefore necessary to establish an axiomatic basis external to the language itself. By the way: what would happen if we kept replacing each component of the sentence?
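
This recursive substitution can be mimicked with a short sketch. The toy dictionary below contains shortened paraphrases of the definitions quoted above, just to show how the expansion grows and how the defined word reappears inside its own expansion:

    # Toy dictionary with shortened paraphrases of the entries quoted above.
    DEFS = {
        "word": "element of speech used to form a sentence",
        "sentence": "set of words conveying a statement",
        "statement": "clear expression of something in speech",
        "expression": "word or phrase used to convey an idea",
        "phrase": "small group of words forming a unit",
    }

    def expand(text, depth):
        # Replace every defined term by its definition, `depth` times.
        for _ in range(depth):
            text = " ".join(DEFS.get(t, t) for t in text.split())
        return text

    for d in range(5):
        print(d, len(expand("word", d).split()), "tokens")
    # The expansion keeps growing, and by depth 4 the term "word" is back
    # inside its own expansion: the circularity never bottoms out.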

Consequently, we can ask what the result would be of an experiment in which an artificial intelligence entity, disconnected from all reality except for the information on which the written language is based, analyzes that information. That is, the entity has access to the grammar, the dictionary, written works, etc. What will be the result of the experiment? What conclusions will the entity reach?

If we perform this experiment mentally, we will see that the entity can come to understand the reality of the language, and all the stories based on it, provided that it has an axiomatic basis. Otherwise, the entity will experience what in information theory is known as “information without meaning”. This explains the impossibility of deciphering archaic scripts without cross-references to other languages or other forms of expression. In the case of humans, the axiomatic basis is acquired from cognitive experiences external to the language itself.

To clarify what axiomatic processing means, we can use simple examples related to programming languages. Let us analyze the semantics of the “if… then” statement. If we consult the programming manual we can determine its semantics, because our brain has implemented rules, or axioms, with which to decipher the written message. This is equivalent to what happens in the execution of program sentences, where it is the TM that executes those expressions axiomatically. In the case of the brain and of the TM, the axioms are defined in the fields of biochemistry and physics, respectively, and therefore outside the realm of the language.
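
A tiny sketch can make this point concrete: the sentence below is just a character string, and it only acquires semantics when a processor acts on it; here a hypothetical run() interpreter plays the role of the axioms.

    # Minimal interpreter for a made-up "if ... then ..." mini-language.
    # The semantics of the sentence live here, not in the string itself.
    def run(sentence, env):
        cond, action = sentence.removeprefix("if ").split(" then ")
        var, op, value = cond.split()
        if op == ">" and env[var] > float(value):
            target, expr = action.split(" = ")
            env[target] = float(expr)
        return env

    print(run("if x > 0 then y = 1", {"x": 3.0}))   # -> {'x': 3.0, 'y': 1.0}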

This shows once again how reality is structured in functional layers, which can be seen as independent entities by means of axiomatic processing, as has been analyzed in previous posts.

But this issue, as well as the analysis of the existence of linguistic contradictions, will be addressed in later posts.

A classic example of axiomatic processing

In the article “Reality and information: Is information a physical entity?” what we mean by information is analyzed. It is a very general review of the theoretical and practical developments that took place throughout the twentieth century up to the present day and that have led to the current vision of what information is.

The article “Reality and information: What is the nature of information?” goes deeper into this analysis, from a more theoretical perspective based on the theory of computation, information theory (IT) and algorithmic information theory (AIT).

But in this post we will leave aside the mathematical formalism and present some examples that give a more intuitive view of what information is and of its relation to reality, and above all try to show what the axiomatic processing of information means. This should help in understanding the concept of information beyond what is generally understood as a set of bits, something I consider one of the obstacles to establishing a strong link between information and reality.

Nowadays, information and computer technology offer countless examples of how what we observe as reality can be represented by a set of bits. Thus, video, images, audio and written information can be encoded, compressed, stored and reproduced as sets of bits. This is possible because they are all mathematical objects, which can be represented by numbers subject to axiomatic rules and can therefore be represented by a set of bits. However, the number of bits needed to encode an object depends on the coding procedure (the axiomatic rules), and the AIT determines its minimum value, defined as the entropy of the object. The AIT does not, however, provide any criteria for the implementation of the compression process, so implementations are in general based on practical criteria, for example statistical or psychophysical ones.

The AIT establishes a formal definition of the complexity of mathematical objects, called the Kolmogorov complexity K(x). For a finite object x, K(x) is defined as the length of the shortest effective binary description of x; it is an intrinsic property of the object and not a property of the evaluation process. Without going into theoretical details, the AIT determines that only a small fraction of n-bit mathematical objects can be compressed and encoded in m bits, with m < n, which means that most of them have great complexity and can only be represented by themselves.
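
The counting argument behind this claim is short enough to write down; a sketch with illustrative values of n and m:

    # There are 2**n objects of n bits, but only 2**m - 1 binary descriptions of
    # fewer than m bits, so at most that fraction of n-bit objects can be
    # compressed below m bits; the rest are, to that extent, incompressible.
    n, m = 32, 24
    fraction = (2 ** m - 1) / 2 ** n
    print(f"at most {fraction:.3%} of {n}-bit objects compress below {m} bits")
    # -> at most 0.391% of 32-bit objects compress below 24 bits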

The compression and decompression of video, images, audio, etc., are a clear example of axiomatic processing. Imagine a video content x which, by means of a compression process C, has generated a content y = C(x), so that by means of a decompression process D we can retrieve the original content, x = D(y). In this context, both C and D are axiomatic processes, understanding an axiom as a proposition assumed within a theoretical body. This may clash with the idea that an axiom is an obvious proposition, accepted without requiring demonstration. To clarify this point I will develop the idea in another post, using the structure of natural languages as an example.
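
A minimal concrete instance of such a C / D pair, using Python's standard zlib library and a generic byte string as a stand-in for the video content x:

    import zlib

    x = b"ABABABABABAB" * 1000      # a highly regular content compresses well
    y = zlib.compress(x)            # C: compression process, y = C(x)
    assert zlib.decompress(y) == x  # D: decompression process, x = D(y)
    print(len(x), "->", len(y), "bytes")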

In this context, the term axiomatic is totally justified theoretically, since the AIT does not establish any criteria for the implementation of the compression process. And, as already indicated, most mathematical objects are not compressible.

This example reveals an astonishing result of IT, referred to as “information without meaning”: a bit string has no meaning unless a process is applied that interprets the information and transforms it into knowledge. Thus, when we say that x is a video content we are assuming that it responds to a video coding system adapted to the visual perception capabilities of humans.

And here we come to a transcendental conclusion regarding the nexus between information and reality. Historically, the development of IT has created the tendency to establish this nexus by considering information exclusively as a sequence of bits. But the AIT shows us that we must understand information as a broader concept, made up of axiomatic processes together with bit strings. For this, we must define it in a formal way.

Thus, both C and D are mathematical objects that in practice are embodied in a processor together with the programs that encode the compression and decompression functions. If we denote the processor by T() and by c and d the bit strings that encode the compression and decompression algorithms, we can write:

         y=T(<c,x>)

         x=T(<d,y>)

where <,> is the concatenation of bit sequences.

Therefore, the axiomatic processing is determined by the processor T(). If we examine any of the implementations of the universal Turing machine, we will see that the number of axiomatic rules is very small. This may seem surprising if one considers that the above extends to the definition of any mathematical model of reality.
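
A toy rendering of this scheme, in which Python itself plays the role of the processor T() and ready-made functions stand in for the bit strings c and d that encode the algorithms; it is a sketch of the notation, not a real universal machine:

    import zlib

    def T(tape):
        # Toy "processor": splits the tape into <program, input> and applies it.
        program, data = tape
        return program(data)

    c = zlib.compress       # stand-in for the bit string encoding compression
    d = zlib.decompress     # stand-in for the bit string encoding decompression

    x = b"some original content"
    y = T((c, x))           # y = T(<c, x>)
    assert T((d, y)) == x   # x = T(<d, y>)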

Thus, any mathematical model that describes an element of reality can be formalized by means of a Turing machine. The result of the model can be enumerable, or Turing-computable, in which case the Halt state will be reached and the process concludes. Alternatively, the problem can be undecidable, or non-computable, so that the Halt state is never reached and the process runs forever.

For example, let us consider Newtonian mechanics, determined by the laws of dynamics and the attraction exerted by the masses. In this case, the system dynamics will be determined by the recursive process w=T(<x,y,z>), where x is the bit string encoding the rules of calculus, y the bit string encoding the laws of Newtonian mechanics and z the initial conditions of the masses constituting the system.

It is frequent, as a consequence of numerical calculus, to think that these processes are nothing more than numerical simulations of the models. However, in the above example, both x and y can be the analytic expressions of the model and w=T(<x,y,z>) the analytical expression of the solution. Thus, if z specifies that the model is composed of only two massive bodies, w=T(<x,y,z>) will produce an analytical expression of the two ellipses corresponding to the ephemerides of the bodies. However, if z specifies more than two massive bodies, the process will in general not be able to produce any result and will never reach the Halt state. This is because the Newtonian model has no analytical solution for three or more orbiting bodies, except in very particular cases; this is known as the three-body problem.

But we can instead make x and y encode numerical procedures, corresponding respectively to mathematical calculus and to the computational functions of the Newtonian model. In this case, w=T(<x,y,z>) will recursively produce the numerical description of the ephemerides of the massive bodies. However, the process will not reach the Halt state, except in very particular cases in which it may decide that the ephemeris is a closed trajectory.
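
A minimal numerical sketch of w=T(<x,y,z>) in this second sense: the integration rule plays the part of x, Newton's law that of y, and the initial conditions that of z. Units are chosen so that GM = 1, and the values are illustrative only:

    # One small body orbiting a unit mass fixed at the origin (GM = 1).
    def step(pos, vel, dt=1e-3):
        px, py = pos
        r3 = (px * px + py * py) ** 1.5
        ax, ay = -px / r3, -py / r3                  # Newtonian attraction
        vx, vy = vel[0] + ax * dt, vel[1] + ay * dt  # Euler integration step
        return (px + vx * dt, py + vy * dt), (vx, vy)

    pos, vel = (1.0, 0.0), (0.0, 1.0)   # z: initial conditions of a circular orbit
    for _ in range(10_000):             # the process enumerates the ephemeris...
        pos, vel = step(pos, vel)       # ...but has no natural Halt state
    print(pos)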

This behaviour shows that the Newtonian model is non-computable, or undecidable. This extends to all the models of nature established by physics, since they are all non-linear models. If we consider the complexity of the y sequence corresponding to the Newtonian model, whether in its analytical or in its numerical version, it is evident that the complexity K(y) is small. However, the complexity of w=T(<x,y,z>) is, in general, non-computable, which explains why it cannot be expressed analytically. If that were possible it would mean that w is an enumerable expression, which contradicts the fact that it is non-computable.

What is surprising is that from an enumerable expression <x, y, z> we can obtain a non-computable result. But this will be addressed in another post.

What do we mean by reality?

In the article “Reality and information: Is information a physical entity?” we analyze what we mean by reality, taking as a reference the models established by physics, since they have reached a level of formal definition not attained so far in other areas of knowledge.

One of the conclusions of this analysis is that physical models are axiomatic mathematical structures that describe an emergent layer of reality without the need for a connection with the underlying reality. This means that models describe reality at a given functional level, which ties reality closely to observation and justifies the fact that our view of reality is determined by our perception capabilities.

Consequently, reality can be structured into irreducible functional layers, and only when one looks at the edges or boundaries of the models describing the functionality of each emergent layer are there signs of another more complex underlying reality.

In this sense, physics aims to reveal the ultimate foundation of reality, an aim that has materialized in the development of quantum physics and in particular in the standard model of particles, although the questions these raise suggest a more complex reality. However, the structure of layers could have no end and, according to Gödel’s incompleteness theorem, could be an undecidable problem, that is, an unsolvable problem.

All this is very abstract, but with an example we can understand it better. Consider the system of human color perception, based on three types of photoreceptors tuned to the red, green and blue bands. Due to Heisenberg’s uncertainty principle, these photoreceptors also respond to stimuli of nearby frequencies (something we could discuss in detail in the future), as shown in the figure. As a consequence, the photoreceptors do not directly measure the frequency of the color stimulus; instead they translate it into three parameters (L, M, S) corresponding to the excitation level of each type of photoreceptor.

This makes possible the synthesis of color from three components: red, green and blue in the case of additive synthesis, and yellow, cyan and magenta in the case of subtractive synthesis. Thus, if a synthesized image is analyzed by means of spectroscopy, its spectrum has very little to do with that of the original, even though the perceived color is the same. In the case of birds, the rainbow should hypothetically have 9 colors, since they are equipped with a fourth type of photoreceptor sensitive to ultraviolet.

One of the consequences of this measurement system, shaped by natural evolution, is that the rainbow is perceived as composed of seven colors, determined by the three peaks and the four valleys produced by the superposition of the photoreceptor responses. In addition, the system creates the perception of additional, virtual colors, such as magenta and white. Magenta is the result of the simultaneous stimulation of the blue and red bands, at the two ends of the visible spectrum, while white is the result of the simultaneous stimulation of the red, green and blue bands.
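
A toy model of this measurement system can make the point tangible. The peak wavelengths and bandwidth below are rough illustrative values, not physiological data; the only point is that a single physical parameter (the wavelength) is mapped to three excitation levels:

    import math

    PEAKS = {"L": 560e-9, "M": 530e-9, "S": 420e-9}   # rough peak wavelengths (m)
    WIDTH = 40e-9                                     # rough bandwidth (m)

    def lms_response(wavelength):
        # Map a single-wavelength stimulus to three excitation levels (L, M, S).
        return {k: math.exp(-((wavelength - p) / WIDTH) ** 2)
                for k, p in PEAKS.items()}

    print(lms_response(580e-9))   # a "yellow" stimulus seen only through (L, M, S)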

From the physical point of view, this color structure does not exist, since the physical parameter that characterizes a photon is its frequency (or its wavelength λ = c/f). Therefore, it can be concluded that color perception is an emergent structure arising from a more complex one, determined by an axiomatic observational system. But for the moment, the analysis of the term “axiomatic” will be left for later!

This is an example of how reality emerges from more complex underlying structures, so we can say that reality and observation are inseparable terms. And make no mistake! Although the example refers to the perception of color by humans, it materializes in a mathematical model of information processing.

Now the question is: how far can we look into this layered structure? In the case above, physics shows, by means of electromagnetism, that the spectrum is continuous and includes radio waves, microwaves, infrared (heat), visible light, ultraviolet, etc. But electromagnetism is nothing more than an emergent model of a more complex underlying reality, as quantum physics shows us: electromagnetic waves are a manifestation of a flow of quantum particles, photons.

And here a much more complex reality appears, in which a photon seems to follow multiple paths simultaneously, or to have multiple frequencies simultaneously, even infinitely many, until it is observed, at which point its position, energy, trajectory, etc., are determined with a precision established by Heisenberg’s uncertainty principle. And all this is described by an abstract mathematical model contrasted with observation….

The search for the ultimate reasons behind things has led physics to delve, with remarkable success, into the natural processes hidden from our systems of perception. To this end, experiments have been designed and detectors developed that expand our capacity for perception, resulting in models such as the standard model of particles.

The point is that, despite having increased our capacity for perception and, as a result, our knowledge, it seems that we find ourselves in the same situation again. The result is that we have new and much more complex underlying abstract models of reality, described in mathematical language. This is a clear sign that we cannot find an elementary entity that explains the foundation of reality, since these models presuppose the existence of complex entities. Thus, everything seems to indicate that we have entered an endless loop, in which from a greater perception of reality we define a new abstract model, which in turn opens a new horizon of reality and therefore the need to go deeper into it.

As we can see, we keep referring to abstract models in order to describe reality. For this reason, the second part of the article is dedicated to this question. But we will discuss it later!

Welcome to the launch

After years of dealing with information and other subjects such as engineering, physics, and mathematics, I have decided to venture into the analysis of what we mean by reality and its relationship to information. Given the nature of the subject, I hope to stick to reason, trying to avoid any kind of speculation and not let myself be carried away by enthusiasm. Formally said, I hope that the analysis responds to “pure reason” (This is a very cool statement in the Kantian style!).

I think that when we deal with issues involving deep unknowns, we humans have a tendency to divert the analysis in other directions that can fill the void created by the lack of certainty and knowledge. I understand that these variants are the task of another department, so, barring error, they will not be treated in this context.

When writing this post I had already written several articles on the topic, which you can find in the menu bar. What I will do soon is comment on the results obtained. In the future, I hope to continue writing new articles, but because of the complexity of the subject I do not think the frequency can be more than two articles per year.

I thank you in advance for your comments and collaboration!