# Entropy: A relativistic invariant

Since the establishment of the principles of relativity, the problem has arisen of determining the transformations that express thermodynamic parameters such as entropy, temperature, pressure and heat transfer in the relativistic context, in an analogous way to what was achieved for mechanical quantities such as space-time and momentum.

The first efforts in this field were made by Planck [1] and Einstein [2], arriving at the expressions:

S′ = S, T′ = T/γ, p′ = p, γ = (1 − (v/c)²)^(−1/2)

where S, T and p are the entropy, temperature and pressure of the inertial thermodynamic system at rest I, and S′, T′ and p′ are the corresponding quantities observed from the inertial system I′, moving with velocity v.

But in the 1960s this conception of relativistic thermodynamics was revised, and two different points of view were put forward. On the one hand, Ott [3] and Arzeliès [4] proposed that the observed temperature of a body in motion must be T′ = Tγ. Subsequently, Landsberg [5] proposed that the observed temperature must be T′ = T.
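The three competing transformations are easy to compare numerically. The following sketch is illustrative only; the rest temperature and velocity are invented values:

```python
import math

def lorentz_gamma(v, c=299_792_458.0):
    """Lorentz factor γ = (1 - (v/c)²)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def observed_temperature(T, v, model, c=299_792_458.0):
    """Observed temperature T' of a body with rest temperature T,
    seen from a frame moving at velocity v, under each proposal."""
    g = lorentz_gamma(v, c)
    return {"planck": T / g,       # Planck-Einstein: T' = T/γ
            "ott": T * g,          # Ott-Arzeliès:    T' = Tγ
            "landsberg": T}[model] # Landsberg:       T' = T

T, c = 300.0, 299_792_458.0
for m in ("planck", "ott", "landsberg"):
    print(m, round(observed_temperature(T, 0.8 * c, m), 1))
```

At v = 0.8c the three models yield 180 K, 500 K and 300 K respectively for a body at 300 K, which makes the disagreement between them concrete.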

All these cases are based on purely thermodynamic arguments of energy transfer by heat and work, such that ∆E = ∆Q + ∆W. However, van Kampen [6] and later Israel [7] analyzed the problem from a relativistic point of view, such that G = Q + W, where G is the increment of the energy-momentum vector, Q and W are the four-vectors corresponding to the irreversible and reversible parts of the thermodynamic process, and ∆Q is the time component of Q.

Thus, the van Kampen-Israel model can be considered the basis for the development of thermodynamics in a relativistic context, offering the advantage that it does not require the concepts of heat and energy, since the laws of thermodynamics are expressed in terms of the relativistic concept of momentum-energy.

In spite of this, no model provides a theoretical justification that conclusively determines the relation between the temperature of the thermodynamic system at rest and that observed from the system in motion, so the controversy raised by the different models remains unresolved today.

To complicate the situation further, the experimental determination of the observed temperature poses a challenge of enormous difficulty. The problem is that the observer must move within the thermal bath located in the inertial system at rest. To determine the relativistic transformation of temperature experimentally, Landsberg proposed a thought experiment for measuring the observed temperature from the moving reference system. As a result of this proposal, he recognized that the measurement scenario may be unfeasible in practice.

In recent years, algorithms and computational capabilities have made it possible to propose numerical solutions aimed at resolving the controversy over relativistic transformations for a thermodynamic system. The conclusion is that any of the temperature relations proposed by the different models can be true, depending on the thermodynamic assumptions used in the simulation [8][9], so the resolution of the problem remains open.

## The relativistic thermodynamic scenario

In order to highlight the difficulty inherent in measuring the temperature of the thermodynamic body in the inertial system at rest I from the inertial system I′, it is necessary to analyze the measurement scenario.

Thus, as Landsberg and Johns [10] make clear, the temperature transformation must be determined with a thermometer attached to the observer, through a brief interaction with the black body under measurement. To ensure that there is no energy loss, the observer must move within the thermodynamic system under measurement, as shown in the figure below.

This scenario, which may seem bizarre and may not be realizable in practice, clearly shows the essence of relativistic thermodynamics. But it should not be confused with temperature measurement in a cosmological setting, in which the observer does not move inside the object under measurement.

Thus, in the case of measuring the surface temperature of a star, the observer does not move within the thermodynamic context of the star, so the temperature may be determined using Wien’s law, which relates the wavelength of the emission maximum of a black body to its temperature T, such that T = b/λmax, where b is a constant (b ≅ 2.9×10⁻³ m·K).

In this case the measured wavelength λmax must be corrected for several factors (see https://en.wikipedia.org/wiki/Redshift), such as:

• The redshift or blueshift produced by the Doppler effect, a consequence of the relative velocity of the reference systems of the star and the observer.
• The redshift produced by the expansion of the universe and which is a function of the scales of space at the time of emission and observation of the photon.
• The redshift produced by the gravitational effect of the mass of the star.
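As a rough sketch of this procedure, the snippet below estimates a stellar surface temperature from the observed peak wavelength, assuming the corrections above can be summarized in a single effective redshift z; the function name and example values are illustrative:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m·K (approximate)

def surface_temperature(lambda_obs_m, z=0.0):
    """Estimate T from the observed peak wavelength.

    The emitted wavelength is recovered from the total redshift z
    (Doppler + cosmological + gravitational combined), assuming
    λ_emit = λ_obs / (1 + z), and then Wien's law gives T = b / λ_emit.
    """
    lambda_emit = lambda_obs_m / (1.0 + z)
    return WIEN_B / lambda_emit

# The Sun's spectrum peaks near 502 nm, with z ≈ 0 for a nearby star:
print(round(surface_temperature(502e-9)), "K")
```

For the solar example this yields roughly 5800 K, consistent with the Sun's effective surface temperature.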

As an example, the following figure shows the concept of the redshift produced by the expansion of the universe.

## Entropy is a relativistic invariant

Although the problem concerning the determination of the observed temperature in relativistic systems I and I′ remains open, it follows from the models proposed by Planck, Ott and Landsberg that entropy is a relativistic invariant.

This conclusion follows from the fact that the number of microstates in the two systems is identical, so that according to the expression of the entropy S = k ln(Ω), where k is the Boltzmann constant and Ω is the number of microstates, it follows that S = S’, since Ω = Ω’.
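The argument can be made concrete with the Boltzmann formula; the microstate count below is an arbitrary illustrative value:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def boltzmann_entropy(omega):
    """S = k ln(Ω): entropy from the number of microstates Ω."""
    return K_B * math.log(omega)

# Counting microstates is frame-independent, so Ω' = Ω and hence S' = S,
# even though the observers may disagree (by a factor of γ) about temperature.
omega_rest = 10**23          # illustrative microstate count
omega_moving = omega_rest    # Ω' = Ω
assert boltzmann_entropy(omega_moving) == boltzmann_entropy(omega_rest)
print(boltzmann_entropy(omega_rest), "J/K")
```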

The invariance of entropy in the relativistic context is of great significance, since it means that the amount of information needed to describe any scenario of reality that emerges from quantum reality is an invariant, independently of the observer.

In the post ‘An interpretation of the collapse of the wave function’ it was concluded that a quantum system is reversible and therefore its entropy is constant and, consequently, the amount of information for its description is an invariant. This post also highlights the entropy increase of classical systems, which is deduced from ‘Pauli’s Master Equation’ [11], such that dS/dt > 0. This means that the information needed to describe the system grows systematically.

The conclusion drawn from the analysis of relativistic thermodynamics is that the entropy of a classical system is the same regardless of the observer and, therefore, the information needed to describe the system is also independent of the observer.

Obviously, the entropy increment of a classical system and how the information increment of the system emerges from quantum reality remains a mystery. However, the fact that the amount of information needed to describe a system is an invariant independent of the observer suggests that information is a fundamental physical entity at this level of reality.

On the other hand, the description of a system at any level of reality requires information in a magnitude that according to Algorithmic Information Theory is the entropy of the system. Therefore, reality and information are two entities intimately united from a logical point of view.

In short, from both the physical and the logical point of view, information is a fundamental entity. However, the axiomatic structure that configures the functionality from which the natural laws emerge, which determines how information is processed, remains a mystery.

# The predictive brain

While significant progress has been made in neuroscience, in particular on the neural circuits that support perception and motor activity, the understanding of neural structures, how they encode information, and how learning mechanisms are established is still under investigation.

Digital audio and image processing techniques and advances in artificial intelligence (AI) are a source of inspiration for understanding these mechanisms. However, it seems clear that these ideas are not directly applicable to brain functionality.

Thus, for example, the processing of an image is static, since digital sensors provide complete images of the scene. In contrast, the information encoded by the retina is not homogeneous, with large differences in resolution between the fovea and the surrounding areas, so that the image composition is necessarily spatially segmented.

But these differences are much more pronounced if we consider that this information is dynamic in time. In digital video processing, it is possible to establish a correlation between the images that make up a sequence, a correlation that in the case of the visual system is much more complex, due to the spatial segmentation of the images and the way this information is obtained through the saccadic movements of the eyes.

The information generated by the retina is processed by the primary visual cortex (V1) which has a well-defined map of spatial information and also performs simple feature recognition functions. This information progresses to the secondary visual cortex (V2) which is responsible for composing the spatial information generated by saccadic eye movement.

This structure has been the dominant theoretical framework, in what has been termed the hierarchical feedforward model [1]. However, certain neurons in the V1 and V2 regions have been found to have a surprising response. They seem to know what is going to happen in the immediate future, activating as if they could perceive new visual information before it has been produced by the retina [2]. This behavior is defined as Predictive Processing (PP) [3], which is gaining influence in cognitive neuroscience, although it is criticized for lacking empirical support.

For this reason, the aim of this post is to analyze this behavior from the point of view of signal processing techniques and control systems, which show that the nervous system would not be able to interact with the surrounding reality unless it performs PP functions.

## A brief review of control systems

The design of a control system is based on a mature technique [4], although the advances in digital signal processing produced in the last decades allow the implementation of highly sophisticated systems. We will not go into details about these techniques and will only focus on the aspects necessary to justify the possible PP performed by the brain.

Thus, a closed-loop control system is composed of three fundamental blocks:

• Feedback: This block determines the state of the target under control.
• Control: Determines the actions to be taken based on the reference and the information on the state of the target.
• Process: Translates the actions determined by the control to the physical world of the target.
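The three blocks above can be sketched as a minimal closed loop; the proportional control law and first-order process below are invented for illustration:

```python
def control_loop(reference, position, kp=0.5, dt=0.1, steps=100):
    """Minimal closed-loop sketch: feedback reads the state, control
    computes an action from the error, and the process applies that
    action to the physical world of the target."""
    for _ in range(steps):
        error = reference - position   # feedback: state vs reference
        command = kp * error           # control: action from the error
        position += command * dt       # process: act on the target
    return position

# Starting from 0, the loop drives the position toward the reference:
print(control_loop(reference=1.0, position=0.0))
```

With these invented gains the position converges smoothly toward the reference, which is the behavior expected of a well-tuned loop.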

The functionality of a control system is illustrated by the example shown in the figure. In this case the reference is the position of the ball and the target is for the robot to hit the ball accurately.

The robot sensors must determine in real-time the relative position of the ball and all the parameters that define the robot structure (feedback). From these, the control must determine the robot motion parameters necessary to reach the target, generating the control commands that activate the robot’s servomechanisms.

The theoretical analysis of this functional structure allows determining the stability of the system, which establishes its capacity to correctly develop the functionality for which it has been designed. This analysis shows that the system can exhibit two extreme cases of behavior. To simplify the reasoning, we will eliminate the ball and assume that the objective is to reach a certain position.

In the first case, we will assume that the robot can perform fast movements without limitation, but that the measurement mechanisms that determine the robot’s position require a certain processing time Δt. As a consequence, the decisions of the control block are not made in real time, since the decisions at t = ti actually correspond to t = ti − Δt, where Δt is the time required to process the information coming from the sensing mechanisms. Therefore, when the robot approaches the reference point, the control will make decisions as if it were still some distance away, which will cause the robot to overshoot the target position. When this happens, the control must correct the motion by reversing the robot’s trajectory. This behavior is defined as an underdamped regime.

Conversely, if we assume that the measurement system has a fast response time, such that Δt ≈ 0, but that the robot’s motion capability is limited, then the control will make decisions in real time, but the approach to the target will be slow until the target is accurately reached. Such behavior is defined as an overdamped regime.

At the boundary of these two behaviors is the critically damped regime that optimizes the speed and accuracy to reach the target. The behavior of these regimes is shown in the figure.

Formally, the above analysis corresponds to systems in which the functional blocks are linear. The development of digital processing techniques allows the implementation of functional blocks with a nonlinear response, resulting in much more efficient control systems in terms of response speed and accuracy. In addition, they allow the implementation of predictive processing techniques using the laws of mechanics. Thus, if the reference is a passive entity, its trajectory is known from the initial conditions. If it is an active entity, i.e. it has internal mechanisms that can modify its dynamics, heuristic functions, and AI can be used  [5].
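For a passive reference, prediction reduces to applying the laws of mechanics to the initial conditions. A minimal ballistic sketch, with all values invented:

```python
def predict_position(p0, v0, t, g=9.81):
    """Ballistic prediction from initial conditions: a passive target
    follows the laws of mechanics, so its future position is known
    from its current position p0 and velocity v0."""
    x0, y0 = p0
    vx, vy = v0
    return (x0 + vx * t, y0 + vy * t - 0.5 * g * t**2)

# Where will the ball be 0.5 s from now?
x, y = predict_position(p0=(0.0, 1.0), v0=(3.0, 2.0), t=0.5)
print(round(x, 3), round(y, 3))
```

An active target, as the text notes, requires heuristic or learned models instead, since its internal dynamics can change the trajectory.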

## The brain as a control system

As the figure below shows, the ensemble formed by the brain, the motor organs, and the sensory organs comprises a control system. Consequently, this system can be analyzed with the techniques of feedback control systems.

For this purpose, it is necessary to analyze the response times of each of the functional blocks. In this regard, it should be noted that the nervous system has a relatively slow temporal behavior [6]. Thus, for example, the response time to initiate movement in a 100-meter sprint is 120-165 ms. This time is distributed in recognizing the start signal, the processing time of the brain to interpret this signal and generate the control commands to the motor organs, and the start-up of these organs. In the case of eye movements toward a new target, the response time is 50-200 ms. These times give an idea of the processing speed of the different organs involved in the different scenarios of interaction with reality.

Now, let’s assume several scenarios of interaction with the environment:

• A soccer player intending to hit a ball moving at 10 km/h. In 0.1 s the ball will have moved about 28 cm.
• A tennis player who must hit a ball moving at 50 km/h. In 0.1 s the ball will have moved about 139 cm.
• Gripping a motionless cup while moving the hand at 0.5 m/s. In 0.1 s the hand will have moved 5 cm.
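The arithmetic behind these estimates is simply displacement = speed × latency, assuming a processing latency of about 0.1 s:

```python
def displacement_cm(speed_kmh, latency_s=0.1):
    """How far the target moves during the brain's processing latency."""
    speed_ms = speed_kmh / 3.6           # km/h → m/s
    return speed_ms * latency_s * 100.0  # metres → centimetres

for label, v in [("soccer ball, 10 km/h", 10), ("tennis ball, 50 km/h", 50)]:
    print(f"{label}: ~{displacement_cm(v):.0f} cm in 0.1 s")
# A hand moving at 0.5 m/s covers 0.5 m/s × 0.1 s = 5 cm in the same window.
```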

These examples show that if the brain is considered as a classical control system, it is practically impossible to obtain the precision needed to justify the observed behavior. Thus, in the case of the soccer player, the information obtained by the brain from the sensory organs, in this case sight, will be delayed in time, providing a relative position of the foot with respect to the ball with an error of the order of centimeters, so the ball strike will be very inaccurate.

The same reasoning can be made in the case of the other two proposed scenarios, so it is necessary to investigate the mechanisms used by the brain to obtain an accuracy that justifies its actual behavior, much more accurate than that provided by a control system based on the temporal response of neurons and nerve tissue.

To this end, let’s assume the case of grasping the cup, and let’s do a simple exercise of introspection. If we close our eyes for a moment we can observe that we have a precise knowledge of the environment. This knowledge is updated as we interact with the environment and the hand approaches the cup. This spatiotemporal behavior allows predicting with the necessary precision what will be the position of the hand and the cup at any moment, despite the delay produced by the nervous system.

To this must be added the knowledge acquired by the brain about space-time reality and the laws of mechanics. In this way, the brain can predict the most probable trajectory of the ball in the tennis player’s scenario. This is evident in the importance of training in sports activities since this knowledge must be refreshed frequently to provide the necessary accuracy. Without the above prediction mechanisms, the tennis player would not be able to hit the ball.

Consequently, from the analysis of the behavior of the system formed by the sensory organs, the brain, and the motor organs, it follows that the brain must perform PP functions. Otherwise, and as a consequence of the response time of the nervous tissue, the system would not be able to interact with the environment with the precision and speed shown in practice. In fact, to compensate for the delay introduced by the sensory organs and their subsequent interpretation by the brain, the brain must predict and advance the commands to the motor organs in a time interval that can be estimated at several tens of milliseconds.
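The compensation described can be sketched as a forward extrapolation of the perceived state, assuming the brain maintains an estimate of the target's velocity; all numbers are invented:

```python
def compensate_latency(perceived_pos, perceived_vel, latency_s):
    """The perceived state is already `latency_s` old, so motor commands
    must target the state extrapolated forward by that delay."""
    return perceived_pos + perceived_vel * latency_s

# The ball was seen at 2.0 m moving at 13.9 m/s; if perception plus
# processing took ~80 ms, the ball is actually near:
print(compensate_latency(2.0, 13.9, 0.08), "m")  # ≈ 3.11 m
```

Aiming at the perceived position would miss by over a metre in this invented example; aiming at the extrapolated position is what PP makes possible.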

## The neurological foundations of prediction

As justified in the previous section, from the temporal response of the nervous tissue and the behavior of the system formed by the sensory organs, the brain, and the motor organs, it follows that the brain must support two fundamental functions: encoding and processing reference frames of the surrounding reality and performing Predictive Processing.

But what evidence is there for this behavior? It has been known for several decades that there are neurons in the entorhinal cortex and hippocampus that respond to a spatial model, called grid cells [7]. But recently it has been shown that in the neocortex there are structures capable of representing reference frames and that these structures can render both a spatial map and any other functional structure needed to represent concepts, language, and structured reasoning [8].

Therefore, the question to be resolved is how the nervous system performs PP. As already noted, PP is a disputed functionality because of its limited empirical support. The problem it poses is that the number of neurons exhibiting predictive behavior is very small compared to the number of neurons activated as a consequence of a stimulus.

The answer to this problem may lie in the model proposed by Jeff Hawkins and Subutai Ahmad [9] based on the functionality of pyramidal neurons [10], whose function is related to motor control and cognition, areas in which PP should be fundamental.

The figure below shows the structure of a pyramidal neuron, which is the most common type of neuron in the neocortex. The synapses close to the cell body are called proximal synapses, and the neuron is activated if they receive sufficient excitation. The nerve impulse generated by the activation of the neuron propagates to other neurons through the axon, which is represented by an arrow.

This description corresponds to a classical view of the neuron, but pyramidal neurons have a much more complex structure. The dendrites radiating from the central zone are endowed with hundreds or thousands of synapses, called distal synapses; approximately 90% of the synapses are located on these dendrites. Also, the upper part of the figure shows dendrites with a longer reach, which have feedback functionality.

The remarkable thing about this type of neuron is that if a group of synapses of a distal dendrite close to each other receives a signal at the same time, a new type of nerve impulse is produced that propagates along the dendrite until it reaches the body of the cell. This causes an increase in the voltage of the cell, but without producing its activation, so it does not generate a nerve impulse towards the axon. The neuron remains in this state for a short period, returning to its relaxed state.

The question is: what is the purpose of these dendritic impulses if they are not powerful enough to produce cell activation? This has been an open question that the model proposed by Hawkins and Ahmad [9] aims to answer, proposing that the nerve impulses in the distal dendrites are predictions.

This means that a dendritic impulse is produced when a set of synapses close to each other on a distal dendrite receive inputs at the same time, indicating that the neuron has recognized a pattern of activity in a set of presynaptic neurons. When the pattern is detected, the dendritic impulse raises the voltage in the cell body, putting the cell into what we call a predictive state.

The neuron is then ready to fire. If a neuron in the predictive state subsequently receives sufficient proximal input to create an action potential to fire it, then the neuron fires slightly earlier than it would if the neuron were not in the predictive state.

Thus, the prediction mechanism is based on the idea that multiple neurons in a minicolumn [11] participate in the prediction of a pattern, all of them entering a predictive state, such that when one of them fires it inhibits the firing of the rest. This means that in a minicolumn hundreds or thousands of predictions are made simultaneously over a given control scenario, such that one of the predictions prevails over the rest, optimizing the accuracy of the process. This explains the small number of predictive events observed relative to overall neuronal activity, and also why unexpected events or patterns produce greater activity than predictable or expected ones.

If the neural structure of the minicolumns is taken into account, it is easy to understand how this mechanism involves a large number of predictions for the processing of a single pattern, and it can be said that the brain is continuously making predictions about the environment, which allows real-time interaction.
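A toy sketch of the mechanism described above; the `Neuron` class, the set-based context patterns, and the burst rule are inventions for illustration, not a biological model:

```python
class Neuron:
    def __init__(self, learned_context):
        self.learned_context = learned_context  # distal synapse pattern
        self.predictive = False

    def integrate_distal(self, active_context):
        # A dendritic impulse depolarizes the cell without firing it:
        # the neuron enters a predictive state if its learned pattern
        # is contained in the currently active context.
        self.predictive = self.learned_context <= active_context

def minicolumn_response(neurons, active_context):
    """Feed the same proximal input to every neuron in the minicolumn;
    predictive neurons fire first and inhibit the non-predictive ones."""
    for n in neurons:
        n.integrate_distal(active_context)
    predicted = [i for i, n in enumerate(neurons) if n.predictive]
    # If some neuron predicted the input, only it fires (sparse output);
    # an unexpected input makes the whole minicolumn fire (dense output).
    return predicted if predicted else list(range(len(neurons)))

column = [Neuron({"A", "B"}), Neuron({"C"}), Neuron({"D", "E"})]
print(minicolumn_response(column, {"A", "B", "X"}))  # expected → sparse: [0]
print(minicolumn_response(column, {"Z"}))            # surprise → [0, 1, 2]
```

The sparse-versus-dense output of this sketch mirrors the observation that unexpected patterns produce greater activity than predicted ones.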

## The PP from the point of view of AI

According to the above analysis, it can be concluded that the PP performed by the brain within a time window, of the order of tens of milliseconds, is fundamental for the interaction with the surrounding reality, synchronizing this reality with the perceived reality. But this ability to anticipate perceived events requires other mechanisms such as the need to establish reference frames as well as the ability to recognize patterns.

In the scenario raised, the need for reference frames in which objects can be represented is evident, such as the dynamic position of the motor organs and of the objects with which to interact. In addition to this, the brain must be able to recognize such objects.

But these capabilities are common to all types of scenarios, although it is perhaps more appropriate to use the term model as an alternative to a reference frame since it is a more general concept. Thus, for example, in verbal communication, it is necessary to have a model that represents the structure of language, as well as an ability to recognize the patterns encoded in the stimuli perceived through the auditory system. In this case, the PP must play a fundamental role, since prediction allows for greater fluency in verbal communication, as is evident when there are delays in a communication channel. This is perhaps most evident in the synchronism necessary in musical coordination.

The enormous complexity of the nervous tissue and the difficulty to empirically identify these mechanisms can be an obstacle to understanding their behavior. For this reason, AI is a source of inspiration [12] since, using different neural network architectures, it shows how models of reality can be established and predictions can be made about this reality.

It should be noted that these models do not claim to provide realistic biological models. Nevertheless, they are central mathematical models in the machine learning and artificial intelligence paradigm and a fundamental tool in neurological research. In this sense, it is important to highlight that PP is not only a necessary functionality for the temporal prediction of events: as artificial neural networks show, pattern recognition is intrinsically a predictive function.

This may go unnoticed in the case of the brain since pattern recognition achieves such accuracy that it makes the concept of prediction very diluted and appears to be free of probabilistic factors. In contrast, in the case of AI, mathematical models make it clear that pattern recognition is probabilistic in nature and practical results show a diversity of outcomes.
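The probabilistic character of recognition is explicit in artificial networks, where the output layer assigns a probability to each class. A minimal softmax sketch, with invented class scores:

```python
import math

def softmax(scores):
    """Turn raw class scores into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]  # stable shift
    total = sum(exps)
    return [e / total for e in exps]

# Recognition is a prediction: the network does not "see" a cat, it
# assigns the highest probability to the class "cat" (scores invented).
probs = softmax([2.0, 0.5, 0.1])  # scores for cat, dog, bird
print([round(p, 3) for p in probs])
print("predicted class:", probs.index(max(probs)))
```

Even the most confident recognition here is a probability below 1, which is exactly the probabilistic residue that high-accuracy brain recognition makes easy to overlook.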

This diversity depends on several factors. Perhaps the most important is its state of development, which can still be considered very primitive, compared to the structural complexity, processing capacity, and energy efficiency of the brain. This means that AI applications are oriented to specific cases where it has shown its effectiveness, such as in health sciences [13] or in the determination of protein structures [14].

But without going into a deeper analysis of these factors, what can be concluded is that the functionality of the brain is based on the establishment of models of reality and the prediction of patterns, one of its functions being temporal prediction, which is the foundation of PP.

# Consciousness from the point of view of AI

The self-awareness of human beings, which constitutes the concept of consciousness, has been and continues to be an enigma faced by philosophers, anthropologists and neuroscientists. But perhaps most suggestive is the fact that consciousness is a central concept in human behavior and that, despite being aware of it, we find no explanation for it.

Without going into details, until the modern age the concept of consciousness had deep roots in the concept of soul and in religious beliefs, often attributing the differentiation of human nature from other species to divine intervention.

The modern age saw a substantial change, based first on Descartes’ concept “cogito ergo sum” (“I think, therefore I am”) and later on the model proposed by Kant, structured around what are known as “transcendental arguments” [1].

Subsequently, a variety of schools of thought have developed, among which dualistic, monistic, materialistic and neurocognitive theories stand out. In general terms, these theories focus on the psychological and phenomenological aspects that describe conscious reality. In the case of neurocognitive theories, neurological evidence is a fundamental pillar. But ultimately, all these theories are abstract in nature and, for the time being, have failed to provide a formal justification of consciousness and how a “being” can develop conscious behavior, as well as concepts such as morality or ethics.

One aspect that these models deal with, and that calls the concept of the “cogito” into question, is the change of behavior produced by brain damage, which in some cases can be re-educated; this shows that the brain and the learning processes play a fundamental role in consciousness.

In this regard, advances in Artificial Intelligence (AI) [2] highlight the formal foundations of learning, by which an algorithm can acquire knowledge and in which neural networks are now a fundamental component. For this reason, the use of this new knowledge can shed light on the nature of consciousness.

To analyze what may be the mechanisms that support consciousness we can start with the Turing Test [3], in which a machine is tested to see if it shows a behavior similar to that of a human being.

Without going into the definition of the Turing Test, we can assimilate this concept to that of a chatbot, as shown in Figure 1, which can give us an intuitive idea of this concept. But we can go even further if we consider its implementation. This requires the availability of a huge amount of dialogues between humans, which allows us to train the model using Deep Learning techniques [4]. And although it may seem strange, the availability of dialogues is the most laborious part of the process.

Figure 1. Schematic of the Turing Test

Once the chatbot has been trained, we can ask about its behavior from a psychophysical point of view. The answer seems quite obvious: although it can show very complex behavior, this will always be reflex behavior, even though the interlocutor may deduce that the chatbot has feelings and even intelligent behavior. The latter is a controversial issue because of the difficulty of defining what constitutes intelligent behavior, which is highlighted by the questions: Intelligent? Compared to what?

But the Turing Test only aims to determine the ability of a machine to show human-like behavior, without going into the analysis of the mechanisms to establish this functionality.

In the case of humans, these mechanisms can be classified into two sections: genetic learning and neural learning.

### Genetic learning

Genetic learning is based on the learning capacity of biology to establish functions adapted to the processing of the surrounding reality. Expressed in this way it does not seem an obvious or convincing argument, but DNA computing [5] is a formal demonstration of the capability of biological learning. The evolution of capabilities acquired through this process is based on trial and error, which is inherent to learning. Thus, biological evolution is a slow process, as nature shows.

Instinctive reactions are based on genetic learning, so that all species of living beings are endowed with certain faculties without the need for significant subsequent training. Examples are the survival instinct, the reproductive instinct, and the maternal and paternal instinct. These functions are located in the inner layers of the brain, which humans share with vertebrates.

We will not go into details related to neuroscience [6], since the only thing that interests us in this analysis is to highlight two fundamental aspects: the functional specialization and plasticity of each of its neural structures. Thus, structure, plasticity and specialization are determined by genetic factors, so that the inner layers, such as the limbic system, have a very specialized functionality and require little training to be functional. In contrast, the external structures, located in the neocortex, are very plastic and their functionality is strongly influenced by learning and experience.

Thus, genetic learning is responsible for structure, plasticity and specialization, whereas neural learning is intimately linked to the plastic functionality of neural tissue.

A clear example of functional specialization based on genetic learning is the space-time processing that we share with the rest of higher living beings and that is located in the limbic system. This endows the brain with structures dedicated to the establishment of a spatial map and the processing of temporal delay, which provides the ability to establish trajectories in advance, vital for survival and for interacting with spatio-temporal reality.

This functionality has a high degree of automaticity, which makes its functional capacity effective from the moment of birth. However, this is not exactly the case in humans, since these neural systems function in coordination with the neocortex, which requires a high degree of neural training.

Thus, for example, this functional specialization precludes visualizing and intuitively understanding geometries of more than three spatial dimensions, something that humans can only deal with abstractly at a higher level by means of the neocortex, which has a plastic functionality and is the main support for neural learning.

It is interesting to consider that the functionality of the neocortex, whose response time is longer than that of the lower layers, can interfere in the reaction of automatic functions. This is clearly evident in the loss of concentration in activities that require a high degree of automatism, as occurs in certain sports activities. This means that in addition to having an appropriate physical capacity and a well-developed and trained automatic processing capacity, elite athletes require specific psychological preparation.

This applies to all sensory systems, such as vision, hearing and balance, in which genetic learning determines and conditions the interpretation of information coming from the sensory organs. But as this information ascends to the higher layers of the brain, the processing and interpretation of the information is determined by neural learning.

This is what differentiates humans from the rest of the species: humans are endowed with a highly developed neocortex, which provides a very significant neural learning capacity, from which the conscious being seems to emerge.

Nevertheless, there is solid evidence of the ability to feel and to have a certain level of consciousness in some species. This is what has triggered a movement for legal recognition of feelings in certain species of animals, and even recognition of personal status for some species of hominids.

### Neural learning: AI as a source of intuition

Currently, AI is made up of a set of mathematical strategies that are grouped under different names depending on their characteristics. Thus, Machine Learning (ML) comprises classical mathematical algorithms, such as statistical methods, decision trees, clustering, support vector machines, etc. Deep Learning, on the other hand, is inspired by the functioning of neural tissue, and exhibits complex behavior that approximates certain capabilities of humans.

In the current state of development of this discipline, designs are reduced to the implementation and training of specific tasks, such as automatic diagnostic systems, assistants, chatbots, games, etc., so these systems are grouped in what is called Artificial Narrow Intelligence.

The perspective offered by this new knowledge makes it possible to establish three major categories within AI:

• Artificial Narrow Intelligence. AI systems specialized in specific tasks.
• Artificial General Intelligence. AI systems with a capacity similar to that of human beings.
• Artificial Super Intelligence. Self-aware AI systems with a capacity equal to or greater than that of human beings.

The implementation of neural networks used in Deep Learning is inspired by the functionality of neurons and neural tissue, as shown in Figure 2 [7]. As a consequence, the nerve stimuli coming from the axon terminals that connect to the dendrites (synapses) are weighted and processed according to the functional configuration that the neuron has acquired through learning, producing a nerve stimulus that propagates to other neurons through its own axon terminals.

Figure 2. Structure of a neuron and mathematical model

Artificial neural networks are structured by creating layers of the mathematical neuron model, as shown in Figure 3. A fundamental issue in this model is to determine the mechanisms needed to establish the weighting parameters Wi in each of the units that form the neural network. The brain's own neural mechanisms could, in principle, serve as a guide for this purpose. However, although there is a very general idea of how the functionality of the synapses is configured, how functionality is established at the neural-network level is still a mystery.

Figure 3. Artificial Neural Network Architecture

In the case of artificial neural networks, mathematics has found a solution that makes it possible to establish the Wi values, by means of what is known as supervised learning. This requires having a dataset in which each of its elements represents a stimulus Xi and the response to this stimulus Yi. Thus, once the Wi values have been randomly initialized, the training phase proceeds, presenting each of the Xi stimuli and comparing the response with the Yi values. The errors produced are propagated backwards by means of an algorithm known as backpropagation.

Through the sequential application of the elements of a training set belonging to the dataset in several sessions, a state of convergence is reached, in which the neural network achieves an appropriate degree of accuracy, verified by means of a validation set of elements belonging to the dataset that are not used for training.
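As a minimal illustration of this training loop, the following pure-Python sketch trains a single sigmoid neuron (the one-unit case of backpropagation) on the logical AND function; the learning rate, epoch count and the AND task are illustrative choices, not taken from the text:

```python
import math
import random

random.seed(0)

# Dataset: stimuli Xi and expected responses Yi (logical AND)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# Randomly initialized weights Wi and bias, as in supervised learning
w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
b = 0.0
lr = 0.5  # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training phase: present each stimulus, compare the response with Yi,
# and propagate the error backwards to adjust the weights
for epoch in range(10000):
    for x, y in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        grad = (out - y) * out * (1 - out)  # chain rule through the sigmoid
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

# Validation: the trained neuron reproduces the AND table
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
```

After convergence, `preds` equals `[0, 0, 0, 1]`; real Deep Learning systems apply the same error-backpropagation idea across many layers and millions of weights.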

An example makes the nature of the elements of a dataset much more intuitive. Thus, in a dataset used in the training of autonomous driving systems, the Xi correspond to images in which patterns of different types of vehicles, pedestrians, public roads, etc. appear. Each of these images has an associated category Yi, which specifies the patterns that appear in that image. It should be noted that in the current state of development of AI systems, the dataset is made by humans, so learning is supervised and requires significant resources.

In unsupervised learning the category Yi is generated automatically, although this approach is still at a very early stage of development. A very illustrative example is the AlphaZero program developed by DeepMind [8]: learning is performed by providing the system only with the rules of the game (chess, Go, shogi) and having it play matches against itself, so that the moves and the final result form the pairs (Xi, Yi). The neural network is continuously updated with these results, sequentially improving its behavior and therefore the new results (Xi, Yi), reaching a superhuman level of play.

It is important to note that in the case of higher living beings, unsupervised learning takes place through the interaction of the afferent (sensory) neuronal system and the efferent (motor) neuronal system. Although from a functional point of view there are no substantial differences, this interaction takes place at two levels, as shown in Figure 4:

• Interaction with the inanimate environment.
• Interaction with other living beings, especially of the same species.

The first level of interaction provides knowledge about physical reality. The second, in contrast, allows the establishment of survival habits and, above all, social habits. In the case of humans, this level acquires great importance and complexity, since from it emerge concepts such as morality and ethics, as well as the capacity to accumulate and transmit knowledge from generation to generation.

Figure 4. Structure of unsupervised learning

Consequently, unsupervised learning is based on the recursion of afferent and efferent systems. This means that unlike the models used in Deep Learning, which are unidirectional, unsupervised AI systems require the implementation of two independent systems. An afferent system that produces a response from a stimulus and an efferent system that, based on the response, corrects the behavior of the afferent system by means of a reinforcement technique.
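The afferent-efferent recursion described above can be caricatured with a tiny reinforcement loop; everything here (three possible responses, the payoff values, the exploration rate) is a hypothetical illustration of the scheme, not a model from the text:

```python
import random

random.seed(1)

payoff = [0.1, 0.9, 0.3]   # hypothetical reward of each possible response
weights = [0.0, 0.0, 0.0]  # afferent system: current preference per response

for step in range(2000):
    # Afferent phase: produce a response (with occasional exploration)
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda i: weights[i])
    # Efferent phase: the observed outcome corrects the afferent system
    reward = payoff[action]
    weights[action] += 0.05 * (reward - weights[action])

best = max(range(3), key=lambda i: weights[i])  # converges to the best response
```

After enough interactions, `best` settles on the highest-payoff response (index 1), mirroring how efferent feedback gradually reshapes afferent behavior.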

### What is the foundation of consciousness?

Two fundamental aspects can be deduced from the development of AI:

• The learning capability of algorithms.
• The need for afferent and efferent structures to support unsupervised learning.

On the other hand, it is known that traumatic processes in the brain or pathologies associated with aging can produce changes in personality and conscious perception.  This clearly indicates that these functions are located in the brain and supported by neural tissue.

But it is necessary to rely on anthropology to get a more precise idea of what the foundations of consciousness are and how it has developed in human beings. Thus, a direct correlation can be observed between the cranial capacity of a hominid species and its abilities, social organization, spirituality and, above all, the abstract perception of the surrounding world. This correlation is clearly determined by the size of the neocortex and can be observed to a lesser extent in other species, such as primates, which show a capacity for emotional pain, a structured social organization and a certain degree of abstract learning.

According to all of the above, it could be concluded that consciousness emerges from the learning capacity of the neural tissue and would be achieved as the structural complexity and functional resources of the brain acquire an appropriate level of development. But this leads directly to the scenario proposed by the Turing Test, in such a way that we would obtain a system with a complex behavior indistinguishable from a human, which does not provide any proof of the existence of consciousness.

To understand this, we can ask how a human comes to the conclusion that all other humans are self-aware. In reality, there is no argument to reach this conclusion, since at most one could check that they pass the Turing Test. A human concludes that other humans have consciousness by resemblance to himself: through introspection, a human knows that he is self-aware, and since the rest of humans are similar to him, he concludes that they are self-aware too.

Ultimately, the only answer that can be given to what is the basis of consciousness is the introspection mechanism of the brain itself. In the unsupervised learning scheme, the afferent and efferent mechanisms that allow the brain to interact with the outside world through the sensory and motor organs have been highlighted. However, to this model we must add another flow of information, as shown in Figure 5, which enhances learning and corresponds to the interconnection of neuronal structures of the brain that recursively establish the mechanisms of reasoning, imagination and, why not, consciousness.

Figure 5. Mechanism of reasoning and imagination.

This statement may seem radical, but if we meditate on it we will see that the only difference between imagination and consciousness is that the capacity of humans to identify themselves raises existential questions that are difficult to answer, but which from the point of view of information processing require the same resources as reasoning or imagination.

But how can this hypothesis be verified? One possible solution would be to build a system based on learning technologies that would confirm the hypothesis, but would this confirmation be accepted as true, or would it simply be decided that the system verifies the Turing Test?

# Perception of complexity

In previous posts, the nature of reality and its complexity has been approached from the point of view of Information Theory. However, it is interesting to make this analysis from the point of view of human perception and thus obtain a more intuitive view.

Obviously, making an exhaustive analysis of reality from this perspective is complex due to the diversity of the organs of perception and the physiological and neurological aspects built on top of them. In this sense, we could explain how the perceived information is processed by each of the organs of perception, especially the auditory and visual systems, as these are the most culturally relevant. Thus, in the post dedicated to color perception it has been described how the physical parameters of light are encoded by the photoreceptor cells of the retina.

However, in this post the approach will consist of analyzing in an abstract way how knowledge influences the interpretation of information, in such a way that previous experience can lead the analysis in a certain direction. This behavior establishes a priori assumptions or conditions that limit the analysis of the information in its full extension and that, as a consequence, prevent obtaining certain answers or solutions. Overcoming these obstacles, despite the conditioning imposed by previous experience, is what is known as lateral thinking.

To begin with, let’s consider the case of mathematical series puzzles in which a sequence of numbers, characters or graphics is presented, asking how the sequence continues. For example, given the sequence “IIIIIIIVVV”, we are asked to determine what the next character is. If Roman culture had not developed, it could be said that the next character is “V”, or even that the sequence is mere scribbling. But this is not the case, so the brain sets to work, determining that the characters may be Roman numerals and that the sequence is that of the numbers “1,2,3,…”. Consequently, the next character must be “I”.
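The puzzle’s reasoning can be verified mechanically. A short sketch, under the assumption that the sequence is the concatenation of the Roman numerals for 1, 2, 3, …:

```python
# Convert a positive integer to a Roman numeral (standard subtractive notation)
def roman(n):
    table = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = ""
    for value, symbol in table:
        while n >= value:
            out += symbol
            n -= value
    return out

# Concatenate the numerals for 1..6 into a single character stream
stream = "".join(roman(n) for n in range(1, 7))  # "IIIIIIIVVVI"
```

The first ten characters reproduce the puzzle string "IIIIIIIVVV", and the next character is indeed "I" (the start of "VI").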

In this way, it can be seen how acquired knowledge conditions the interpretation of the information perceived by the senses. But from this example another conclusion can be drawn, concerning the ordering of information as a sign of intelligence. To expose this idea in a formal way let’s consider a numerical sequence, for example the Fibonacci series “0,1,1,2,3,5,8,…”. Similarly to the previous case, the following number should be 13, so that the general term can be expressed as fn=fn-1+fn-2. However, we can define another discrete mathematical function that takes the values “0,1,1,2,3,5,8” for n = 0,1,2,3,4,5,6, but differs for the rest of the values of n belonging to the natural numbers, as shown in the following figure. In fact, with this criterion it is possible to define an infinite number of such functions.

The question, therefore, is: What is so special about the Fibonacci series in relation to the set of functions that meet the condition defined above?

Here we can repeat the argument already used in the case of the Roman numeral series: mathematical training leads to identifying the sequence of numbers as belonging to the Fibonacci series. But this poses a contradiction, since any of the functions that meet the same criterion could have been identified. To clear up this contradiction, Algorithmic Information Theory (AIT) should be used again.

Firstly, it should be stressed that, culturally, the game of riddles implicitly involves following logical rules and that, therefore, the answer is free from arbitrariness. Thus, in the case of number series the game consists of determining a rule that justifies the result. If we now try to identify a simple mathematical rule that determines the sequence “0,1,1,2,3,5,8,…” we see that the expression fn=fn-1+fn-2 fulfills these requirements. In fact, it may well be the simplest expression of this type. The rest are either complex, arbitrary, or simple expressions that follow rules different from the implicit rules of the puzzle.

From the AIT point of view, the solution that contains the minimum information and can therefore be expressed most readily will be the most likely response that the brain will give in identifying a pattern determined by a stimulus. In the example above, the description of the predictable solution will be the one composed of:

• A Turing machine.
• The information to code the calculus rules.
• The information to code the analytical expression of the simplest solution. In the example shown it corresponds to the expression of the Fibonacci series.

Obviously, there are solutions of similar or even less complexity, such as the one performed by a Turing machine that periodically generates the sequence “0,1,2,3,5,8”. But in most cases the solutions will have a more complex description, so that, according to the AIT, in most cases their most compact description will be the sequence itself, which cannot be compressed or expressed analytically.
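The contrast between an analytical description and the raw sequence can be made tangible by comparing their lengths; the program text and the choice of 100 terms are arbitrary illustrations:

```python
# A short textual description (a program) of the Fibonacci sequence...
program = "a, b = 0, 1\nwhile True: print(a); a, b = b, a + b"

# ...versus the literal data it would generate: the first 100 terms
a, b = 0, 1
terms = []
for _ in range(100):
    terms.append(a)
    a, b = b, a + b
data = ",".join(str(t) for t in terms)

# The compact description is far shorter than the data it describes
compact, literal = len(program), len(data)
```

For an incompressible sequence no such short program exists, and the sequence itself becomes its own shortest description.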

For example, it is easy to check that the function:

p(n) = C(n,1) - C(n,2) + 2C(n,3) - 3C(n,4) + 5C(n,5) - 8C(n,6),

where C(n,k) denotes the binomial coefficient (this is the degree-6 polynomial that interpolates the first seven Fibonacci numbers), generates for integer values of n the sequence “0,1,1,2,3,5,8,0,-62,-279,…”, so it could be said that the quantities following the proposed series are “…,0,-62,-279,…”. Obviously, the complexity of this sequence is higher than that of the Fibonacci series, as a result of the complexity of the description of the function and the operations to be performed.
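This continuation can be checked directly, assuming (as the generated values indicate) that the function is the degree-6 polynomial interpolating the first seven Fibonacci numbers, written in Newton forward-difference form:

```python
from math import comb

# Degree-6 polynomial through (n, Fib(n)) for n = 0..6; the coefficients
# 0, 1, -1, 2, -3, 5, -8 are the forward differences of 0,1,1,2,3,5,8
def p(n):
    return (comb(n, 1) - comb(n, 2) + 2 * comb(n, 3)
            - 3 * comb(n, 4) + 5 * comb(n, 5) - 8 * comb(n, 6))

seq = [p(n) for n in range(10)]  # matches Fibonacci up to n = 6, then diverges
```

`seq` is `[0, 1, 1, 2, 3, 5, 8, 0, -62, -279]`: identical to the Fibonacci series on the given terms, radically different afterwards.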

Similarly, we can try to define other algorithms that generate the proposed sequence, which will grow in complexity. This shows the possibility of interpreting the information from different points of view that go beyond the obvious solutions, which are conditioned by previous experiences.

If, in addition to all the above, we consider that, according to Landauer’s principle, processing information has a minimum energy cost, then the resolution of complex problems requires not only a greater computational effort, but also a greater energy effort.

This may explain the feeling of satisfaction produced when a certain problem is solved, and the tendency to engage in relaxing activities that are characterized by simplicity or monotony. Conversely, the lack of response to a problem produces frustration and restlessness.

This is in contrast to the idea that is generally held about intelligence. Thus, the ability to solve problems such as the ones described above is considered a sign of intelligence. But on the contrary, the search for more complex interpretations does not seem to have this status. Something similar occurs with the concept of entropy, which is generally interpreted as disorder or chaos and yet from the point of view of information it is a measure of the amount of information.

Another aspect that should be highlighted is the fact that the cognitive process is supported by the processing of information and, therefore, subject to the rules of mathematical logic, whose nature is irrefutable. This nuance is important, since emphasis is generally placed on the physical and biological mechanisms that support the cognitive processes, which may eventually be assigned a spiritual or esoteric nature.

Therefore, it can be concluded that the cognitive process is subject to the nature and structure of information processing and that, from the formal point of view of the Theory of Computability, it corresponds to a Turing machine. Nature has thus created a processing structure based on the physics of emerging reality (classical reality), materialized in a neural network, which interprets the information encoded by the senses of perception according to the algorithms established by previous experience. As a consequence, the system performs two fundamental functions, as shown in the figure:

• Interact with the environment, producing a response to the input stimuli.
• Enhance the ability to interpret, acquiring new skills (algorithms) as a result of the learning capacity provided by the neural network.

But the truth is that the input stimuli are conditioned by the sensory organs, which constitute a first filter of information and therefore they condition the perception of reality. The question that can be raised is: What impact does this filtering have on the perception of reality?

# Reality as an irreducible layered structure

Note: This post is the first in a series in which macroscopic objects will be analyzed from a quantum and classical point of view, as well as the nature of the observation. Finally, all of them will be integrated into a single article.

### Introduction

Quantum theory establishes the fundamentals of the behavior of particles and their interaction with each other. In general, these fundamentals are applied to microscopic systems formed by a very limited number of particles. However, nothing indicates that quantum theory cannot be applied to macroscopic objects, since the emerging properties of such objects must be based on the underlying quantum reality. Obviously, there is a practical limitation established by the increase in complexity, which grows exponentially as the number of elementary particles increases.

The initial reference to this approach was made by Schrödinger [1], indicating that the quantum superposition of states did not represent any contradiction at the macroscopic level. To do this, he used what is known as the Schrödinger’s cat paradox, in which the cat could be in a superposition of states, one in which the cat was alive and another in which the cat was dead. Schrödinger’s original motivation was to raise a discussion about the EPR paradox [2], which argued that quantum theory was incomplete. This question has finally been settled by Bell’s theorem [3] and its experimental verification by Aspect [4], making it clear that the entanglement of quantum particles is a reality, on which quantum computation is based [5]. A summary of the aspects related to the realization of a quantum system that emulates Schrödinger’s cat has been made by Auletta [6], although these are restricted to non-macroscopic quantum systems.

But the question that remains is whether quantum theory can be used to describe macroscopic objects and whether the concept of quantum entanglement applies to these objects as well. Contrary to Schrödinger’s position, Wigner argued, through the friend paradox, that quantum mechanics could not have unlimited validity [7]. Recently, Frauchiger and Renner [8] have proposed a virtual experiment (Gedankenexperiment) that shows that quantum mechanics is not consistent when applied to complex objects.

The Schrödinger’s cat paradigm will be used to analyze these results from two points of view, with no loss of generality: one as a quantum object and the other as a macroscopic object (in a later post). This will allow their consistency and functional relationship to be determined, leading to the establishment of an irreducible functional structure. As a consequence, it will also be necessary to analyze the nature of the observer within this functional structure (also in later posts).

### Schrödinger’s cat as a quantum reality

In the Schrödinger cat experiment there are several entities [1], the radioactive particle, the radiation monitor, the poison flask and the cat. For simplicity, the experiment can be reduced to two quantum variables: the cat, which we will identify as CAT, and the system formed by the radioactive particle, the radiation monitor and the poison flask, which we will define as the poison system PS.

These quantum variables can be expressed as [9]:

|CAT⟩ = α1|DC⟩ + β1|LC⟩. Quantum state of the cat: dead cat |DC⟩, live cat |LC⟩.

|PS⟩ = α2|PD⟩ + β2|PA⟩. Quantum state of the poison system: poison deactivated |PD⟩, poison activated |PA⟩.

The quantum state of the Schrödinger cat experiment SCE as a whole can be expressed as:
|SCE⟩ = |CAT⟩⊗|PS⟩= α1α2|DC⟩|PD⟩+α1β2|DC⟩|PA⟩+β1α2|LC⟩|PD⟩+β1β2|LC⟩|PA⟩.

Since, for a classical observer, the states |DC⟩|PD⟩ and |LC⟩|PA⟩ are not compatible with observation, the experiment must be prepared in such a way that the quantum states |CAT⟩ and |PS⟩ are entangled [10] [11], so that the wave function of the experiment must be:

|SCE⟩ = α|DC⟩|PA⟩ + β|LC⟩|PD⟩.
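The difference between a plain product state and the entangled state above can be checked numerically. A minimal pure-Python sketch, using the basis order |DC⟩|PD⟩, |DC⟩|PA⟩, |LC⟩|PD⟩, |LC⟩|PA⟩ and the standard determinant test for separability of a two-component state (the amplitude values are arbitrary examples):

```python
import math

# Tensor product |CAT> (x) |PS> of two 2-state vectors
def kron2(u, v):
    return [u[0] * v[0], u[0] * v[1], u[1] * v[0], u[1] * v[1]]

# A 4-amplitude state s factorizes into |CAT> (x) |PS> if and only if
# the 2x2 amplitude matrix [[s0, s1], [s2, s3]] has zero determinant
def separable(s):
    return abs(s[0] * s[3] - s[1] * s[2]) < 1e-12

amp = 1 / math.sqrt(2)
product_state = kron2([amp, amp], [amp, amp])  # all four outcomes possible
entangled = [0.0, amp, amp, 0.0]  # alpha|DC>|PA> + beta|LC>|PD>
```

`separable(product_state)` holds while `separable(entangled)` fails: the entangled SCE state cannot be written as independent |CAT⟩ and |PS⟩ states.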

As a consequence, the observation of the experiment [12] will result in a state:

|SCE⟩ = |DC⟩|PA⟩, with probability |α|², (poison activated, dead cat).

or:

|SCE⟩ = |LC⟩|PD⟩, with probability |β|², (poison deactivated, live cat).

Although from the formal point of view of quantum theory the approach of the experiment is correct, for a classical observer the experiment presents several objections. One of these is related to the fact that the experiment requires establishing “a priori” the requirement that the PS and CAT systems are entangled. This is contradictory, since from the point of view of the preparation of the quantum experiment there is no such restriction: results with quantum states |DC⟩|PD⟩ or |LC⟩|PA⟩ could exist, something totally impossible for a classical observer, assuming in any case that the poison is effective, which is taken for granted in the experiment. Therefore, the SCE experiment is inconsistent, and it is necessary to analyze the root of the incongruence between the SCE quantum system and the result of the observation.

Another objection, which may seem trivial, is that for the SCE experiment to collapse into one of its states, the observer OBS must be entangled with the experiment, since the experiment must interact with it. Otherwise, the operation performed by the observer would have no effect on the experiment. For this reason, this aspect will require a more detailed analysis.

Returning to the first objection, from the perspective of quantum theory it may seem possible to prepare the PS and CAT systems in an entangled superposition of states. However, it should be noted that both systems are composed of a huge number of non-entangled quantum subsystems Si subject to continuous decoherence [13] [14]. It should be noted that the Si subsystems will internally have an entangled structure. Thus, the CAT and PS systems can be expressed as:

|CAT⟩ = |SC1⟩ ⊗ |SC2⟩ ⊗…⊗ |SCi⟩ ⊗…⊗ |SCk⟩,

|PS⟩= |SP1⟩⊗|SP2⟩⊗…⊗|SPi⟩⊗…⊗|SPl⟩,

in such a way that the observation of a certain subsystem causes its state to collapse, producing no influence on the rest of the subsystems, which will develop an independent quantum dynamics. This makes it unfeasible for the states |LC⟩ and |DC⟩ to be simultaneous, and as a consequence the CAT system cannot be in a superposition of these states. An analogous reasoning can be made for the PS system, although it may seem obvious that functionally it is much simpler.

In short, from a theoretical point of view it is possible to have a quantum system equivalent to the SCE, for which all the subsystems must be fully entangled with each other, and in addition the system will require an “a priori” preparation of its state. However, the emerging reality differs radically from this scenario, so that the experiment seems to be unfeasible in practice. But the most striking fact is that, if the SCE experiment is generalized, the observable reality would be radically different from the observed reality.

To better understand the consequences of the quantum state of the SCE system having to be prepared “a priori”, imagine that the supplier of the poison has changed its contents to a harmless liquid. Despite this, the prepared experiment would still be able to kill the cat, now without any physical cause.

From these conclusions the question can be raised as to whether quantum theory can explain in a general and consistent way the observable reality at the macroscopic level. But perhaps the question is also whether the assumptions on which the SCE experiment has been conducted are correct. Thus, for example: Is it correct to use the concepts of live cat or dead cat in the domain of quantum physics? Which in turn raises other kinds of questions, such as: Is it generally correct to establish a strong link between observable reality and the underlying quantum reality?

The conclusion that can be drawn from the contradictions of the SCE experiment is that the scenario of a complex quantum system cannot be treated in the same terms as a simple system. In terms of quantum computation these correspond, respectively, to systems made up of an enormous number and a limited number of qubits [5]. As a consequence, classical reality will be an irreducible fact, which, though based on quantum reality, ends up being disconnected from it. This leads to defining reality in two independent and irreducible functional layers, a quantum reality layer and a classical reality layer. This would justify the criterion established by the Copenhagen interpretation [15] and its statistical nature as a means of functionally disconnecting both realities. Thus, quantum theory would be nothing more than a description of the information that can emerge from an underlying reality, but not a description of that reality. At this point, it is important to emphasize that statistical behavior is the means by which the functional correlation between processes can be reduced or eliminated [16], and that this would be the cause of irreducibility.

#### References

[1] E. Schrödinger, “Die gegenwärtige Situation in der Quantenmechanik,” Naturwissenschaften, vol. 23, pp. 844-849, 1935.
[2] A. Einstein, B. Podolsky and N. Rosen, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?,” Physical Review, vol. 47, pp. 777-780, 1935.
[3] J. S. Bell, “On the Einstein Podolsky Rosen Paradox,” Physics, vol. 1, no. 3, pp. 195-200, 1964.
[4] A. Aspect, P. Grangier and G. Roger, “Experimental Tests of Realistic Local Theories via Bell’s Theorem,” Phys. Rev. Lett., vol. 47, pp. 460-463, 1981.
[5] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2011.
[6] G. Auletta, Foundations and Interpretation of Quantum Mechanics, World Scientific, 2001.
[7] E. P. Wigner, “Remarks on the mind-body question,” in Symmetries and Reflections, Indiana University Press, 1967, pp. 171-184.
[8] D. Frauchiger and R. Renner, “Quantum Theory Cannot Consistently Describe the Use of Itself,” Nature Commun., vol. 9, no. 3711, 2018.
[9] P. Dirac, The Principles of Quantum Mechanics, Oxford University Press, 1958.
[10] E. Schrödinger, “Discussion of Probability Relations between Separated Systems,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 31, no. 4, pp. 555-563, 1935.
[11] E. Schrödinger, “Probability Relations between Separated Systems,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 32, no. 3, pp. 446-452, 1936.
[12] M. Born, “On the quantum mechanics of collision processes,” Zeit. Phys. (D. H. Delphenich translation), vol. 37, pp. 863-867, 1926.
[13] H. D. Zeh, “On the Interpretation of Measurement in Quantum Theory,” Found. Phys., vol. 1, no. 1, pp. 69-76, 1970.
[14] W. H. Zurek, “Decoherence, einselection, and the quantum origins of the classical,” Rev. Mod. Phys., vol. 75, no. 3, pp. 715-775, 2003.
[15] W. Heisenberg, Physics and Philosophy: The Revolution in Modern Science, Harper, 1958.
[16] E. W. Weisstein, “Covariance,” MathWorld. [Online]. Available: http://mathworld.wolfram.com/Covariance.html.

# Why does the rainbow have 7 colors?

Published on OPENMIND August 8, 2018

## Color as a physical concept

Visible light, heat, radio waves and other types of radiation all have the same physical nature and are constituted by a flow of particles called photons. The photon or “light quantum” was proposed by Einstein, who was awarded the Nobel Prize in 1921 for this work. It is one of the elementary particles of the standard model, belonging to the boson family. The fundamental characteristic of a photon is its capacity to transfer energy in quantized form, which is determined by its frequency, according to the expression E=h∙ν, where h is the Planck constant and ν the frequency of the photon.

Electromagnetic spectrum

Thus, we can find photons ranging from very low frequencies, located in the band of radio waves, to photons of very high energy called gamma rays, as shown in the following figure, forming a continuous range of frequencies that constitutes the electromagnetic spectrum. Since the photon can be modeled as a sinusoid traveling at the speed of light c, the length of a complete cycle is called the photon wavelength λ, so the photon can be characterized either by its frequency or by its wavelength, since λ=c/ν. It is common to use the term color as a synonym for frequency, since the color of light perceived by humans is a function of frequency. However, as we are going to see, this is not strictly physical but a consequence of the process of measuring and interpreting information, which makes color a reality that emerges from another underlying reality, sustained by the physical reality of electromagnetic radiation.
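As a quick numerical check of E = h∙ν and λ = c/ν (constants rounded; the 540 THz example is an arbitrary frequency in the green band):

```python
h = 6.626e-34  # Planck constant, J*s (rounded)
c = 2.998e8    # speed of light in vacuum, m/s (rounded)

def photon_energy(freq_hz):
    return h * freq_hz  # E = h * nu, in joules

def wavelength_nm(freq_hz):
    return c / freq_hz * 1e9  # lambda = c / nu, in nanometers

green = 540e12  # ~540 THz, a frequency in the green part of the spectrum
```

This gives a wavelength of about 555 nm and an energy of about 3.6e-19 J per photon, well inside the visible band described below.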

Structure of an electromagnetic wave

But before addressing this issue, it should be considered that to detect photons efficiently it is necessary to have a detector called an antenna, whose size must be similar to the wavelength of the photons.

## Color perception by humans

The human eye is sensitive to wavelengths ranging from deep red (700 nm; 1 nanometer = 10⁻⁹ meters) to violet (400 nm). This requires receiving antennas of the order of hundreds of nanometers in size! But for nature this is not a big problem, as complex molecules can easily be this size. In fact, for color vision the human eye is endowed with three types of photoreceptor proteins, which produce a response as shown in the following figure.

Response of photoreceptor cells of the human retina

Each of these types configures a type of photoreceptor cell in the retina which, owing to its morphology, is called a cone. The photoreceptor proteins are located in the cell membrane, so that when they absorb a photon they change shape, opening channels in the cell membrane that generate a flow of ions. After a complex biochemical process, a stream of nerve impulses is produced that is preprocessed by several layers of neurons in the retina and finally reaches the visual cortex through the optic nerve, where the information is processed.

But in this context, the point is that the retinal cells do not measure the wavelength of the photons of the stimulus. Instead, what they do is convert a stimulus of a certain wavelength into three parameters called L, M, S, which are the responses of each type of photoreceptor cell to the stimulus. This has very interesting implications that need to be analyzed. In this way, we can explain aspects such as:

• The reason why the rainbow has 7 colors.
• The possibility of synthesizing the color by means of additive and subtractive mixing.
• The existence of non-physical colors, such as white and magenta.
• The existence of different ways of interpreting color according to the species.
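
Before going through these points, a minimal sketch can make the (L, M, S) encoding concrete. The cone responses are modeled here as Gaussian sensitivity curves; the peak wavelengths and the common width are illustrative assumptions, not measured data:

```python
import math

# Illustrative Gaussian model of the three cone sensitivities.
# Peak wavelengths (nm) and the common width are assumptions for this sketch.
CONES = {"L": 560, "M": 530, "S": 420}   # approximate sensitivity peaks, nm
SIGMA = 35.0                             # assumed common curve width, nm

def lms_response(wavelength_nm):
    """Reduce a monochromatic stimulus to the triple (L, M, S)."""
    return {name: math.exp(-((wavelength_nm - peak) / SIGMA) ** 2)
            for name, peak in CONES.items()}

# The retina passes on the triple, not the wavelength itself:
print(lms_response(580))  # orange-ish stimulus: L strongest, then M, S negligible
print(lms_response(460))  # blue-ish stimulus: S strongest
```

The key point of the sketch is that two different wavelengths are reported to the brain only as two different triples; the wavelength itself is never transmitted.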

To understand this, let us imagine that we are given the response of a measurement system relating L, M, S to wavelength and asked to establish a correlation between them. The first thing we can see is that there are 7 distinct zones along the wavelength axis: 3 peaks and 4 valleys. 7 patterns! This explains why we perceive the rainbow as composed of 7 colors, an emerging reality that results from information processing and transcends physical reality.

But what answer would a bird give us if we asked it about the number of colors of the rainbow? If it could answer, it would most likely say nine! This is because birds have a fourth type of photoreceptor positioned in the ultraviolet, so their perception system will establish 9 regions in the light perception band. And this leads us to ask: what chromatic range is perceived by our hypothetical bird, or by species that have only a single type of photoreceptor? The result is a simple exercise in combinatorics!
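The counting behind this exercise can be sketched in a couple of lines: n overlapping photoreceptor bands produce n peaks and n + 1 valleys, that is, 2n + 1 distinguishable regions. This formula is a hypothetical generalization of the peak-and-valley argument above:

```python
def rainbow_colors(n_photoreceptor_types):
    """Perceived bands in the spectrum: n peaks plus n + 1 valleys."""
    return 2 * n_photoreceptor_types + 1

print(rainbow_colors(3))  # humans, three cone types: 7 colors
print(rainbow_colors(4))  # birds, with an extra ultraviolet cone: 9 colors
print(rainbow_colors(1))  # a species with a single photoreceptor type: 3 regions
```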

On the other hand, the existence of three types of photoreceptors in the human retina makes it possible to synthesize the chromatic range in a relatively precise way by means of the additive combination of three colors, red, green and blue, as is done in video screens. In this way, it is possible to produce at each point of the retina an L, M, S response similar to that produced by a real stimulus, through the weighted application of a mixture of photons of red, green and blue wavelengths.
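This additive synthesis can be sketched as a small linear problem: given the cone responses to three monochromatic primaries, find the weights whose mixture evokes the same (L, M, S) triple as a real stimulus. The Gaussian cone curves and the primary wavelengths below are illustrative assumptions:

```python
import math

# Illustrative Gaussian cone curves (peaks and width are assumptions).
PEAKS = [560, 530, 420]       # L, M, S sensitivity peaks, nm
SIGMA = 35.0
PRIMARIES = [620, 530, 450]   # assumed red, green, blue primary wavelengths, nm

def lms(w):
    """(L, M, S) response to a monochromatic stimulus of wavelength w (nm)."""
    return [math.exp(-((w - p) / SIGMA) ** 2) for p in PEAKS]

def solve3(a, b):
    """Solve a 3x3 linear system a*x = b by Gauss-Jordan elimination."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Columns of A are the cone responses to each primary.
A = [[lms(p)[k] for p in PRIMARIES] for k in range(3)]
target = lms(580)            # an orange monochromatic stimulus
weights = solve3(A, target)  # R, G, B weights reproducing the same triple
mixed = [sum(A[k][j] * weights[j] for j in range(3)) for k in range(3)]

print(weights)  # the mixture contains no 580 nm photons at all...
print(mixed)    # ...yet its (L, M, S) triple matches the original stimulus
```

With real cone fundamentals in place of these toy Gaussians, this same linear-algebra step is essentially the idea behind computing RGB values for a display.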

Similarly, it is possible to synthesize color by subtractive or pigmentary mixing of three colors, magenta, cyan and yellow, as in oil paint or printers. And this is where the virtuality of color is clearly shown, since there are no magenta photons: this stimulus is a mixture of blue and red photons. The same happens with the color white, since no individual photon produces this stimulus; white is the perception of a mixture of photons distributed across the visible band, and in particular of a mixture of red, green and blue photons.
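The claim that there are no magenta photons can be illustrated with the same kind of toy Gaussian cone model (all wavelengths and curve widths are assumptions for the sketch): a red-plus-blue mixture produces an (L, M, S) signature that no single wavelength can reproduce.

```python
import math

# Toy Gaussian cone model; peaks and width are illustrative assumptions.
PEAKS = [560, 530, 420]   # L, M, S sensitivity peaks, nm
SIGMA = 35.0

def lms(w):
    """(L, M, S) response to a monochromatic stimulus of wavelength w (nm)."""
    return [math.exp(-((w - p) / SIGMA) ** 2) for p in PEAKS]

# Magenta stimulus: a mixture of reddish (600 nm) and blue (450 nm) photons.
magenta = [r + b for r, b in zip(lms(600), lms(450))]

# Search the visible band for a single wavelength with the same signature.
best = min(range(400, 701),
           key=lambda w: sum((a - b) ** 2 for a, b in zip(lms(w), magenta)))
err = sum((a - b) ** 2 for a, b in zip(lms(best), magenta))

print(magenta)     # L and S both active while M stays weak
print(best, err)   # even the best single wavelength leaves a large error:
                   # no monochromatic "magenta photon" exists in this model
```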

In short, the perception of color is a clear example of how reality emerges as a result of information processing. Thus, we can see how a given interpretation of the physical information of the visible electromagnetic spectrum produces an emerging reality, based on a much more complex underlying reality.

In this sense, we could ask ourselves what an android with a precise wavelength measurement system would think of the images we synthesize in painting or on video screens. It would surely answer that they do not correspond to the original images, something that for us is practically imperceptible. And this connects with a subject that may seem unrelated: the concept of beauty and aesthetics. The truth is that when we are not able to establish patterns or categories in the information, we perceive it as noise or disorder. Something unpleasant or unsightly!

# Reality as emerging information

## What is reality?

The idea that reality may be nothing more than a result of emerging information is not at all novel. Plato, in what is known as the allegory of the cave, describes how reality is perceived by a group of humans chained in a cave who, from birth, observe reality through the shadows projected on a wall.

Modern version of the allegory of the cave

It is interesting to note that when we refer to perception, anthropic vision plays an important role, which can create some confusion by associating perception with human consciousness. To clarify this point, let’s imagine an automaton of artificial vision. In the simplest case, it will be equipped with image sensors, processes for image processing and a database of patterns to be recognized. Therefore, the system is reduced to information encoded as a sequence of bits and to a set of processes, defined axiomatically, that convert information into knowledge.

Therefore, the acquisition of information always takes place by physical processes, which in the case of the automaton are materialized by means of an image sensor based on electronic technology and in the case of living beings by means of molecular photoreceptors. As algorithmic information theory shows us, this information has no meaning until it is processed, extracting patterns contained in it.

As a result, we can draw general conclusions about the process of perception. Thus, information can be obtained and analyzed with different degrees of detail, giving rise to different layers of reality. This is what gives humans a limited, and sometimes distorted, view of reality.

But in the case of physics, the scientific procedure aims to solve this problem by rigorously contrasting theory and experimentation. This leads to the definition of physical models such as the theory of electromagnetism or Newton’s theory of universal gravitation that condense the behavior of nature to a certain functional level, hiding a more complex underlying reality, which is why they are irreducible models of reality. Thus, Newton’s theory of gravitation models the gravitational behavior of massive bodies without giving a justification for it.

Today we know that the theory of general relativity gives an explanation for this behavior, through the deformation of space-time by the effect of mass, which in turn determines the movement of massive bodies. However, the model is again a description limited to a certain level of detail, proposing a space-time structure that may be static, expanding or contracting, but without justifying it. Nor does it establish a link with the quantum behavior of matter, which is one of the objectives of the unification theories. What we can say is that all these models are descriptions of reality at a certain functional level.

Universal Gravitation vs. Relativistic Mechanics

## Reality as information processing

But the question is: what does this have to do with perception? As we have described, perception is the result of information processing, but this is a term generally reserved for human behavior, which entails a certain degree of subjectivity or virtuality. In short, perception is a mechanism for establishing reality as the result of a process of interpreting information. For this reason, we handle concepts such as virtual reality, something that computers have boosted but that is nothing new, and that we can experience through daydreaming or simply by reading a book.

Leaving aside a controversial issue such as the concept of consciousness: what is the difference between the interaction of two atoms, two complex molecules or two individuals? Let us look at the similarities first. In all these cases, the two entities exchange and process information, in each particular case making a decision: to form a molecule, to synthesize a new molecule, or to go to the cinema. The difference lies in the information exchanged and the functionality of each entity. Can we point to any other difference? Our anthropic vision tells us that we humans are superior beings, which makes a fundamental difference. But let us think of biology: it is nothing more than a complex interaction between molecules, to which we owe our existence!

We could argue that in the case where human intelligence intervenes the situation is different. However, the structure of the three cases is the same: the information transferred between the entities, which as far as we know have a quantum nature, is processed with a certain functionality. The difference we can point to is that in the case of human intervention we say that the functionality is intelligent. But we must consider that it is very easy to cheat with natural language, as becomes clear when analyzing its nature.

In short, one could say that reality is the result of emerging information and its subsequent interpretation by means of processes, whose definition is always axiomatic, at least as far as knowledge reaches.

Perhaps all this is very abstract, so a simple example, which we find in advertising techniques, can give us a more intuitive idea. Let us suppose an image whose pixels are actually images that appear when we zoom in, as shown in the figure.

Perception of a structure in functional layers

For an observer with a limited visual capacity, only a reality that shows a specific scene of a city will emerge. But an observer with much greater visual acuity, or with an appropriate measuring instrument, will observe a much more complex reality. This example shows that the process of observing a mathematical object formed by a sequence of bits can be structured into irreducible functional layers, depending on the processes used to interpret the information. Since everything observable in our Universe seems to follow this pattern, we can ask ourselves: is this functional structure the foundation of our Universe?

# What do we mean by reality?

In the article “Reality and information: Is information a physical entity?” we analyze what we mean by reality, for which the models established by physics are taken as a reference since they have reached a level of formal definition not attained so far in other areas of knowledge.

One of the conclusions of this analysis is that physical models are axiomatic mathematical structures that describe an emerging layer of reality without needing a connection with the underlying reality. This means that models describe reality at a given functional level. This makes reality closely linked to observation, which justifies our view of reality as determined by our perception capabilities.

Consequently, reality can be structured into irreducible functional layers, and only when one looks at the edges or boundaries of the models describing the functionality of each emergent layer are there signs of another more complex underlying reality.

In this sense, physics aims to reveal the ultimate foundation of reality, an aim that has materialized in the development of quantum physics and in particular in the standard model of particles, although the questions these models raise suggest a more complex reality. However, the structure of layers could have no end and, according to Gödel’s incompleteness theorem, be an undecidable, that is, unsolvable, problem.

All this is very abstract, but with an example we can understand it better. Thus, let us consider the system of human color perception, based on three types of photoreceptors tuned to the red, green and blue bands. Due to Heisenberg’s uncertainty principle, these photoreceptors also respond to stimuli of nearby frequencies (something we could discuss in detail in the future), as shown in the figure. As a consequence, the photoreceptors do not directly measure the frequency of color stimuli; instead, they translate the frequency into three parameters (L, M, S) corresponding to the excitation level of each type of photoreceptor.

This makes possible the synthesis of color from three components: red, green and blue in the case of additive synthesis, and yellow, cyan and magenta in subtractive synthesis. Thus, if a synthesized image were analyzed by spectroscopy, its spectral content would have very little to do with that of the original, even though the perceived colors match. In the case of birds, the rainbow must hypothetically have 9 colors, since they are equipped with a fourth type of photoreceptor sensitive to ultraviolet.

One of the consequences of this measurement system, designed by natural evolution, is that the rainbow is composed of seven colors, determined by the three peaks and four valleys produced by the superposition of the photoreceptor responses. In addition, the system creates the perception of additional virtual colors, such as magenta and white. Magenta is the result of the simultaneous stimulation of the blue and red bands. White is the result of the simultaneous stimulation of the red, green and blue bands.

From the physical point of view, this color structure does not exist, since the physical parameter that characterizes a photon is its frequency (or its wavelength λ = c/f). Therefore, it can be concluded that color perception is an emergent structure of a more complex one, determined by an axiomatic observational system. But for the moment, the analysis of the term “axiomatic” will be left for later!

This is an example of how reality emerges from more complex underlying structures, so we can say that reality and observation are inseparable terms. And make no mistake! Although the example refers to the perception of color by humans, this is materialized in a mathematical model of information processing.

Now the question is: how far can we look into this layered structure? In the above case, physics shows, by means of electromagnetism, that the spectrum is continuous and includes radio waves, microwaves, heat, infrared, visible light, ultraviolet, etc. But electromagnetism is nothing more than an emergent model of a more complex underlying reality, as quantum physics shows us. Thus, electromagnetic waves are a manifestation of a flow of quantum particles: photons.

And here a much more complex reality appears, in which a photon seems to follow multiple paths simultaneously, or to have multiple frequencies simultaneously, even infinitely many, until it is observed, at which point its position, energy, trajectory, etc., are determined with a precision limited by Heisenberg’s uncertainty principle. And all this is described by an abstract mathematical model contrasted by observation.

The search for the ultimate reasons behind things has led physics to delve, with remarkable success, into the natural processes hidden from our systems of perception. For this purpose, experiments have been designed and detectors developed that expand our capacity for perception and have resulted in models such as the standard particle model.

The point is that, despite having increased our capacity for perception and, as a result, our knowledge, it seems that we are again in the same situation. We are left with new, much more complex models of an underlying abstract reality, described in mathematical language. This is a clear sign that we cannot find an elementary entity that explains the foundation of reality, since these models presuppose the existence of complex entities. Thus, everything seems to indicate that we enter an endless loop, in which a greater perception of reality leads us to define a new abstract model, which in turn opens a new horizon of reality and hence the need to go deeper into it.

As we can see, we keep referring to abstract models to describe reality. For this reason, the second part of the article is dedicated to them. But we will discuss this later!