The perception of time

In the post “What is the nature of time?”, the essence of time was analyzed from the point of view of physics. Several conclusions were drawn from that analysis, which can be summarized in the following points:

  • Time is an observable that emerges at the classical level from quantum reality.
  • Time is determined by the sequence of events that drives the dynamics of classical reality.
  • Time is not reversible, but is a unidirectional process determined by the sequence of events (arrow of time), in which entropy grows in the direction of the sequence of events. 
  • Quantum reality has a reversible nature, so the entropy of the system is constant and therefore its description is an invariant.
  • The space-time synchronization of events requires an intimate connection of space-time at the level of quantum reality, which is deduced from the theory of relativity and quantum entanglement.

Therefore, a sequence of events can be established which allows describing the dynamics of a classical system (CS) in the following way:

CS = {… Si-2, Si-1, Si, Si+1, Si+2,…}, where Si is the state of the system at instant i.

A consequence of this perspective is that, from a perceptual point of view, the past can be defined as the sequence {… S-2, S-1}, the future as the sequence {S+1, S+2, …} and the present as the state S0.

At this point it is important to emphasize that these states are perfectly distinguishable within a sequential conception (time), since the amount of information of each state, determined by its entropy, satisfies:

  H(Si) < H(Si+1) [1].
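
As a toy numerical illustration of this ordering (the distributions below are invented for the example, not derived from any physical system), the Shannon entropy of two successive coarse-grained states can be compared in a few lines of Python:

```python
import math

def H(p):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

S_i  = [0.7, 0.2, 0.1]    # earlier, more ordered state (assumed)
S_i1 = [0.4, 0.3, 0.3]    # later, more spread-out state (assumed)
print(H(S_i), H(S_i1), H(S_i) < H(S_i1))   # ~1.16, ~1.57, True
```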

Therefore, it seems necessary to analyze how this sequence of states can be interpreted by an observer, the process of perception being a very prominent factor in the development of philosophical theories on the nature of time.

Without going into the foundation of these theories, since we have exhaustive references on the subject [2], we will focus on how the sequence of events produced by the dynamics of a system can be interpreted from the point of view of the mechanisms of perception [3] and from the perspective currently offered by the knowledge on Artificial Intelligence (AI) [4].

Nevertheless, let us make a brief note on what physical time means. According to the theory of relativity, space-time is as if the vacuum contained a network of clocks and measuring rods, forming a reference system, in such a way that its geometry depends on the gravitational effects and the relative velocity of the observer's own reference system. And it is at this point that we can go a step further in the interpretation of time, if we consider the observer as a perceptive entity and establish a relationship between physics and perception.

The physical structure of space-time

What we are going to discuss next is whether the sequence of states {… S-2, S-1, S0, S+1, S+2, …} is a physical reality or, on the contrary, a purely mathematical construction, such that the concepts of past, present and future are exclusively a consequence of the perception of this sequence of states. This would mean that the only physical reality is the state of the system S0, and that the sequences {… S-2, S-1} and {S+1, S+2, …} are an abstraction or fiction created by the mathematical model.

The contrast between these two views has an immediate consequence. In the first case, in which the sequence of states has physical reality, the physical system would be formed by the whole set of states {… S-2, S-1, S0, S+1, S+2, …}. This would imply a physical behavior different from that of the observed universe, which reinforces the strictly mathematical nature of the sequence of states.

In the second hypothesis there would only be a physical reality determined by the state of the system S0, in such a way that physical time would be an emergent property, a consequence of the entropy difference between states, which differentiates them and makes them observable.

This conception must be consistent with the theory of relativity, which is possible if we consider that one of the consequences of its postulates is the causality of the system: the sequence of events is the same in all reference systems, even though the space-time geometry is different in each of them and, therefore, the emergent space-time magnitudes differ.

At this point one could posit the invariance of the sequence of events, together with covariance, as the fundamental postulates of the theory of relativity. But this is another subject.

Past, present and future

From this physical conception of space-time, the question that arises is how this physical reality determines or conditions an observer’s perception of time.

Thus, in the post “The predictive brain”, the ability of neural tissue to process time, which allows higher living beings to interact with the environment, was discussed indirectly. This requires not only establishing space-time models, but also making space-time predictions [5]. Time perception requires discriminating intervals of the order of milliseconds, in order to coordinate in real time the stimuli produced by the sensory organs and the actions that activate the motor organs. The performance of these functions is distributed in the brain and involves multiple neural structures, such as the basal ganglia, cerebellum, hippocampus and cerebral cortex [6] [7].

To this we must add that the brain is capable of establishing long-term timelines, as shown by the perception of time in humans [8], which makes it possible to build a narrative of the sequence of events, influenced by the subjective interest of those events.

This indicates that when we speak generically of “time” we should establish the context to which we refer. Thus, when we speak of physical time we would be referring to relativistic time, as the time that elapses between two events and that we measure by means of what we define as a clock.

But when we refer to the perception of time, a perceptual entity, human or artificial, interprets the past as something physically real, based on the memory provided by classical reality. But such reality does not exist once the sequence of events has elapsed, since physically only the state S0 exists, so that the states Si, i<0, are only a fiction of the mathematical model. In fact, the very foundation of the mathematical model shows, through chaos theory [9], that it is not possible to reconstruct the states Si, i<0, from S0. In the same way it is not possible to define the future states, although here an additional element appears determined by the increase of the entropy of the system.

With this, we are hypothesizing that the classical universe is S≡S0, and that the states Si, i≠0 have no physical reality (another thing is the quantum universe, which is reversible, so all its states have the same entropy! Although at the moment it is nothing more than a set of mathematical models). Colloquially, this would mean that the classical universe does not have a repository of Si states. In other words, the classical universe would have no memory of itself.

Thus, it is S that supports the memory mechanisms and this is what makes it possible to make a virtual reconstruction of the past, giving support to our memories, as well as to areas of knowledge such as history, archeology or geology. In the same way, state S provides the information to make a virtual construction of what we define as the future, although this issue will be argued later. Without going into details, we know that in previous states we have had some experiences that we store in our memory and in our photo albums.

Therefore, according to this hypothesis it can be concluded that the concepts of past and future do not correspond to a physical reality: the sequences of states {… S-2, S-1} and {S+1, S+2, …} are only a mathematical artifact. This means that past and future are virtual constructs, materialized on the basis of the present state S through the mechanisms of perception and memory. The question that arises, and that we will try to answer, is how the mechanisms of perception construct these concepts.

Mechanisms of perception

Natural processes are determined by the dynamics of the system in such a way that, according to the proposed model, there is only what we define as the present state S. Consequently, if the past and the future have no physical reality, it is worth asking whether plants or inanimate beings are aware of the passage of time.

It is obvious that for humans the answer is yes, otherwise we would not be talking about it. And the reason for this is the information about the past contained in the state S. But this requires the existence of information processing mechanisms that make it possible to virtually construct the past. Similarly, these mechanisms may allow the construction of predictions about future states that constitute the perception of the future [10].

For this, the cognitive function of the brain requires the coordination of neural activity at different levels, from neurons, neural circuits, to large-scale neural networks [7]. As an example of this, the post “The predictive brain” highlights the need to coordinate the stimuli perceived by the sensory organs with the motor organs, in order to be able to interact with the environment. Not only that, but it is essential for the neural tissue to perform predictive processing functions [5], thus overcoming the limitations caused by the response times of neurons.

As already indicated, the perception of time involves several neural structures, which allow the measurement of time at different scales. Thus, the cerebellum allows establishing a time base on the scale of tens of milliseconds [11], analogous to a spatiotemporal metric. Since the dynamics of events is something physical that modifies the state of the system S, the measurement of these changes by the brain requires a physical mechanism that memorizes these changes, analogous to a delay line, which seems to be supported by the cerebellum.
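
A purely conceptual sketch of this delay-line analogy may help; this is a toy, not a neural model, and the tick duration and buffer depth are arbitrary assumptions. A ring buffer can be read out to estimate how long ago an event occurred:

```python
from collections import deque

TICK_MS = 10                       # one slot per tick, on the tens-of-ms scale
DEPTH = 20                         # the line only spans DEPTH * TICK_MS = 200 ms

line = deque([0] * DEPTH, maxlen=DEPTH)

def tick(event):
    """Advance the delay line one time step; old samples shift down the line."""
    line.appendleft(event)

def ms_since_last_event():
    """Read elapsed time as the position of the event along the line."""
    for slot, v in enumerate(line):
        if v:
            return slot * TICK_MS
    return None                    # the event has fallen off the end of the line

tick(1)                            # an event arrives
for _ in range(7):
    tick(0)                        # 7 silent ticks elapse
print(ms_since_last_event())       # -> 70
```

The finite depth of the buffer also illustrates why such a mechanism can only work within very short temporal windows, as discussed next.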

However, this estimation of time cannot be considered at the psychological level as a high-level perceptual functionality, since it is only effective within very short temporal windows, necessary for the performance of functions of an automatic or unconscious nature. For this reason, one could say that time as a physical entity is not perceived by the brain at the conscious level. Thus, what we generally define as time perception is a relationship between events that constitute a story or narrative. This involves processes of attention, memory and consciousness supported in a complex way, involving structures from the basal ganglia to the cerebral cortex, with links between temporal and non-temporal perception mechanisms [12] [13].

Given the complexity of the brain and of the mechanisms of perception, attention, memory and self-awareness, it is not possible, at least for the time being, to understand in detail how humans construct temporal stories. Fortunately, we now have AI models that allow us to understand how this may be possible and how stories and narratives can be constructed from the sequential perception of daily life events. A paradigmatic example is the “Large Language Models” (LLMs) which, based on natural language processing (NLP) techniques and neural networks, are capable of understanding, summarizing, generating and predicting new content, and which raise the debate on whether human cognitive capabilities could emerge in these generic models if provided with sufficient processing resources and training data [14].

Without delving into this debate, today anyone can verify through this type of applications (ChatGPT, BARD, Claude, etc.) how a completely consistent story can be constructed, both in its content and in its temporal plot, from the human experiences reflected in written texts with which these models have been trained.

Taking these models as a reference provides solid evidence about perception in general and the perception of time in particular. It should also be noted that these models show how new properties emerge in their behavior as their complexity grows [15]. This gives a clue as to how new perceptual capabilities, or even concepts such as self-awareness, may emerge. This last point is purely speculative and, should it end up being the case, it raises the problem discussed in the post “Consciousness from the AI point of view” of how to know that an entity is self-aware.

But returning to the subject at hand, what is really important from the point of view of the perception of the passage of time is how the timeline of stories or narratives is a virtual construction that transcends physical time. Thus, the chronological line of events does not refer to a measure of physical time, but is a structure in which a hierarchy or order is established in the course of events.

Virtual perception of time

It can therefore be concluded that the brain only needs to measure physical time in the very short term, in order to interact with the physical environment. Beyond that, all that is needed is a chronological order without a precise reference to physical time. Thus we can refer to an hour, day, month, year, or to another event, as a way of ordering events, but always within a purely virtual context. This is one of the keys to how the passage of time is perceived: virtual time stretches according to the amount of information or the relevance of events, something that is evident in playful or stressful situations [16].

Conclusions

The first conclusion that results from the above analysis is the existence of two conceptions of time. The first relates to physical time, corresponding to the sequence of states of a physical system; the second corresponds to the stimuli this sequence of states produces on a perceptive intelligence.

Both concepts are elusive when it comes to understanding them. We are able to measure physical time with great precision; however, the theory of relativity shows space-time as an emergent reality that depends on the reference system. Moreover, the synchronization of clocks and the establishment of a space-measuring structure may seem somewhat contrived, oriented simply to the understanding of space-time from the point of view of physics. On the other hand, the comprehension of cognitive processes still holds many unknowns, although new developments in AI allow us to intuit their foundations, which sheds some light on the concept of psychological time.

The interpretation of time as the sequence of events or states occurring within a reference system is consistent with the theory of relativity and also allows for a simple justification of the psychological perception of time as a narrative.

The hypothesis that the past and the future have no physical reality and that, therefore, the universe keeps no record of the sequence of states, supports the idea that these concepts are an emergent reality at the cognitive level, so that the conception of time at the perceptual level would be based on the information contained in the current state of the system, exclusively. 

From the point of view of physics this hypothesis does not contradict any physical law. Moreover, it can be considered fundamental in the theory of relativity, since it ensures a causal behavior that would settle the question of temporal irreversibility and the impossibility of traveling either to the past or to the future. In addition, the invariance of the time sequence supports the concept of causality, which is fundamental for the emergent system to be logically consistent.

References

[1] F. Schwabl, Statistical Mechanics, pp. 491-494, Springer, 2006.
[2] N. Emery, N. Markosian and M. Sullivan, “Time,” The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.). [Online]. Available: https://plato.stanford.edu/archives/win2020/entries/time/.
[3] E. R. Kandel, J. H. Schwartz, S. A. Siegelbaum and A. J. Hudspeth, Principles of Neural Science, McGraw-Hill, 2013.
[4] F. Emmert-Streib, Z. Yang, S. Tripathi and M. Dehmer, “An Introductory Review of Deep Learning for Prediction Models With Big Data,” Front. Artif. Intell., 2020.
[5] W. Wiese and T. Metzinger, “Vanilla PP for philosophers: a primer on predictive processing,” in Philosophy and Predictive Processing, T. Metzinger and W. Wiese, Eds., pp. 1-18, 2017.
[6] J. Hawkins and S. Ahmad, “Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex,” Frontiers in Neural Circuits, vol. 10, no. 23, 2016.
[7] S. Rao, A. Mayer and D. Harrington, “The evolution of brain activation during temporal processing,” Nature Neuroscience, vol. 4, pp. 317-323, 2001.
[8] V. Evans, Language and Time: A Cognitive Linguistics Approach, Cambridge University Press, 2013.
[9] R. Bishop, “Chaos,” The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), 2017. [Online]. Available: https://plato.stanford.edu/archives/spr2017/entries/chaos/. [Accessed: 7 September 2023].
[10] A. Nayebi, R. Rajalingham, M. Jazayeri and G. R. Yang, “Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes,” arXiv:2305.11772v2, 2023.
[11] R. B. Ivry, R. M. Spencer, H. N. Zelaznik and J. Diedrichsen, “The Cerebellum and Event Timing,” Annals of the New York Academy of Sciences, vol. 978, 2002.
[12] W. J. Matthews and W. H. Meck, “Temporal cognition: Connecting subjective time to perception, attention, and memory,” Psychol. Bull., vol. 142, no. 8, pp. 865-907, 2016.
[13] A. Kok, Functions of the Brain: A Conceptual Approach to Cognitive Neuroscience, Routledge, 2020.
[14] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean and W. Fedus, “Emergent Abilities of Large Language Models,” Transactions on Machine Learning Research, https://openreview.net/forum?id=yzkSU5zdwD, 2022.
[15] T. Webb, K. J. Holyoak and H. Lu, “Emergent Analogical Reasoning in Large Language Models,” Nature Human Behaviour, vol. 7, pp. 1526-1541, 2023.
[16] P. U. Tse, J. Intriligator, J. Rivest and P. Cavanagh, “Attention and the subjective expansion of time,” Perception & Psychophysics, vol. 66, pp. 1171-1189, 2004.

What is the nature of mathematics?

The ability of mathematics to describe the behavior of nature, particularly in the field of physics, is a surprising fact, especially when one considers that mathematics is an abstract entity created by the human mind and disconnected from physical reality.  But if mathematics is an entity created by humans, how is this precise correspondence possible?

For centuries this has been a topic of debate, focused on two opposing ideas: is mathematics invented or discovered by humans?

This question has divided the scientific community: philosophers, physicists, logicians, cognitive scientists and linguists. It can be said that not only is there no consensus, but positions are generally totally opposed. Mario Livio, in the essay “Is God a Mathematician?” [1], describes in a broad and precise way the historical events on the subject, from the Greek philosophers to our days.

The aim of this post is to analyze this dilemma, introducing new analysis tools such as Information Theory (IT) [2], Algorithmic Information Theory (AIT) [3] and Computation Theory (CT) [4], without forgetting the perspective offered by new knowledge about Artificial Intelligence (AI).

In this post we will briefly review the current state of the issue, without going into its historical development, trying to identify the difficulties that hinder its resolution, so that subsequent posts can analyze the problem from a perspective different from the conventional one, using the logical tools offered by the above theories.

Currents of thought: invented or discovered?

In a very simplified way, it can be said that at present the position that mathematics is discovered by humans is headed by Max Tegmark, who states in “Our Mathematical Universe” [5] that the universe is a purely mathematical entity, which would explain why mathematics describes reality with precision: reality itself would be a mathematical entity.

At the other extreme, there is a large group of scientists, including cognitive scientists and biologists, who, based on the brain's capabilities, maintain that mathematics is an entity invented by humans.

Max Tegmark: Our Mathematical Universe

In both cases, there are no arguments that tip the balance towards either hypothesis. Thus, Max Tegmark maintains that the definitive theory (Theory of Everything) cannot include concepts such as “subatomic particles”, “vibrating strings”, “space-time deformation” or other man-made constructs. Therefore, the only possible description of the cosmos involves only abstract concepts and the relations between them, which for him constitutes the operative definition of mathematics.

This reasoning assumes that the cosmos has a nature completely independent of human perception, its behavior governed exclusively by such abstract concepts. This view seems correct insofar as it eliminates any anthropic view of the universe, in which humans are only a part of it. However, it does not justify that physical laws and abstract mathematical concepts are the same entity.

In the case of those who maintain that mathematics is an entity invented by humans, the arguments do not usually have a formal structure, and it could be said that in many cases they correspond more to a personal position and sentiment. An exception is the position maintained by biologists and cognitive scientists, whose arguments are based on the creative capacity of the human brain, which would justify mathematics being an entity created by humans.

For them, mathematics does not really differ from natural language, so mathematics would be no more than another language; its conception would be nothing more than the idealization and abstraction of elements of the physical world. However, this approach presents several difficulties when it comes to concluding that mathematics is an entity invented by humans.

On the one hand, it does not provide formal criteria for its demonstration. It also presupposes that the ability to learn is an attribute exclusive to humans. This is a crucial point, which will be addressed in later posts. In addition, natural language is used as a central concept, without taking into account that any interaction, no matter what its nature, is carried out through language, as shown by CT [4], which is a theory of language.

Consequently, it can be concluded that neither current of thought presents conclusive arguments about what the nature of mathematics is. For this reason, it seems necessary to analyze the cause of this from new points of view, since physical reality and mathematics seem intimately linked.

Mathematics as a discovered entity

In the case of the view that considers mathematics the very essence of the cosmos, and therefore an entity discovered by humans, the argument is the equivalence of mathematical models with physical behavior. But for this argument to be conclusive, the Theory of Everything would have to be developed, in which physical entities would be strictly mathematical in nature. This means that reality would be supported by a set of axioms and by the information describing the model, the state and the dynamics of the system.

This implies a dematerialization of physics, something that seems to be happening as the deeper structures of physics are developed. Thus, the particles of the standard model are nothing more than abstract entities with observable properties. This could be the key, and there is a hint in Landauer's principle [6], which establishes an equivalence between information and energy.

But solving the problem by physical means or, more precisely, by contrasting mathematical models with reality presents a fundamental difficulty. In general, mathematical models describe the functionality of a certain context or layer of reality, and they all share a common characteristic: they are irreducible and disconnected from the underlying layers. Therefore, the deepest functional layer would have to be unraveled, which from the point of view of AIT and CT is a non-computable problem.

Mathematics as an invented entity

The current of opinion in favor of mathematics being an entity invented by humans is based on natural language and on the brain’s ability to learn, imagine and create. 

But this argument has two fundamental weaknesses. On the one hand, it does not provide formal arguments to conclusively demonstrate the hypothesis that mathematics is an invented entity. On the other hand, it attributes properties to the human brain that are a general characteristic of the cosmos.

The hippocampus: a paradigmatic example of the discovered-or-invented dilemma

To clarify this last point, let us take as an example the invention of the natural numbers by humans, which is usually used to support this view. Now imagine an animal interacting with the environment: it has to interpret space-time accurately as a basic means of survival. Obviously, the animal must have learned or invented the space-time map, something much more complex than the natural numbers.

Moreover, nature has provided, or invented, the hippocampus [7], a neural structure specialized in acquiring long-term information, whose complex convoluted shape forms a recurrent neural network very suitable for handling the space-time map and for resolving trajectories. And of course this structure is physical and encoded in the genome of higher animals. The question is: is this structure discovered or invented by nature?

Regarding the use of language as an argument, it should be noted that language is the means of interaction in nature at all functional levels. Thus, biology is a language, and the interaction between particles is formally a language, although this point requires a deeper analysis for its justification. Natural language, in particular, is a non-formal language; it is not axiomatic, which makes it inconsistent.

Finally, in relation to the learning capability attributed to the brain, this is a fundamental characteristic of nature, as demonstrated by mathematical models of learning and evidenced in an incipient manner by AI.

Another way of approaching the question about the nature of mathematics is through Wigner's enigma [8], in which he asks about the inexplicable effectiveness of mathematics. But this topic, and the ones opened above, will be dealt with and expanded in later posts.

References

[1] M. Livio, Is God a Mathematician?, New York: Simon & Schuster Paperbacks, 2009.
[2] C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 1948.
[3] P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002v1 [cs.IT], 2008.
[4] M. Sipser, Introduction to the Theory of Computation, Course Technology, 2012.
[5] M. Tegmark, Our Mathematical Universe: My Quest For The Ultimate Nature Of Reality, Knopf Doubleday Publishing Group, 2014.
[6] R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” IBM J. Res. Dev., vol. 5, pp. 183-191, 1961.
[7] S. Jacobson and E. M. Marcus, Neuroanatomy for the Neuroscientist, Springer, 2008.
[8] E. P. Wigner, “The unreasonable effectiveness of mathematics in the natural sciences,” Communications on Pure and Applied Mathematics, vol. 13, no. 1, pp. 1-14, 1960.

Covid-19: Interpretation of data

In view of the expansion of Covid-19 in different countries, and taking as a reference the spreading model presented in the previous post, it is possible to interpret the data in order to address some doubts and contradictions raised in different forums.

But before starting this analysis, it is important to highlight an outstanding feature of the Covid-19 expansion shown by the model. In general, the modeling of infectious processes focuses on the infection rate of individuals, leaving temporal aspects, such as the incubation or latency periods of the pathogens, in the background. This is justified because their influence generally goes unnoticed, and because they introduce difficulties into the analytical study of the models.

However, in the case of Covid-19 its rapid expansion makes the effect of time parameters evident, putting health systems in critical situations and making it difficult to interpret the data that emerge as the pandemic spreads. 

In this sense, the outstanding characteristics of the Covid-19 are:

  • The high capacity of infection.
  • The capacity of infection of individuals in the incubation phase.
  • The capacity of infection of asymptomatic individuals.

This makes the number of possible asymptomatic cases very high, which greatly complicates diagnosis, as a result of the lack of resources caused by the novelty and rapid spread of the virus.

For this reason, the model has been developed taking into account the temporal parameters of the spread of the infection, which requires a numerical model, since the problem is very complex and possibly has no purely analytical solution.

As a result, the model has a distinctive feature compared to conventional models, which is shown in the figure below. 

It consists in the need to distinguish the groups of asymptomatic and symptomatic individuals, since their temporal evolutions are delayed with respect to each other. The same consequently happens with the curves of hospitalized and ICU individuals.

This allows clarifying some aspects linked to the real evolution of the virus. For example, in relation to the declaration of the exceptional measures in Italy and Spain, a substantial improvement in the containment of the pandemic was expected, something that still seems distant. The reason for this behavior is that the containment measures were taken on the basis of the evolution of the curve of symptomatic individuals, ignoring the fact that there was already a very important population of asymptomatic individuals.

As can be seen in the graphs, the measures should have been taken at least three weeks earlier, that is, according to the evolution curve of asymptomatic individuals. But to make this decision correctly, this data should have been available, something that was completely impossible given the lack of a test campaign on the population.

This is supported by the example of China where, although the spread of the virus could not be contained at an early stage, containment measures were taken several weeks earlier on a comparable time scale.

The data from Germany are also very significant, exhibiting a much lower mortality rate than Italy and Spain. Although this raises a question about the capacity of infection in this country, it is actually easy to explain. In Italy and Spain, testing for Covid-19 infection is only beginning, whereas in Germany tests have been carried out for several weeks at a rate of several hundred thousand per week. Consequently, the numbers of individuals diagnosed in Italy and Spain should be revised in the future.

This explains the lower mortality rate despite a large number of infected individuals. It also has a decisive advantage, since early diagnosis allows the isolation of infected individuals, reducing the possibility of infecting others, which ultimately results in a lower mortality rate.

Therefore, a quick conclusion can be drawn, which can be summarized in the following points:

  • Measures to isolate the population are necessary but ineffective when taken at an advanced stage of the pandemic.
  • Early detection of infection is a totally decisive aspect in the containment of the pandemic and, above all, in the reduction of the mortality rate.

A model of the spread of Covid-19

The reason for addressing this issue is twofold. On the one hand, Covid-19 is the most important challenge for humanity at the moment, but on the other hand the process of expansion of the virus is an example of how nature establishes models based on information processing.

The analysis of the dynamics of the virus expansion and its consequences will be based on a model implemented in Python, which those who are interested can download and modify as they see fit in order to analyze different scenarios.

The model

The model is based on a structure of 14 states and 20 parameters, which determine the probabilities and the temporal dynamics of transitions between states. It is important to note that in the model the only vectors for virus spread are the “symptomatic” and “asymptomatic” states. The model also establishes parameters for the mobility of individuals and the rate of infection.

Some simplifications have been made to the model. Thus, it assumes that the geographical distribution of the population is homogeneous, which has contributed to a significant reduction in computational effort. In principle, this may seem to be a major limitation, but we will see that it is not an obstacle to drawing overall conclusions. The following figure represents in a simplified way the state diagram of the model. The conditions that establish the transitions can be consulted in the model.
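
As a deliberately reduced illustration of this kind of dynamics, the sketch below collapses the 14 states to four and uses placeholder rates; none of these values are the fitted parameters of the actual model, which can be consulted in the downloadable code:

```python
# Toy discrete-time compartment model: S -> A (asymptomatic) -> I (symptomatic) -> R.
# Only A and I spread the virus, mirroring the model's propagation vectors.
N = 1_000_000                    # population, homogeneous mixing (assumed)
beta = 0.35                      # daily contacts x infection probability (assumed)
p_sympt = 0.3                    # fraction of infected who develop symptoms (assumed)
t_incub, t_recover = 5.0, 14.0   # mean days spent in A and in I (assumed)

S, A, I, R = N - 1.0, 1.0, 0.0, 0.0
history = []
for day in range(200):
    new_inf = beta * S * (A + I) / N   # asymptomatic and symptomatic both infect
    out_A = A / t_incub                # individuals leaving the asymptomatic state
    out_I = I / t_recover              # symptomatic recoveries
    S -= new_inf
    A += new_inf - out_A
    I += p_sympt * out_A - out_I
    R += (1 - p_sympt) * out_A + out_I
    history.append((day, A, I))

peak_A = max(history, key=lambda r: r[1])[0]
peak_I = max(history, key=lambda r: r[2])[0]
print(f"asymptomatic curve peaks on day {peak_A}, symptomatic on day {peak_I}")
```

Even this crude version shows the symptomatic curve peaking after the asymptomatic one, the delayed evolution discussed below.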

The parameters have been adjusted according to the experience gained from the progression of the virus, so the information is limited and they should be subject to further review. In any case, it seems clear that the virus is highly efficient at infiltrating cells to perform the copying process, so the viral load required for infection seems to be small. This implies a high rate of infection, so it is also assumed that a significant part of the population will be infected.

The scenarios for the spread of the virus can be listed in the following points:

  • Early action measures to confine the spread of the virus.
  • Uncontrolled spread of the virus.
  • Exceptional measures to limit the propagation of the virus.

The first scenario is not going to be analyzed as this is not the case in the current situation. This scenario can be analyzed by modifying the parameters of the model.

Therefore, the scenarios of interest are those of uncontrolled propagation and exceptional measures, as these represent the current state of the pandemic.

The natural evolution

The model dynamics for the case of uncontrolled propagation are shown in the figure below. It can be seen that the most important vectors in the propagation of the virus are asymptomatic individuals, for three fundamental reasons. The first is the broad impact of the virus on the population. The second is determined by the fact that it only produces a symptomatic picture in a limited fraction of the population. The third is directly related to the practical limitations in diagnosing asymptomatic individuals, as a consequence of the novelty and rapid spread of Covid-19.  

For this reason, it seems clear that the extraordinary measures to contain the virus must aim to drastically limit contact between humans. This is surely what has motivated the possible suspension of academic activities, which affects the child and youth population, not because they are a risk group but because they are the most active population in spreading the virus.

The other characteristic of the spreading dynamics is the abrupt temporal growth of those affected by the virus, until it reaches the whole population, then beginning a rapid recovery, but condemning the groups at risk to admission to the Intensive Care Unit (ICU) and probably to death.

This will pose an acute problem for health systems, and an increase in collateral cases can be expected, which could easily surpass the direct cases produced by Covid-19. This makes it advisable to take extraordinary measures; at the same time, however, their effectiveness is in doubt, since the rapid expansion of the virus can lead to late decision-making.

Present situation

This scenario is depicted in the following figures, where quarantine is decreed for a large part of the population, restricting the movement of the propagation vectors. To confirm the above, two scenarios have been modeled. In the first, the decision on extraordinary measures is taken before the curve of diagnosed symptomatic cases begins to grow, which in the figure occurs around day 40 from patient zero. In the second, the decision is taken a few days later, when the curve of diagnosed symptomatic cases is clearly increasing, around day 65 from patient zero.
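
Reusing the constants and update rule of the toy sketch above, these two scenarios can be approximated by cutting the contact rate on the day the measures are decreed; the reduction factor is an assumption:

```python
def simulate(lockdown_day, factor=0.25, days=250):
    """Rerun the toy model, scaling beta by `factor` once measures are decreed."""
    S, A, I, R = N - 1.0, 1.0, 0.0, 0.0
    peak_day, peak_I = 0, 0.0
    for day in range(days):
        rate = beta * factor if day >= lockdown_day else beta
        new_inf = rate * S * (A + I) / N
        out_A, out_I = A / t_incub, I / t_recover
        S -= new_inf
        A += new_inf - out_A
        I += p_sympt * out_A - out_I
        R += (1 - p_sympt) * out_A + out_I
        if I > peak_I:
            peak_day, peak_I = day, I
    return peak_day, int(peak_I)

print("measures on day 40:", simulate(40))   # early decision
print("measures on day 65:", simulate(65))   # late decision
```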

These two scenarios clearly indicate that it is more than possible that measures were taken late and that the pandemic is following its natural course, due to the delay between the curves of infected and symptomatic patients. Consequently, it seems that the containment measures will not be as effective as expected and, considering that economic factors will possibly have very profound long- and medium-term consequences for the well-being of society, alternative solutions should be considered.

It is interesting to note how the declaration of special measures modifies the temporal behavior of the pandemic. But once they have not been taken at an early stage of the virus's emergence, the consequences are profound.

What can be expected

Obviously, the most appropriate solution would be to find remedies to cure the disease, which is being actively worked on, but their development period may exceed the time frame established by the dynamics of the pandemic.

However, since the groups at risk, the impact and the magnitude of these are known, a possible alternative solution would be:

  • Quarantine these groups, keeping them totally isolated from the virus and implementing care services to make this isolation effective until the pandemic subsides, or effective treatment is found.
  • Implement hospitals dedicated exclusively to the treatment of Covid-19.
  • For the rest of the population not included in the risk groups, continue with normal activity, allowing the pandemic to spread (something that already seems inevitable), while taking strict prophylactic and safety measures.

This strategy has undeniable advantages. Firstly, it would reduce the pressure on the health system, preventing the collapse of its normal activity and leading to a faster recovery. Secondly, it would reduce the treasury and cash-management problems of states, which could lead to an unprecedented crisis whose consequences would certainly be more serious than the pandemic itself.

Finally, an important aspect of the model remains to be analyzed: its limitation in modeling a non-homogeneous distribution of the population. This is easy to resolve if we consider that the model works correctly for cities. Thus, in order to model a wider geographical area, one only has to model each city or community separately, with a time lag, as the actual spread of the pandemic is showing.

One aspect remains to be determined, namely the duration of the extraordinary measures. If the viral load needed to infect an individual is small, it is possible that the remnants of the virus at the end of the quarantine period may reactivate the disease in those individuals who have not yet been exposed to it or have not been immunized. This is especially important considering that cured people may remain infectious for another 15 days.

A macroscopic view of the Schrödinger cat

From the analysis carried out in the previous post, it can be concluded that, in general, it is not possible to identify the macroscopic states of a complex system with its quantum states. Thus, the macroscopic states corresponding to the dead cat (DC) or to the living cat (AC) cannot be considered quantum states, since according to quantum theory the system could then be expressed as a superposition of these states. Consequently, as has been justified, for macroscopic systems it is not possible to define quantum states such as |DC⟩ and |AC⟩. On the other hand, the states (DC) and (AC) are an observable reality, indicating that the system presents two realities: a quantum reality and an emergent reality that can be defined as classical reality.

Quantum reality will be defined by its wave function, formed by the superposition of the quantum subsystems that make up the system, and it will evolve according to the interaction between all the quantum elements of the system and the environment. For simplicity, if the CAT system is considered isolated from the environment, the succession of its quantum states can be expressed as:

            |CAT[n]⟩ = |SC1[n]⟩ ⊗ |SC2[n]⟩ ⊗ … ⊗ |SCi[n]⟩ ⊗ … ⊗ |SCk[n][n]⟩

This expression takes into account that the number k of non-entangled quantum subsystems also varies with time, so it is a function of the sequence index n, time being considered a discrete variable.
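
A minimal numerical sketch helps to visualize the size of this description; here each subsystem is reduced to a generic two-level system, which is of course an assumption made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_subsystem(dim=2):
    """A normalized state vector for one non-entangled subsystem |SCi[n]>."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

k = 10
cat = random_subsystem()
for _ in range(k - 1):
    cat = np.kron(cat, random_subsystem())   # |CAT> = |SC1> ⊗ ... ⊗ |SCk>

print(len(cat))   # 2**10 = 1024 complex amplitudes for just 10 subsystems
```

For the enormous number of quantum entities in a macroscopic object, the dimension becomes astronomically large, which anticipates the first point of the list below.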

The observable classical reality can be described by the state of the system which, if defined for the object “cat” as (CAT[n]), leads from the previous reasoning to the conclusion that (CAT[n]) ≢ |CAT[n]⟩. In other words, the quantum and classical states of a complex object are not equivalent.

The question that remains to be justified is the irreducibility of the observable classical state (CAT) from the underlying quantum reality, represented by the quantum state |CAT⟩. This can be done if one considers that the functional relationship between the states |CAT⟩ and (CAT) is extraordinarily complex, being subject to the mathematical features on which complex systems are based, such as:

  • The complexity of the space of quantum states (Hilbert space).
  • The random behavior of observable information emerging from quantum reality.
  • The enormous number of quantum entities involved in a macroscopic system.
  • The non-linearity of the laws of classical physics.

Based on Kolmogorov complexity [1], it is possible to prove that the behavior of systems with these characteristics does not support, in most cases, an analytical solution that determines the evolution of the system from its initial state. This also implies that, in practice, the process of evolution of a complex object can only be represented by itself, both on a quantum and a classical level.

According to algorithmic information theory [1], this process is equivalent to a mathematical object composed of an ordered set of bits processed according to axiomatic rules, in such a way that the information of the object is defined by its Kolmogorov complexity, which remains constant over time as long as the process is an isolated system. It should be pointed out that the Kolmogorov complexity makes it possible to determine the information contained in an object without previously having an alphabet for the determination of its entropy, as is the case in information theory [2], although both concepts coincide in the limit.
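
Kolmogorov complexity is not computable, but the compressed size of an object gives a computable upper bound on it. As a crude illustration (using zlib as a stand-in compressor), a highly ordered byte sequence admits a much shorter description than a random one of the same length:

```python
import os
import zlib

regular = b"01" * 50_000         # highly ordered sequence: short description
random_ = os.urandom(100_000)    # incompressible with high probability

for name, data in (("regular", regular), ("random", random_)):
    k_upper = len(zlib.compress(data, 9))   # upper bound on K(data), in bytes
    print(f"{name}: {len(data)} bytes -> {k_upper} bytes compressed")
```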

From this point of view, two fundamental questions arise. The first is the evolution of the entropy of the system and the second is the apparent loss of information in the observation process, through which classical reality emerges from quantum reality. This opens a possible line of analysis that will be addressed later.

But returning to the analysis of the relationship between classical and quantum states, it is possible to gain an intuitive view of how the state (CAT) ends up being disconnected from the state |CAT⟩ by analyzing the system qualitatively.

First, it should be noted that virtually 100% of the quantum information contained in the state |CAT⟩ remains hidden within the elementary particles that make up the system. This is a consequence of the fact that the physical-chemical structure [3] of molecules is determined exclusively by the electrons that support their covalent bonds. Next, it must be considered that molecular interaction, on which molecular biology is based, is performed by van der Waals forces and hydrogen bonds, creating a new level of functional disconnection from the underlying layer.

Supported by this functional level, a new functional structure appears, formed by cellular biology [4], from which living organisms arise, from unicellular beings to complex beings formed by multicellular organs. It is in this layer that the concept of living being emerges, establishing a new border between the strictly physical and the concept of perception. At this level nervous tissue emerges [5], allowing complex interaction between individuals, and on it new structures and concepts are sustained, such as consciousness, culture and social organization, which are not reserved exclusively to human beings, although it is in humans that this functionality is most complex.

But to the complexity of the functional layers must be added the non-linearity of the laws to which they are subject, a necessary condition for deterministic chaos [6] which, as previously justified, is grounded in algorithmic information theory [1]. This means that any variation in the initial conditions will produce a different dynamic, so that any emulation will end up diverging from the original, this behavior being the justification of free will. In this sense, Heisenberg's uncertainty principle [7] prevents knowing exactly the initial conditions of the classical system in any of the functional layers described above. Consequently, all of them will have an irreducible nature and an unpredictable dynamic, determined exclusively by the system itself.

At this point, in view of this complex functional structure, we must ask what the state (CAT) refers to, since in this context the existence of a classical state has been implicitly assumed. The complex functional structure of the object “cat” allows descriptions at different levels. Thus, the cat object can be described in different ways:

  • As atoms and molecules subject to the laws of physical chemistry.
  • As molecules that interact according to molecular biology.
  • As complex sets of molecules that give rise to cell biology.
  • As sets of cells to form organs and living organisms.
  • As structures of information processing, that give rise to the mechanisms of perception and interaction with the environment that allow the development of individual and social behavior.

As a result, each of these functional layers can be expressed by means of a certain state, so, strictly speaking, the definition of a unique macroscopic state (CAT) is not correct. Each of these states will describe the object according to different functional rules, so it is worth asking what relationship exists between these descriptions and what their complexity is. Analogously to the arguments used to show that the states |CAT⟩ and (CAT) are not equivalent and are uncorrelated with each other, the states that describe the “cat” object at different functional levels will not be equivalent and may, to some extent, be disconnected from each other.

This behavior is a proof of how reality is structured in irreducible functional layers, in such a way that each one of the layers can be modeled independently and irreducibly, by means of an ordered set of bits processed according to axiomatic rules.

References

[1] P. Grünwald and P. Vitányi, “Shannon Information and Kolmogorov Complexity,” arXiv:cs/0410002v1 [cs.IT], 2008.
[2] C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, pp. 379-423, 1948.
[3] P. Atkins and J. de Paula, Physical Chemistry, Oxford University Press, 2006.
[4] A. Bray, J. Hopkin, R. Lewis and W. Roberts, Essential Cell Biology, Garland Science, 2014.
[5] D. Purves and G. J. Augustine, Neuroscience, Oxford University Press, 2018.
[6] J. Gleick, Chaos: Making a New Science, Penguin Books, 1988.
[7] W. Heisenberg, “The Actual Content of Quantum Theoretical Kinematics and Mechanics,” Zeitschrift für Physik, vol. 43, no. 3-4, pp. 172-198, 1927. Translation: NASA TM-77379.

Why does the rainbow have 7 colors?

Published on OPENMIND August 8, 2018

Color as a physical concept

Visible light, heat, radio waves and other types of radiation all have the same physical nature: they are constituted by a flow of particles called photons. The photon, or “light quantum,” was proposed by Einstein, for which he was awarded the Nobel Prize in 1921, and is one of the elementary particles of the standard model, belonging to the boson family. The fundamental characteristic of a photon is its capacity to transfer energy in quantized form, determined by its frequency according to the expression E = h∙ν, where h is the Planck constant and ν the frequency of the photon.

Electromagnetic spectrum

Thus, we can find photons ranging from very low frequencies, located in the band of radio waves, to very high energies, called gamma rays, forming a continuous range of frequencies that constitutes the electromagnetic spectrum, as shown in the figure. Since the photon can be modeled as a sinusoid traveling at the speed of light c, the length of a complete cycle is called the photon wavelength λ, so the photon can be characterized either by its frequency or by its wavelength, since λ = c/ν. It is common to use the term color as a synonym for frequency, since the color of light perceived by humans is a function of frequency. However, as we are going to see, color is not strictly physical but a consequence of the process of measuring and interpreting information, which makes it a reality emerging from another, underlying reality, sustained by the physical reality of electromagnetic radiation.

Structure of an electromagnetic wave
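
As a worked example of E = h∙ν and λ = c/ν for a deep-red photon (constants rounded, values approximate):

```python
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
lam = 700e-9           # deep-red photon, 700 nm

nu = c / lam           # frequency from lambda = c / nu
E = h * nu             # energy carried by one photon
print(f"nu = {nu:.2e} Hz, E = {E:.2e} J = {E / 1.602e-19:.2f} eV")
# -> nu = 4.28e+14 Hz, E = 2.84e-19 J = 1.77 eV
```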

But before addressing this issue, it should be noted that in order to detect photons efficiently it is necessary to have a detector, called an antenna, whose size must be similar to the wavelength of the photons.

Color perception by humans

The human eye is sensitive to wavelengths ranging from deep red (700 nm; 1 nanometer = 10⁻⁹ meters) to violet (400 nm). This requires receiving antennas on the order of hundreds of nanometers in size! But for nature this is not a big problem, as complex molecules can easily be this size. In fact, for color vision the human eye is endowed with three types of photoreceptor proteins, which produce a response as shown in the following figure.

Response of photoreceptor cells of the human retina

Each of these types configures a type of photoreceptor cell in the retina which, due to its morphology, is called a cone. The photoreceptor proteins are located in the cell membrane, so that when they absorb a photon they change shape, opening channels in the cell membrane that generate a flow of ions. After a complex biochemical process, a flow of nerve impulses is produced, which is preprocessed by several layers of neurons in the retina and finally reaches the visual cortex through the optic nerve, where the information is processed.

But in this context the key point is that the retinal cells do not measure the wavelength of the photons of the stimulus. What they do instead is convert a stimulus of a certain wavelength into three parameters, called L, M, S, which are the responses of each of the types of photoreceptor cells to the stimulus. This has very interesting implications that need to be analyzed, since it allows us to explain aspects such as:

  • The reason why the rainbow has 7 colors.
  • The possibility of synthesizing the color by means of additive and subtractive mixing.
  • The existence of non-physical colors, such as white and magenta.
  • The existence of different ways of interpreting color according to the species.

To understand this, let us imagine that we are given the response of a measurement system that relates L, M, S to wavelength, and that we are asked to establish a correlation between them. The first thing we notice is that there are 7 distinct zones along the wavelength axis, 3 ridges and 4 valleys. 7 patterns! This explains why we perceive the rainbow as composed of 7 colors, an emergent reality resulting from information processing that transcends physical reality.
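
This counting exercise can be reproduced with a toy computation, in which the three photoreceptor responses are replaced by Gaussian curves with invented centers and widths; counting the ridges of the combined profile, and the gaps between and outside them, yields the 3 + 4 = 7 zones:

```python
import numpy as np

wl = np.linspace(400, 700, 3001)                     # visible band, in nm
gauss = lambda mu, s: np.exp(-0.5 * ((wl - mu) / s) ** 2)
S, M, L = gauss(440, 25), gauss(535, 35), gauss(565, 40)   # toy cone responses

env = np.maximum.reduce([S, M, L])                   # combined response profile
d = np.diff(env)
ridges = int(np.sum((d[:-1] > 0) & (d[1:] <= 0)))    # local maxima of the profile
valleys = ridges + 1                                 # gaps between and outside ridges
print(f"{ridges} ridges + {valleys} valleys = {ridges + valleys} zones")  # 3 + 4 = 7
```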

But what answer would a bird give us if we asked it about the number of colors of the rainbow? Possibly, though improbably, it would tell us nine! This is because birds have a fourth type of photoreceptor positioned in the ultraviolet, so their perception system will establish 9 regions in the light perception band. And this leads us to ask: what will be the chromatic range perceived by our hypothetical bird, or by species that have only a single type of photoreceptor? The result is a simple case of combinatorics!

On the other hand, the existence of three types of photoreceptors in the human retina makes it possible to synthesize the chromatic range in a relatively precise way by the additive combination of three colors, red, green and blue, as is done in video screens. In this way, it is possible to produce at each point of the retina an L, M, S response similar to that produced by a real stimulus, by means of the weighted application of a mixture of photons of red, green and blue wavelengths.
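
Continuing with the toy curves of the previous sketch, additive synthesis reduces to a 3×3 linear system: the columns hold the L, M, S responses of three monochromatic primaries (the wavelengths chosen are assumptions), and solving the system gives the weights of red, green and blue that mimic a real stimulus:

```python
import numpy as np

def lms(wl_nm):
    """Toy L, M, S response to a monochromatic stimulus (assumed Gaussians)."""
    centers, widths = (565, 535, 440), (40, 35, 25)
    return np.array([np.exp(-0.5 * ((wl_nm - c) / s) ** 2)
                     for c, s in zip(centers, widths)])

primaries = np.column_stack([lms(w) for w in (630, 532, 465)])  # R, G, B columns
target = lms(580)                         # a monochromatic yellowish stimulus
weights = np.linalg.solve(primaries, target)
print(weights)    # intensities of R, G, B that reproduce the target's L, M, S
```

A negative weight in this system would indicate a stimulus outside the gamut of the chosen primaries, which is why screens cannot reproduce every visible color.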

Similarly, it is possible to synthesize color by the subtractive or pigmentary mixing of three colors, magenta, cyan and yellow, as in oil paint or printers. And this is where the virtuality of color is clearly shown: there are no magenta photons, since this stimulus is a mixture of blue and red photons. The same happens with white, as there are no individual photons that produce this stimulus; white is the perception of a mixture of photons distributed across the visible band, and in particular of a mixture of red, green and blue photons.

In short, the perception of color is a clear example of how reality emerges as a result of information processing. Thus, we can see how a given interpretation of the physical information of the visible electromagnetic spectrum produces an emergent reality, based on a much more complex underlying reality.

In this sense, we could ask ourselves what an android with a precise wavelength measurement system would think of the images we synthesize in painting or on video screens. It would surely answer that they do not correspond to the original images, something that for us is practically imperceptible. And this connects with a subject that may seem unrelated: the concept of beauty and aesthetics. The truth is that when we are not able to establish patterns or categories in the information we perceive, we interpret it as noise or disorder. Something unpleasant or unsightly!

Biology as an axiomatic process

The replication mechanisms of living beings can be compared with the self-replication of automata in the context of computability theory. In particular, DNA replication, analyzed from the perspective of the recursion theorem, indicates that its replication structure goes beyond biology and the quantum mechanisms that support it, as analyzed in the article Biology as an Axiomatic Process.

Physical chemistry establishes the principles by which atoms interact with each other to form molecules. In the inorganic world the resulting molecules are relatively simple and do not allow a complex functional structure to be established. In the organic world, on the other hand, molecules can be made up of thousands or even millions of atoms and have complex functionality. Particularly noteworthy is what is known as molecular recognition, through which molecules interact with each other selectively, and which is the basis of biology.

Molecular recognition plays a fundamental role in the structure of DNA, in the translation of the genetic code of DNA into proteins and in the biochemical interaction of proteins, which ultimately constitute the foundation of living beings.

The detailed study of these molecular interactions makes it possible to describe the functionality of the processes, in such a way that formal models can be established, to such an extent that they can be used as a computing technology, as in the case of DNA-based computing.

From this perspective, we can ask whether information processing is something deeper and whether it is in fact the foundation of biology itself, in accordance with what is established by the principle of reality.

For this purpose, this section aims to analyze the basic processes on which biology rests, in order to establish a link with axiomatic processing and thus investigate the nature of biological processes. It is not necessary to describe in detail the biological mechanisms covered in the literature; we will simply describe their functionality, so that they can be identified with the theoretical foundations of information processing. To this end, we will outline the mechanisms on which DNA replication and protein synthesis are based.

DNA and RNA molecules are polymers of nucleotides built on deoxyribose and ribose sugars, respectively, linked by phosphate groups. To this sugar-phosphate backbone, one of four possible nitrogenous bases is attached at each nucleotide. There are five different bases: adenine (A), guanine (G), cytosine (C), thymine (T) and uracil (U). In the case of DNA, the bases covalently bound to the nucleotides are A, G, C and T, whereas in RNA they are A, G, C and U. As a consequence of the shape of their electronic clouds, these molecules are structured in a helix, with the bases fitting together in a precise and compact way.

The helix structure allows the bases of two different strands to be bound together by hydrogen bonds, forming the pairs A-T and G-C in the case of DNA, and A-U and G-C in the case of RNA, as shown in the following figure.

Base pairing of the nitrogenous bases in DNA

As a result, the DNA molecule is formed by a double helix, in which two nucleotide polymer chains wind around each other, held together by the hydrogen bonds between their bases. Each strand of the DNA molecule thus contains the same genetic information, one strand being the negative of the other.

Double helix structure of DNA molecule
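As an illustration of this complementarity, a minimal Python sketch (ignoring the antiparallel orientation of the strands and the sugar-phosphate backbone): applying the pairing rule twice recovers the original strand, which is the precise sense in which one strand is the negative of the other.

    def complement(strand: str) -> str:
        # Base-pairing rule for DNA: A<->T, G<->C.
        return strand.translate(str.maketrans("ATGC", "TACG"))

    s = "ATGGCATTC"
    print(complement(s))                    # TACCGTAAG
    print(complement(complement(s)) == s)   # True: the negative of the negative is the original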

The genetic information of an organism, called its genome, is not contained in a single DNA molecule but is organized into chromosomes, which are made up of DNA strands bound together by proteins. In the case of humans, the genome comprises 46 chromosomes, and the total number of bases in the DNA molecules that compose it is about 3×10⁹. Since each base can be encoded by 2 bits, the human genome, considered as an object of information, is equivalent to 6×10⁹ bits.
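A quick back-of-the-envelope check of this figure (a sketch; the base count is the usual approximation):

    import math

    bases = 3e9                    # approximate number of bases in the human genome
    bits_per_base = math.log2(4)   # 4-symbol alphabet {A, C, G, T} -> 2 bits
    total_bits = bases * bits_per_base
    print(f"{total_bits:.1e} bits (~{total_bits / 8 / 1e6:.0f} MB)")
    # 6.0e+09 bits (~750 MB)

That is, the whole human genome fits, as raw information, in less than a gigabyte.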

The information contained in the genes is the basis for the synthesis of proteins, which are responsible for executing and controlling the biochemistry of living beings. Proteins are formed by the bonding of amino acids through covalent bonds, according to the sequences of bases contained in the DNA. There are 20 amino acids and, since each base encodes 2 bits, 3 bases (6 bits, 64 combinations) are needed to encode each of them. This means that there is some redundancy in the assignment of base sequences to amino acids, in addition to control codes for the synthesis process (Stop), as shown in the following table.

Translation of base triplets (codons) into amino acids
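This redundancy can be checked directly. The sketch below builds the standard genetic code from its conventional tabular ordering (rows and columns over T, C, A, G, with the third base cycling fastest), encoded as a 64-character string, and counts the symbols:

    from itertools import product

    BASES = "TCAG"
    # Standard genetic code; '*' marks the Stop codons.
    AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

    CODON_TABLE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AMINO)}

    print(len(CODON_TABLE))                        # 64 codons (4^3)
    print(len(set(CODON_TABLE.values()) - {"*"}))  # 20 amino acids
    print([c for c, a in CODON_TABLE.items() if a == "*"])  # ['TAA', 'TAG', 'TGA']

Sixty-four codons for 20 amino acids plus the Stop signal: the 43 surplus combinations are the redundancy of the code.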

However, protein synthesis is not done directly from DNA: it requires the intermediation of RNA and involves two different types of RNA molecule, messenger RNA (mRNA) and transfer RNA (tRNA). The first step is the synthesis of mRNA from DNA. This process is called transcription, whereby the information corresponding to a gene is copied into the mRNA molecule through a process of recognition between the bases, carried out by the hydrogen bonds, as shown in the following figure.

DNA transcription

Once the mRNA molecule has been synthesized, the tRNA molecule is responsible for mediating between mRNA and amino acids to synthesize proteins, for which it has two specific molecular mechanisms. At one end, tRNA carries a sequence of three bases called the anticodon. At the opposite end, tRNA binds to a specific amino acid, according to the table translating base sequences into amino acids. In this way, tRNA is able to translate mRNA into a protein, as shown in the figure below.

Protein synthesis (mRNA translation)
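Functionally, transcription and translation amount to a two-stage transformation of sentences. The following toy sketch (a hypothetical gene and a deliberately partial codon table; real transcription reads the complementary template strand, which is omitted here for simplicity) shows the pipeline:

    # Partial codon table, just enough for the toy gene below.
    MINI_CODONS = {"AUG": "Met", "GGC": "Gly", "UUU": "Phe", "UAA": "Stop"}

    def transcribe(coding_strand: str) -> str:
        # Simplified transcription: copy the coding strand, swapping T for U.
        return coding_strand.replace("T", "U")

    def translate(mrna: str) -> list[str]:
        protein = []
        for i in range(0, len(mrna) - 2, 3):   # read the sentence codon by codon
            aa = MINI_CODONS[mrna[i:i + 3]]
            if aa == "Stop":                   # control code: end of synthesis
                break
            protein.append(aa)
        return protein

    gene = "ATGGGCTTTTAA"                      # hypothetical toy gene
    mrna = transcribe(gene)                    # 'AUGGGCUUUUAA'
    print(translate(mrna))                     # ['Met', 'Gly', 'Phe']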

But the most complex process is undoubtedly DNA replication, by which each molecule produces two identical replicas. Replication is performed by unwinding the two strands of the molecule and assembling complementary nucleotides on each of them, in a way similar to that shown for mRNA synthesis. DNA replication is controlled by enzymatic processes supported by proteins. Without going into detail, and in order to show its complexity, the table below lists the proteins involved in the replication process and their roles.

The role of proteins in the DNA replication process
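Abstracting away all of this enzymatic machinery, the semiconservative logic of replication, where each strand serves as a template for a new complementary strand and both resulting molecules are identical to the original, can be sketched as follows (a toy string model):

    def complement(strand: str) -> str:
        return strand.translate(str.maketrans("ATGC", "TACG"))

    def replicate(molecule: tuple[str, str]) -> list[tuple[str, str]]:
        # Each strand keeps itself and gains a newly synthesized complementary strand.
        s1, s2 = molecule
        return [(s1, complement(s1)), (complement(s2), s2)]

    dna = ("ATGGCA", complement("ATGGCA"))
    copy1, copy2 = replicate(dna)
    print(copy1 == dna, copy2 == dna)          # True True: two identical replicas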

The processes described above constitute what is known as the central dogma of molecular biology and are usually represented schematically as shown in the following figure, which also depicts the reverse transcription that occurs in retroviruses, whereby a DNA molecule is synthesized from RNA.

Central dogma of molecular biology

The biological process from the perspective of computability theory

Molecular processes supported by DNA, RNA and proteins can be considered, from an abstract point of view, as information processes, in which input sentences of a language are processed to produce new output sentences. Thus, the following languages can be identified (a minimal sketch follows the list):

  • DNA molecule. Sentence consisting of a sequence of characters from a 4-symbol alphabet.
  • RNA molecule (protein synthesis). Sentence whose codons map onto a 21-symbol alphabet (20 amino acids plus the Stop code).
  • RNA molecule (reverse transcription). Sentence consisting of a sequence of characters from a 4-symbol alphabet.
  • Protein molecule. Sentence consisting of a sequence of characters from a 20-symbol alphabet.
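In formal terms, each of these languages is simply a set of finite strings over its alphabet. A minimal sketch, assuming the conventional one-letter amino acid codes and '*' for the Stop symbol:

    DNA_ALPHABET = set("ACGT")                       # 4 symbols
    RNA_ALPHABET = set("ACGU")                       # 4 symbols
    PROTEIN_ALPHABET = set("ACDEFGHIKLMNPQRSTVWY")   # 20 amino acids
    CODON_MEANINGS = PROTEIN_ALPHABET | {"*"}        # 20 amino acids + Stop = 21 symbols

    def is_sentence(s: str, alphabet: set) -> bool:
        # A string is a sentence of the language iff all its characters belong to the alphabet.
        return set(s) <= alphabet

    print(is_sentence("ATGGCATTC", DNA_ALPHABET))    # True
    print(is_sentence("AUGGCAUUC", DNA_ALPHABET))    # False: U is not a DNA symbol
    print(len(PROTEIN_ALPHABET), len(CODON_MEANINGS))  # 20 21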

This information is processed by the machinery established by the physicochemical properties of the control molecules. To better understand this functional structure, it is convenient to redraw the scheme of the central dogma, representing the processes involved and the information that flows between them, as shown in the following block diagram.

Functional structure of DNA replication

This structure highlights the flow of information between processes, in the form of DNA and RNA sentences, where the functional blocks of information processing are the following:

  • PDNA. Replication process. Its functionality is determined by the proteins involved in DNA synthesis, producing two replicas from a single DNA molecule.
  • PRNA. Transcription process. It synthesizes an RNA molecule from a gene encoded in the DNA.
  • PProt. Translation process. It synthesizes a protein from an RNA molecule.

This structure clearly shows how information processing emerges from biological processes, something that seems ubiquitous in natural systems and is what makes the implementation of computer systems possible. In all cases this capacity is ultimately supported by quantum physics; in the particular case of biology, it arises from the physicochemical properties of molecules, which are themselves determined by quantum physics. Information processing is therefore something that emerges from an underlying reality and, ultimately, from quantum physics, at least as far as current knowledge reaches.

This would mean that, although there is a strong link between reality and information, information is simply an emergent product of reality. But biology provides a clue to the intimate relationship between reality and information, which in the end appear as indistinguishable concepts. If we look at the DNA replication process, we see that DNA is produced in several stages of processing:

DNA → RNA → Proteins → DNA.

We could consider this to be a specific feature of the biological process. However, computability theory indicates that the replication process is subject to logical rules deeper than the physical processes that support it: the recursion theorem determines that the replication of information requires the intervention of at least two independent processes.
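A classic computational illustration of this requirement is a quine, a program that prints its own source code. Exactly as the recursion theorem prescribes, it is built from two cooperating parts: a passive description (the string a, playing the role of the genome) and an active process that reconstructs the whole from that description (the role of the replication machinery). A minimal Python example; comments aside, the two code lines print an exact copy of themselves:

    # Part A: a passive description of the program.
    a = 'a = %r\nprint(a %% a)'
    # Part B: an active process that rebuilds the whole program from the description.
    print(a % a)

Neither part can reproduce the program on its own: the description is inert without the process, and the process is empty without the description, which is precisely the two-process structure that DNA replication also exhibits.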

This shows that DNA replication is subject to abstract rules that must be satisfied not only by biology but by every natural process, so the physical foundations that support biological processes must verify this requirement. Consequently, information processing is essential to what we understand by reality.