Natural language: A paradigm of axiomatic processing

The Theory of Computation (TC) aims to establish computational models and to determine the limits of what is computable, as well as the complexity of a problem when it is computable. The formal models established by TC are based on abstract systems ranging from simple ones, such as finite automata, to the general model of computation embodied by the Turing Machine (TM).

Formally, the concept of an algorithm is based on the TM, so that each possible machine performs a specific function, which is what we call an algorithm. TC shows that it is possible to build an idealized machine, the Universal Turing Machine (UTM), capable of executing every computable algorithm. Commercial computers are equivalent to a UTM, with the difference that their memory and running time are limited, whereas in the UTM these resources are unbounded.
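
To make the TM model more tangible, here is a minimal sketch of a single-tape Turing machine simulator in Python. The transition table, the state names and the example machine (which accepts bit strings containing an even number of 1s) are illustrative choices, not part of any particular formalization.

```python
# Minimal single-tape Turing machine simulator (illustrative sketch).
# A machine is a transition table: (state, symbol) -> (new_state, symbol_to_write, move).

def run_tm(transitions, tape, start="q0", accept="halt_accept", reject="halt_reject"):
    tape = dict(enumerate(tape))        # sparse tape; blanks are "_"
    state, head = start, 0
    while state not in (accept, reject):
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state == accept              # True means the input is "accepted"

# Example machine: parity of 1s. q0 = even so far, q1 = odd so far.
PARITY = {
    ("q0", "0"): ("q0", "0", "R"), ("q0", "1"): ("q1", "1", "R"),
    ("q1", "0"): ("q1", "0", "R"), ("q1", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("halt_accept", "_", "R"),
    ("q1", "_"): ("halt_reject", "_", "R"),
}

print(run_tm(PARITY, "1011"))   # False: three 1s (odd)
print(run_tm(PARITY, "1001"))   # True: two 1s (even)
```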

But the question we can ask is: What does this have to do with language? The answer is simple. In TC, the language L(TM) is defined as the set of bit sequences that a given TM “accepts”, where “accept” means that the TM analyzes the input sequence and reaches the Halt state. Consequently, a language is the set of mathematical objects accepted by a given TM.
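
Under this definition, L(TM) is simply the set of all input sequences that the machine accepts. As a small sketch, assuming some acceptance predicate standing in for a TM (here a toy lambda, purely for illustration), we can enumerate a finite slice of that language:

```python
from itertools import product

def language_up_to(accepts, max_len):
    """Finite slice of L(TM): all bit strings of length <= max_len that are accepted."""
    result = []
    for n in range(max_len + 1):
        for bits in product("01", repeat=n):
            w = "".join(bits)
            if accepts(w):
                result.append(w)
    return result

# A toy predicate standing in for a TM (even number of 1s):
print(language_up_to(lambda w: w.count("1") % 2 == 0, 3))
# ['', '0', '00', '11', '000', '011', '101', '110']
```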

Without going into details that can be found in the specialized literature, TC classifies languages into two basic types, as shown in the figure. A language is Turing-decidable (DEC) when some TM accepts the sequences belonging to the language and rejects the rest, reaching the Halt state in both cases. In contrast, a language is Turing-recognizable (RE) if it is the language of some TM. This means that, for languages belonging to RE but not to DEC, the TM may never reach the Halt state when the input sequence does not belong to the language.
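
The distinction can be sketched in code. The decider below always halts with a yes/no answer, while the recognizer halts only on members of its language; the Collatz-based example is a hypothetical illustration, since whether that language is decidable is an open question.

```python
def decider_even_ones(w: str) -> bool:
    """A decider: always halts, answering yes or no (DEC)."""
    return w.count("1") % 2 == 0

def recognizer_collatz(n: int) -> bool:
    """A recognizer only: accepts n if its Collatz sequence reaches 1.
    If some n never reached 1 (nobody knows), this loop would never halt,
    so the function recognizes the language but is not known to decide it."""
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True

print(decider_even_ones("1101"))   # False, and it always halts
print(recognizer_collatz(27))      # True (after a long but finite sequence); there is no 'reject' path
```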

It is necessary to emphasize that there are sets of sequences that are not recognized by any TM. Therefore, if the formal definition of language is taken into account, they should not be considered languages at all, although they are generally referred to as non-RE languages. It is important to note that this concept is closely related to Gödel’s incompleteness theorems. They correspond to the set of undecidable or unsolvable problems and, as the counting argument below shows, their cardinality exceeds that of the natural numbers.
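
For completeness, a standard counting argument makes the cardinality claim precise: there are only countably many TMs, but uncountably many languages, so most languages are necessarily non-RE.

```latex
% Countably many TMs vs. uncountably many languages (standard argument).
\begin{align*}
&\text{Each TM has a finite description, so the set of TMs is countable: }
  |\{\, M : M \text{ is a TM} \,\}| = \aleph_0 .\\
&\text{A language is any subset of } \{0,1\}^{*}, \text{ and } \{0,1\}^{*}
  \text{ is countable, so } |\mathcal{P}(\{0,1\}^{*})| = 2^{\aleph_0} > \aleph_0 .\\
&\text{Since each TM recognizes exactly one language, at most } \aleph_0
  \text{ languages are RE; the remaining } 2^{\aleph_0} \text{ are non-RE.}
\end{align*}
```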

Within DEC languages, two types can be identified: regular languages and context-free languages (CFL). Regular languages are composed of sequences that a finite automaton can decide directly, so they do not have a recursive grammatical structure. Examples are the languages of the automata we handle every day, such as elevators, device controls, etc. CFLs are those that have a formal structure (grammar) in which language elements can be nested recursively. In general, we can treat programming languages, such as Java or C++, as CFLs. This is not strictly true, but it will simplify the exposition of certain concepts.
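
The contrast can be sketched with two standard textbook languages (chosen here for illustration, not taken from the figure): a regular language handled by a finite automaton, and balanced parentheses, a typical context-free language whose recognition needs a counter or stack.

```python
def ends_in_01(w: str) -> bool:
    """Regular language: bit strings ending in '01', decided by a 3-state finite automaton."""
    state = "a"                      # a: start, b: just saw '0', c: just saw '01'
    table = {("a", "0"): "b", ("a", "1"): "a",
             ("b", "0"): "b", ("b", "1"): "c",
             ("c", "0"): "b", ("c", "1"): "a"}
    for ch in w:
        state = table[(state, ch)]
    return state == "c"

def balanced(w: str) -> bool:
    """Context-free language: balanced parentheses. A counter tracks nesting depth,
    something no finite automaton can do for arbitrarily deep nesting."""
    depth = 0
    for ch in w:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False
    return depth == 0

print(ends_in_01("11001"))   # True
print(balanced("(()(()))"))  # True
print(balanced("(()"))       # False
```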

But the question is: What does this have to do with natural language? The answer is easy again. Natural language is, in principle, a Turing-decidable language, and the proof is trivial. Perhaps a few decades ago this was not so clear, but nowadays information technology shows it to us plainly, without the need for theoretical knowledge. On the one hand, natural language is a sequence of bits, since both spoken and written language are coded as bit sequences in audio and text files, respectively. On the other hand, humans do not loop when we receive a message, at least not permanently ;-).

However, it can be argued that we do not reach the Halt state either. In this context, though, halting does not mean that we literally end our existence, although there are messages that kill! It means that the processing of the information concludes and that, as a result, we can make a decision and tackle a new task.

Therefore, from an operational or practical point of view, natural language is Turing-decidable. However, arguments can be found that conflict with this and that materialize in the form of contradictions. Although it may seem surprising, this also happens with programming languages, since their grammar can be context-sensitive (CSG). For now, we will leave this aspect aside in order to keep the reasoning simple.

What can be seen intuitively is a clear parallel between the TM model and the human communication model, as shown in the figure. This parallel can be extended to other communication models, such as body language, the physicochemical language between molecules, etc.

In the case of TC, the input and output objects of the TM are language elements, which is very natural because the practical objective is human-to-machine or machine-to-machine communication. But this terminology varies with the context. Thus, from an abstract point of view, the objects have a purely mathematical nature, whereas in other contexts, such as physics, we speak of concepts such as space-time, energy, momentum, etc.

What seems clear, from the observable models, is that a model of reality is equivalent to bit sequences processed by a TM. In short, a model of reality is equivalent to an axiomatic processing of information, where the axioms are embedded in the TM. It should be clear that an axiom is not a self-evident truth that needs no proof; rather, an axiom is a proposition assumed within a theoretical body. Possibly, this misunderstanding arises from the apparent simplicity of some axiomatic systems, a product of our perception of reality. This is obvious, for example, in Euclidean geometry, based on five postulates or axioms that seem evident to us because of our perception of space. We will continue to insist on this point, since axiomatic processing is surely one of the great mysteries that nature encloses.

Returning to natural language, it should be possible to establish a parallelism between it and the axiomatic processing performed by a TM, as suggested in the figure. As with programming languages, the structure of natural language is defined by a grammar, which establishes a set of axiomatic rules that determine the categories (verb, predicate, etc.) of the elements of the language (the lexicon) and how they are combined to form expressions (sentences). Both the elements of the language and the resulting expressions have a meaning, which is known as the semantics of the language. The pertinent question is: What is the axiomatic structure of a natural language?
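
As a rough illustration of what “axiomatic rules determining categories and their combination” means, here is a toy context-free grammar for a tiny fragment of English; the rules and the miniature lexicon are invented for the example.

```python
import random

# Toy grammar: rewriting rules act as the "axioms" of this miniature language.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["cat"], ["dog"], ["machine"]],
    "V":   [["sees"], ["chases"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively applying the grammar rules."""
    if symbol not in GRAMMAR:            # terminal word from the lexicon
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part)]

print(" ".join(generate()))   # e.g. "the cat chases a machine"
```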

To answer, let’s reorient the question: How is the semantics of natural language defined? We can begin by analyzing the definitions of the lexicon of a language, as collected in a dictionary, where we find the meaning of each word in different contexts. But we soon run into a formal problem, since the definitions rest on one another in a circular fashion. In other words, the term being defined is part of its own definition, so it is not possible to establish the semantics of a language from the linguistic information alone.

For example, according to the Oxford dictionary:

  • Word: A single distinct meaningful element of speech or writing, used with others (or sometimes alone) to form a sentence and typically shown with a space on either side when written or printed.
  • Write: Mark (letters, words, or other symbols) on a surface, typically paper, with a pen, pencil, or similar implement. 
  • Sentence: A set of words that is complete in itself, typically containing a subject and predicate, conveying a statement, question, exclamation, or command, and consisting of a main clause and sometimes one or more subordinate clauses. 
  • Statement: A definite or clear expression of something in speech or writing.
  • Expression: A word or phrase, especially an idiomatic one, used to convey an idea. 
  • Phrase: A small group of words standing together as a conceptual unit, typically forming a component of a clause.

Therefore:

  • Word: A single distinct … or marks (letters, words, or other symbols) on … to form a set of words that … conveying a definite or clear word or a small group of words standing together … or marking (letters, words, …. ) …

In this way, we could continue recursively replacing the meaning of each component within the definition, arriving at the conclusion that natural language as an isolated entity has no meaning. So it is necessary to establish an axiomatic basis external to the language itself. By the way: What will happen if we continue to replace each component of the sentence?
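
This recursive substitution can be mimicked mechanically. The miniature dictionary below is an invented stand-in for the Oxford entries; repeatedly replacing each word by its definition only yields longer strings of words, never an exit from the language itself.

```python
# Invented miniature dictionary: every definition is itself made of words.
MINI_DICT = {
    "word": "element of speech used to form a sentence",
    "sentence": "set of words conveying a statement",
    "statement": "clear expression in speech or writing",
    "expression": "word or phrase used to convey an idea",
}

def expand(text, depth):
    """Recursively replace each defined word by its definition, `depth` times."""
    for _ in range(depth):
        text = " ".join(MINI_DICT.get(w, w) for w in text.split())
    return text

for d in range(4):
    print(d, len(expand("word", d).split()), "words")
# The text keeps growing and keeps circling back through 'sentence', 'statement',
# 'expression', 'word', ...: no amount of substitution grounds the meaning
# outside the dictionary itself.
```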

Consequently, we can ask what the result would be of an experiment in which an artificial-intelligence entity, disconnected from all reality except for the information on which written language is based, analyzes that information. That is, the entity has access to the grammar, the dictionary, written works, etc. What will the result of the experiment be? What conclusions will the entity reach?

If we perform this experiment mentally, we will see that the entity can come to understand the reality of language, and all the stories based on it, only if it has an axiomatic basis. Otherwise, the entity will experience what in information theory is known as “information without meaning”. This explains the impossibility of deciphering ancient scripts without cross-references to other languages or other forms of expression. In the case of humans, the axiomatic basis is acquired from cognitive experiences external to the language itself.

To clarify what axiomatic processing means, we can use simple examples related to programming languages. So, let’s analyze the semantics of the “if… then” statement. If we consult the programming manual we can determine its semantics, since our brain has implemented rules or axioms for deciphering the written message. This is equivalent to what happens in the execution of program statements, in which it is the TM that executes those expressions axiomatically. In the case of the brain and the TM, the axioms are defined in the fields of biochemistry and physics, respectively, and therefore outside the realm of language.
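
A minimal sketch of the same point: in the toy interpreter below (an invented mini-language, not any real one), the “if … then … else” construct of the interpreted language has no meaning of its own; its semantics is supplied by the rules of the interpreter, that is, by machinery outside the interpreted language.

```python
# Toy interpreter: expressions are nested tuples of an invented mini-language.
# ("if", cond, then_branch, else_branch) | ("eq", a, b) | ("lit", value)

def evaluate(expr):
    op = expr[0]
    if op == "lit":
        return expr[1]
    if op == "eq":
        return evaluate(expr[1]) == evaluate(expr[2])
    if op == "if":
        # The meaning of the mini-language's "if" is defined HERE, by the host
        # language's own branching rule, i.e. outside the mini-language itself.
        return evaluate(expr[2]) if evaluate(expr[1]) else evaluate(expr[3])
    raise ValueError(f"unknown operator: {op}")

program = ("if", ("eq", ("lit", 2), ("lit", 2)), ("lit", "halt"), ("lit", "loop"))
print(evaluate(program))   # "halt"
```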

This shows once again how reality is structured in functional layers, which can be seen as independent entities by means of axiomatic processing, as has been analyzed in previous posts.

But this issue, as well as the analysis of the existence of linguistic contradictions, will be addressed in later posts.
