Evolution of intelligence
From language to a new paradigm for intelligence research.

The current mainstream paradigm in artificial intelligence amounts to throwing ever more computing power at unintelligible models in the hope of brute-forcing the problem.

We would like to propose another path. In Darwin's Dangerous Idea, Daniel Dennett presents us with five hypothetical kinds of creatures arising from Darwin's evolutionary process.

Darwinian creatures are created by random mutation and selected by the external environment. The best designs survive and reproduce.

Skinnerian creatures can learn by testing actions (responses) in the external environment. Favourable actions are reinforced and then tend to be repeated. Pigeons can be trained to press a bar to receive food. Skinnerian creatures ask themselves, "What do I do next?"

Popperian creatures can preselect from possible behaviours or actions, weeding out the truly stupid options before risking them in the harsh world. Popperian creatures have an inner environment that can preview and select amongst possible actions. For this to work, the inner environment must contain lots of information about the outer environment and its regularities. Popperian creatures ask themselves, "What do I think about next?"

Gregorian creatures are named after Richard Gregory, an information theorist. Gregorian creatures import mind-tools (words) from the outer cultural environment to create an inner environment which improves both the generators and the testers. Gregorian creatures ask themselves, "How can I learn to think better about what to think about next?"

Words and language are necessary to sustain long predictive chains of thought, e.g. to sustain a chain or combination of pattern recognitions. This is true in chess, for example, where a player uses chess notation to assist their memory. Learning from mistakes is an important, and hard to learn, part of this process. To learn from mistakes, one has to be able to contemplate them, and language and communication assist that process: for example, being told by someone else that you have made a mistake.

Finally, we have Scientific creatures, whose method is an organised process of making and learning from mistakes in public, of getting others to assist in the recognition and correction of mistakes.

Through evolution, language becomes intrinsically tied to intelligence. It may be its key.

Grammar is semantics. The context in which words can be used is sufficient to define their meaning. Word embeddings empirically demonstrate this. The context can be a situation, but also the surrounding words.
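The claim that surrounding words suffice to fix meaning can be sketched with a toy co-occurrence model, the simplest ancestor of modern word embeddings. The corpus, window size, and similarity measure below are our own illustrative assumptions, not a real embedding method: words used in the same contexts ("cat" and "dog") end up with closer vectors than words used in different contexts ("cat" and "sat").

```python
from collections import Counter
from math import sqrt

# Hypothetical toy corpus: "cat" and "dog" occur in the same contexts.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "the dog chased the cat",
]

def cooccurrence_vector(word, sentences, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            if token == word:
                lo = max(0, i - window)
                hi = min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

cat = cooccurrence_vector("cat", corpus)
dog = cooccurrence_vector("dog", corpus)
sat = cooccurrence_vector("sat", corpus)

# Shared contexts yield similar vectors: meaning from distribution.
print(cosine(cat, dog) > cosine(cat, sat))  # True
```

Real embeddings (word2vec, GloVe) refine this counting idea with dimensionality reduction and learning, but the principle is the same: context defines meaning.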

Therefore, words follow a structure, both grammatically and semantically, which determines the next possible and impossible words to carry on with a sentence.
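The constraint on "next possible and impossible words" can be made concrete with a bigram model over a toy corpus (the corpus is our own illustrative assumption): observed usage defines which continuations of a sentence are licensed and which are not.

```python
from collections import defaultdict

# Hypothetical mini-corpus; "." marks sentence boundaries.
tokens = "the cat sat . the dog sat . the cat ran .".split()

# For each word, record the set of words observed to follow it.
successors = defaultdict(set)
for prev, nxt in zip(tokens, tokens[1:]):
    successors[prev].add(nxt)

print(sorted(successors["the"]))   # ['cat', 'dog']
print("sat" in successors["the"])  # False: "the sat" never occurs
```

After "the", the structure permits "cat" or "dog" but rules out "sat"; grammar and semantics jointly prune the space of continuations.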

It is the correspondence between the structure of reality and the structure of language that allows the latter to model the former.

Words do not have to perfectly model reality, only the relevant structure.

When people communicate, they themselves follow this structure. The structure is shared because it originates in reality. Whatever the words used, the common grammar allows people to understand their interlocutor, even if they have never learned a given word before.

Words can be assimilated to combinators. Combinators are concatenative. They have a type α → β, meaning that they take a combinator of type α and return a combinator of type β. Types define their grammar. They do not need variables; in fact, a single combinator, iota, is sufficient to write all programs.
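The one-combinator claim can be checked directly. Encoding combinators as Python closures (our own illustrative encoding), iota is defined by iota f = f S K; applying iota to itself in the right patterns recovers the identity, K, and S, and S with K is a complete basis for all programs.

```python
# The classic S and K combinators as closures.
S = lambda x: lambda y: lambda z: x(z)(y(z))  # S x y z = x z (y z)
K = lambda x: lambda y: x                      # K x y = x
iota = lambda f: f(S)(K)                       # iota f = f S K

# Reconstructing a complete basis from iota alone:
I_ = iota(iota)                    # i i           behaves as identity
K_ = iota(iota(iota(iota)))        # i (i (i i))   behaves as K
S_ = iota(iota(iota(iota(iota))))  # i (i (i (i i))) behaves as S

print(I_(42))        # 42
print(K_("a")("b"))  # a
print(S_(K)(K)("x")) # x  (S K K is another identity)
```

Since every program in the SK calculus can be rewritten using iota alone, a single symbol, with the grammar its type imposes, suffices for universal computation.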

Binary is only a notation. Combinators can match it with a grammar, that is, meaning.

We suggest translating the NMSL into combinators. This would indeed allow intelligible models, for they could explain themselves with words.

Moreover, their computational nature would allow us to leverage existing research on probabilistic programming. Reasoning could indeed be assimilated to a Monte-Carlo search. Algorithms such as the No-U-Turn Sampler explore the search space efficiently without the need to hand-tune parameters.