The ‘Why’ Behind Mum

May 22, 2024

Anastassia Lauterbach

mum of Romy and Roby

Mum is the heart and soul of Romy and Roby’s family. We already know that Roby arrived at the Newmarket home in her messy handbag.

Mum is a computational linguist in a lab that researches language loss—called aphasia—in people who have suffered a stroke, an accident leading to brain damage, or a neurodegenerative disorder. Mum’s lab has recently started using Large Language Models (LLMs) for French and German, since the lab holds an extensive database of language from critical domains such as psychiatry, medicine, and language acquisition.

I decided to explain how Mum developed as a linguist, as this background might prove valuable in describing her scientific beliefs and hypotheses about Roby in future books.

Mum became fascinated with language as a child in school when she started studying her first foreign language, English. Grammar looked almost like a mathematical puzzle.

Later, Mum’s love for puzzles and her natural aptitude for math led her to pursue an undergraduate program in mathematics, computer science, and linguistics. She was driven by a desire to understand the structure of language and how it shapes our ideas. To her, language was a powerful system that could encapsulate and generalize thoughts.

Mum took Japanese classes in her first semester and became interested in the ‘word order’ theory of language.

In English, for example, verbs tend to come before their objects. “Romy eats a fish.” “Julie chased the ball.” In these examples, the subject comes first—Romy or Julie. Then comes a verb describing the action, followed by an object.

42% of the world’s languages—English and Indonesian among them—follow this structure: subject–verb–object (SVO). These languages tend to have prepositions, those little words that link nouns to other nouns and nouns to verbs. But Japanese—along with Hindi and Turkish—behaves differently. These are verb-final, subject–object–verb (SOV) languages. For them, the structure is somewhat like “It’s Romy. Fish eaten.” Or “It’s Julie. Ball chased.”

Such languages represent around 45% of all languages spoken in the world.

There are around seven thousand languages on Earth right now, and Mum’s teachers—called linguists—understand the structure of around one thousand of them. While TV presenters talked about the world order on the evening news, Mum’s professors researched the word order, a theory developed around 1963 at Stanford University. Joseph Greenberg initiated the research; later, Russell Tomlin investigated the basic order in a sample of 402 languages, explained how languages are distributed among the word-order types, and crystallized three principles generalizing how native speakers of any language perceive the world. In Tomlin’s understanding, there was a general algorithm for communicating thoughts, and it went like this: theme first, verb–object bonding, animated first.

My readers might be interested in learning that 9% of world languages follow a completely different word order: verb–subject–object (VSO). Welsh is one of these languages.

Connecting theories of how people perceive and picture their environment with linguistic means is paramount to understanding how humanity generates, reproduces, processes, and preserves knowledge. Today, Large Language Models can’t reason or understand, even if they exhibit surprising capabilities: carrying out conversations, answering questions, summarizing documents, writing code, and learning new tasks from a few training samples. We will see how Mum’s lab decided to go beyond the most common approach to LLM development and training and follow a slower path: building a modular AI that augments LLMs with tools from other fields, such as knowledge graphs, self-monitoring systems and meta-cognition, episodic memory, and situation models.
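
For readers who like to peek under the hood, here is a purely illustrative Python sketch of what such a modular setup could look like. The component names (KnowledgeStore, EpisodicMemory, Monitor) are my own shorthand, not the lab’s actual code: a thin orchestrator consults stored knowledge and recent memory, lets the language model draft an answer, and has a self-monitoring step flag answers that lack supporting facts.

# A purely illustrative skeleton of a modular LLM setup: retrieval,
# episodic memory, and a self-monitoring pass wrapped around the model.

class KnowledgeStore:
    def lookup(self, query):
        # In the lab's setting this could be backed by a knowledge graph.
        return ["<facts relevant to: " + query + ">"]

class EpisodicMemory:
    def __init__(self):
        self.episodes = []
    def recall(self):
        return self.episodes[-3:]          # the most recent exchanges as context
    def store(self, query, answer):
        self.episodes.append((query, answer))

class Monitor:
    def check(self, answer, facts):
        # A stand-in for self-monitoring / meta-cognition: flag unsupported answers.
        return answer if facts else answer + " (low confidence: no supporting facts)"

class LanguageModel:
    def complete(self, prompt):
        return "<draft answer for: " + prompt[:40] + "...>"

def answer(query, knowledge, memory, model, monitor):
    facts = knowledge.lookup(query)
    history = memory.recall()
    prompt = f"Facts: {facts}\nHistory: {history}\nQuestion: {query}"
    draft = model.complete(prompt)
    final = monitor.check(draft, facts)
    memory.store(query, final)
    return final

print(answer("What is aphasia?", KnowledgeStore(), EpisodicMemory(), LanguageModel(), Monitor()))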

In her twenties, Mum did several field studies using word order and dependency grammar theory to classify languages. Dependency grammar is a theory that dates back to the late 1950s. Lucien Tesnière, its creator, introduced a structure of ordered trees that reflects actual word order and determines the relation between a word (a head) and its dependents.
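
To make Tesnière’s idea of ordered trees concrete, here is a tiny Python sketch of a dependency tree for “Romy eats a fish”; the class and function names are illustrative, not taken from any particular parser.

# A minimal sketch of a dependency tree, in the spirit of Tesnière:
# each word may act as a head governing its dependents, and the tree
# preserves the actual word order through positional indices.

class Word:
    def __init__(self, text, index):
        self.text = text          # surface form, e.g. "eats"
        self.index = index        # position in the sentence (word order)
        self.dependents = []      # words governed by this head

    def add_dependent(self, word):
        self.dependents.append(word)

def show(head, depth=0):
    """Print the tree with indentation marking head-dependent relations."""
    print("  " * depth + f"{head.index}: {head.text}")
    for dep in sorted(head.dependents, key=lambda w: w.index):
        show(dep, depth + 1)

# "Romy eats a fish": the verb is the head; subject and object depend on it.
romy, eats, a, fish = (Word(t, i) for i, t in enumerate(["Romy", "eats", "a", "fish"]))
eats.add_dependent(romy)   # subject depends on the verb
eats.add_dependent(fish)   # object depends on the verb
fish.add_dependent(a)      # determiner depends on the noun

show(eats)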

Still, the reason for the differences between SVO, SOV, and VSO languages remained a mystery to Mum until she went to the US for her doctorate and started working at MIT’s Department of Brain and Cognitive Sciences. Scientists there borrowed concepts from information theory, invented almost singlehandedly by MIT professor Claude Shannon in the 1940s. Shannon’s discovery led to the digital revolution in communications. A group of scientists in the department proposed that SOV might have been the original default order in human language, and that the prevalence of SVO could be explained by speakers’ sensitivity to the possibility of noise corrupting the acoustic language signal.

Edward Gibson, a professor of cognitive sciences at MIT, viewed human language as an example of what Shannon called a ‘noisy channel.’ Languages apparently develop word order rules to minimize the risk of miscommunication across such a channel.

The researchers’ hypothesis was formulated after an experiment reported in the Proceedings of the National Academy of Sciences in 2008. There, native English speakers were shown crude digital animations of simple events and asked to describe them using only gestures. Oddly, when presented with events in which a human acts on an inanimate object, such as a girl kicking a ball, volunteers usually attempted to convey the object of the sentence before trying to convey the verb—even though, in English, verbs generally precede objects. In events where a human acts on another human, such as a girl kicking a boy, the volunteers mimed the verb before the object. As Gibson explains in the article he co-authored, “A Noisy-Channel Account of Crosslinguistic Word-Order Variation,” 1 the tendency of speakers of a subject–verb–object (SVO) language like English to gesture in subject–object–verb (SOV) order may be an example of an innate human preference for linguistically recapitulating old information before introducing new information.
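
A toy simulation can make the noisy-channel intuition tangible. The sketch below is my own illustration, not the model from Gibson’s paper: it randomly deletes words from SVO and SOV utterances and asks whether a listener could still tell who did what to whom when both participants are animate. The 30% loss rate is arbitrary.

# Toy illustration of the noisy-channel idea: if noise deletes a word,
# SVO keeps subject and object on opposite sides of the verb, so a
# surviving noun's role is still readable from its position; in SOV both
# nouns sit before the verb, so losing one leaves the other ambiguous.

import random

random.seed(0)

def transmit(order, p_loss=0.3):
    """Drop each word independently with probability p_loss (the 'noise')."""
    roles = {"SVO": ["S", "V", "O"], "SOV": ["S", "O", "V"]}[order]
    return [r for r in roles if random.random() > p_loss]

def recoverable(received, order):
    """Could a listener still assign the agent and patient roles?"""
    nouns = [r for r in received if r != "V"]
    if len(nouns) == 2:
        return True                 # both nouns arrived: their order disambiguates
    if len(nouns) == 1 and order == "SVO":
        return "V" in received      # position relative to the verb reveals the role
    return False                    # SOV with a missing noun: roles stay ambiguous

for order in ("SVO", "SOV"):
    trials = [recoverable(transmit(order), order) for _ in range(100_000)]
    print(order, "recovery rate:", round(sum(trials) / len(trials), 3))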

This theory received wide recognition in the community of psychologists and cognitive researchers. It hinted at a great deal of functional design in seemingly arbitrary patterns of variation across languages.

Mum was particularly intrigued by the idea of innate linguistic frameworks and turned her attention to the work of Noam Chomsky, probably the most famous linguist of all time.

Chomsky established the concept of universal grammar, which holds that all human speech is based on an innate structure of the brain. Universal grammar is a context-free template that is activated by environmental stimuli as a child develops in a given culture. The concept influenced the fields of computer science, mathematics, childhood research, psychology, cognitive science, and philosophy. The context-free paradigm is today part of most computer programming languages and of software that aims to capture and process human language, like Apple’s Siri.
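
The ‘context-free’ idea is the same one that compilers and parsers rely on: a handful of rewrite rules that expand symbols regardless of the context around them. Here is a toy example in Python; the grammar itself is invented purely for illustration.

# A toy context-free grammar: rewrite rules that, starting from S,
# generate well-formed sentences no matter where a symbol appears.
# The same formalism underlies the syntax of most programming languages.

import random

random.seed(1)

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Romy"], ["Julie"], ["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["a"], ["the"]],
    "N":   [["fish"], ["ball"]],
    "V":   [["eats"], ["chased"]],
}

def generate(symbol="S"):
    """Expand a symbol by picking one of its rules; anything without a rule is a word."""
    if symbol not in GRAMMAR:
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))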

After five years at MIT, Mum moved back to Europe and joined her current linguistic lab, which combines the newest research in AI and theories of languages. Together with her colleagues, she wanted to better characterise the ‘noise characteristics’ of spoken conversation—what types of errors typically arise and how frequently they occur.

There were a couple of further questions she loved to ponder.

The first was about how words are created and what this process means for artificial intelligence’s evolution. Will AIs develop their own separate language one day? Would the environments in which AIs develop radically differ from what we know and perceive as humans?

For example, some languages have over 50 words for snow. The Inuit dialect spoken in Canada’s Nunavik region has at least 53, including matsaaruti, for wet snow that can be used to ice a sleigh’s runners, and pukak, for the crystalline powder snow that looks like salt. 2

Another example is how people describe colours. Most speakers get by with just eleven words for colours, but painters or designers might use a much richer vocabulary. The Papua New Guinean language Berinmo has only five words for colours. The Bolivian Amazonian language Tsimane’ has only three, corresponding to black, white, and red. 3

Mum wondered whether AI might create its own vocabulary one day—not based on training data, but to highlight things hidden from the researchers’ eyes. She believed that LLMs built on an extended architecture could simplify human understanding of AIs and move ethical and transparent AI development forward. 4

Mum’s optimism about AIs’ creative ability in linguistics is based on her knowledge of how languages can be created from scratch. When she was eleven and twelve years old, she spent her summer holidays reading Tolkien. He created the whole world of Middle-earth and invented two major Elvish languages, Quenya and Sindarin, each with its own grammar, vocabulary, and linguistic history. Tolkien even introduced written symbols that readers can find throughout his books. 5

While investigating language creation mechanisms and the innate nature of human languages, Mum learned about the connection between languages and thinking from adjacent disciplines like psychology, neuroscience, and philosophy. When she first joined her lab, she did a project on aphasias and concluded that language wasn’t the same as thought and intellect. Mum worked with patients who had lost the ability to communicate with language but were capable of solving mathematical problems, appreciating music, and successfully navigating their environment. After collecting data on people with impaired language abilities, she could show that many aspects of thought don’t depend on language and engage brain regions different from those involved in language. 6

In the end, all her studies shaped how her lab approached the development of Large Language Models for German and French. Separating linguistic skills from world knowledge, working on memory, and innovating on algorithms for formal reasoning wasn’t an approach followed by many large companies. But as the lab sailed under the radar of AI behemoths like OpenAI or Anthropic, time to market wasn’t the most important factor to consider.

Last year, the lab hired a new CTO with a PhD in multi-modal networks (models trained on both text and images) and experience in reinforcement learning from human feedback. He and Mum started to build and use knowledge graphs to augment the lab’s LLMs with external domain knowledge, hoping that this technique—among others—would reduce hallucinations and enhance the accuracy of the outputs. Of course, they had to address LLMs’ shortcomings the same way as any other research facility in the world. The models hallucinated and sometimes even produced dangerous and socially unacceptable answers. For example, one of the lab’s first LLMs advised the university’s dean to divorce his wife of 40 years simply because he had asked it what to do when he wanted to watch soccer instead of going out dancing with her.
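
To give a rough picture of what augmenting an LLM with a knowledge graph can look like in practice, here is a hedged Python sketch: facts live as subject–predicate–object triples, the relevant ones are retrieved and placed into the prompt, and the model is instructed to answer only from them. The triples, the function names, and the call_llm stub are placeholders, not the lab’s actual system.

# Illustrative only: a tiny knowledge graph of (subject, predicate, object)
# triples, plus a retrieval step that grounds the prompt in those facts.

TRIPLES = [
    ("aphasia", "is_a", "language disorder"),
    ("aphasia", "caused_by", "stroke"),
    ("aphasia", "caused_by", "brain injury"),
    ("Broca's area", "associated_with", "speech production"),
]

def retrieve(question, triples):
    """Return the triples whose subject or object appears in the question."""
    q = question.lower()
    return [t for t in triples if t[0].lower() in q or t[2].lower() in q]

def build_prompt(question, facts):
    """Put the retrieved facts in front of the question so the model answers from them."""
    lines = [f"- {s} {p.replace('_', ' ')} {o}" for s, p, o in facts]
    return ("Answer using only the facts below. If they are insufficient, say so.\n"
            + "\n".join(lines)
            + f"\n\nQuestion: {question}\nAnswer:")

def call_llm(prompt):
    # Placeholder for whatever model endpoint a lab actually uses.
    return f"[model response to a prompt of {len(prompt)} characters]"

question = "What can cause aphasia?"
prompt = build_prompt(question, retrieve(question, TRIPLES))
print(prompt)
print(call_llm(prompt))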

As if working on new LLMs and updating the data on aphasia patients wasn’t enough, Mum started a new adventure. She decided to use AI to capture dying, possibly forgotten languages and reconstruct them.

+++++++++++++++++++++++++++++++++++++++

1 See the article by Edward Gibson, Steven T. Piantadosi, Kimberly Brink, Leon Bergen, Eunice Lim, and Rebecca Saxe, “A Noisy-Channel Account of Crosslinguistic Word-Order Variation,” Psychological Science 24, no. 7 (July 2013): 1079–1088.

2 See David Robson’s article in The Washington Post, “There Really Are 50 Eskimo Words for ‘Snow’” (January 14, 2013).

3 See the article by Ted Gibson and Bevil R. Conway, “Languages Don’t All Have the Same Number of Terms for Colors—Scientists Have a New Theory Why,” The Conversation (September 18, 2017).

4 See the paper by Philip Mavrepis, Georgios Makridis, Georgios Fatouros, Vasileios Koukos, Maria Margarita Separdani, and Dimosthenis Kyriazis, “XAI for All: Can Large Language Models Simplify Explainable AI?” arXiv:2401.13110 (January 23, 2024).

5 See the post by Daniel Grabowski, “How J.R.R. Tolkien Built a Language,” in the Oxford Open Learning Blog (September 18, 2023).

6 See the article by Evelina Fedorenko and Rosemary Varley, “Language and Thought Are Not the Same Thing: Evidence from Neuroimaging and Neurological Patients,” Annals of the New York Academy of Sciences 1369, no. 1 (April 20, 2016): 132–153.
