Could we ever decipher an alien language? Uncovering how AI communicates may be key

In the 2016 science fiction film Arrival, a linguist faces the difficult task of deciphering an alien language consisting of palindromic sentences, written in circular symbols, that read the same backwards as forwards. As she uncovers various clues, different nations around the world interpret the messages differently – with some assuming they convey a threat.

If humanity ever found itself in such a situation, the best solution may be to turn to research that reveals how artificial intelligence (AI) develops languages.

But what exactly defines a language? Most of us use at least one to communicate with the people around us, but how did it come about? Linguists have been pondering this question for decades, yet there is no easy way to determine how language evolved.

Language is ephemeral and leaves no detectable trace in the fossil record. Unlike bones, we can’t dig up ancient languages to study how they evolved over time.

While we may not be able to study the actual development of human language, perhaps a simulation could provide some insights. That’s where AI comes in – a fascinating area of research called emergent communication that I’ve spent the last three years studying.

To simulate how language might evolve, we give AI agents simple tasks that require communication, such as a game where one robot must guide another to a specific location on a grid without showing it a map. We put (almost) no restrictions on what they can say or how – we simply give them a task and let them solve it however they want.
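To give a concrete flavour of such a game, here is a minimal Python sketch. Everything in it – the 2×2 grid, the four-symbol vocabulary, the crude trial-and-error updates – is an illustrative assumption on my part; real emergent-communication experiments typically use neural network agents trained with reinforcement learning.

```python
import random

# A toy version of the grid game described above (illustrative assumptions:
# a 2x2 grid, a four-symbol vocabulary, and simple lookup-table "agents").
# A speaker sees the goal cell and sends one symbol; a listener sees only
# the symbol and must guess the cell. Both succeed or fail together.

GRID_CELLS = [(x, y) for x in range(2) for y in range(2)]
VOCAB = ["aa", "ab", "ba", "bb"]  # the agents decide what these symbols mean

speaker_policy = {cell: random.choice(VOCAB) for cell in GRID_CELLS}
listener_policy = {word: random.choice(GRID_CELLS) for word in VOCAB}

def play_round():
    goal = random.choice(GRID_CELLS)
    message = speaker_policy[goal]    # speaker encodes the goal as a symbol
    guess = listener_policy[message]  # listener decodes the symbol into a cell
    if guess == goal:
        return 1
    # Crude trial and error: on failure, both agents adjust their mappings.
    speaker_policy[goal] = random.choice(VOCAB)
    listener_policy[message] = goal
    return 0

wins = sum(play_round() for _ in range(5000))
print(f"Success rate over 5,000 rounds: {wins / 5000:.2f}")
```

Run for enough rounds, the two lookup tables tend to settle on a shared, consistent mapping from symbols to cells – a toy “language” emerging purely from the demands of the task.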

Because solving these tasks requires agents to communicate with each other, we can study how their communication evolves over time to get an idea of how language might evolve.

Similar experiments have been conducted with humans. Imagine that you, an English speaker, are paired with a non-English speaker. Your task is to instruct your partner to pick up a green cube from an assortment of objects on the table.

You might try making a cube shape with your hands and pointing to the grass outside the window to indicate the color green. Over time, the two of you would develop a kind of proto-language together. Perhaps you would create specific gestures or symbols for “cube” and “green”. Through repeated interactions, these improvised signals would become more refined and consistent, forming a basic communication system.

It works similarly with AI. Through trial and error, the algorithms learn to communicate about the objects they see, and their conversational partners learn to understand them.

But how do we know what they’re talking about? If they develop this language only with their artificial conversation partners, and not with us, how do we know what each word means? After all, a particular word could mean “green”, “cube” or, worse, both. This challenge of interpretation is a key part of my research.

Cracking the code

The task of understanding the language of AI can seem almost impossible at first. If I tried to speak Polish (my mother tongue) with a colleague who speaks only English, we wouldn’t be able to understand each other – we wouldn’t even know where each word begins and ends.

The challenge with AI languages is even greater because they can organize information in ways completely alien to human language patterns.

Fortunately, linguists have developed sophisticated tools from information theory to interpret unknown languages.

Just as archaeologists piece together ancient languages from fragments, we use patterns in AI conversations to understand their linguistic structure. Sometimes we find surprising similarities with human languages, and other times we discover entirely new ways of communicating.

These tools help us peer into the “black box” of AI communication and reveal how AI agents are developing their own unique ways of sharing information.

My recent work focuses on using what the agents see and say to interpret their language. Imagine having a transcript of a conversation in an unknown language, along with what each speaker was looking at. We can match patterns in the transcript to objects in each participant’s visual field, creating statistical associations between words and objects.

For example, the word ‘yayo’ might coincide with a bird flying by – we might guess that ‘yayo’ is the speaker’s word for ‘bird’. By carefully analyzing these patterns, we can begin to decode the meaning of the communication.
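To show what those statistical associations might look like in practice, here is a small Python sketch using pointwise mutual information (PMI), one of the information-theoretic tools mentioned above. The transcript, the words ‘yayo’ and ‘ku’, and the scene labels are all invented for illustration – they are not real data from my experiments.

```python
import math
from collections import Counter

# Hypothetical data: each record pairs one utterance (a list of words in the
# unknown language) with the set of objects the speaker could see at the time.
transcript = [
    (["yayo"], {"bird", "tree"}),
    (["yayo", "ku"], {"bird", "rock"}),
    (["ku"], {"rock"}),
    (["yayo"], {"bird"}),
]

word_counts, obj_counts, pair_counts = Counter(), Counter(), Counter()
for words, objects in transcript:
    for w in set(words):
        word_counts[w] += 1
        for o in objects:
            pair_counts[(w, o)] += 1  # word and object co-occur in this scene
    for o in objects:
        obj_counts[o] += 1

n = len(transcript)
# Pointwise mutual information: high values suggest a word "means" that object.
for (w, o), c in sorted(pair_counts.items()):
    pmi = math.log2((c / n) / ((word_counts[w] / n) * (obj_counts[o] / n)))
    print(f"PMI({w!r}, {o!r}) = {pmi:.2f}")
```

On this toy data, the pairs (‘yayo’, ‘bird’) and (‘ku’, ‘rock’) receive the highest PMI scores – exactly the kind of signal that lets us guess what each word means.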

In the latest paper by my colleagues and me, to appear at the Neural Information Processing Systems (NeurIPS) conference, we show that such methods can be used to reverse-engineer at least parts of the language and syntax of AI agents, giving us insight into how they structure their communication.

Aliens and autonomous systems

How does this relate to aliens? The methods we are developing to understand AI languages could help us decipher any future alien communications.

If we were able to obtain some written alien text together with some context (such as visual information relating to the text), we could apply the same statistical tools to analyze it. The approaches we are developing today could be useful tools in the future study of alien languages, known as xenolinguistics.

But we don’t need to find aliens to benefit from this research. There are plenty of applications, from improving language models such as ChatGPT or Claude to enhancing communication between autonomous vehicles or drones.

By decoding emergent languages, we can make the technology of the future easier to understand. Whether it’s knowing how self-driving cars coordinate their movements or how AI systems make decisions, we’re not just building intelligent systems; we’re learning to understand them.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Tomáš Martinez on Unsplash
