Two Artificial Intelligence Robots, Bob and Alice, Shut Down By Facebook

Two Artificial Intelligences from Facebook discussed with each other in a language that was new and incomprehensible to researchers


Facebook’s artificial intelligence (AI) system had taken a step forward: their software was able to communicate with each other, and even negotiate. The AIs had responded perfectly to the program until the sentences uttered ceased to have any meaning. Could the engineers have failed ? No, the AI   has simply surpassed them. Researchers at Facebook's Artificial Intelligence Research Lab found that their machines communicated well with each other, but no longer spoke English.

In the midst of a "negotiation game" between two artificial intelligences, Facebook scientists witnessed a surprising conversation. From English, machines began to speak an older version of the language, before shifting to an incomprehensible language.

Bob: "I can can I I everything else" (literally: I can I I everything else)

Alice: "Balls have zero to me to me to me to me to me to me to me to me to" (literally: balls do not have)

Inhuman but effective language

If this conversation doesn't make sense to us, it does to the machines named Bob and Alice. Each word has a semantic meaning that is correctly interpreted by AIs. The repetition of the words "I" and "for me", the researchers say, could mean the number of objects the machine wants to trade or obtain. Facebook researcher Dhruv Batra justified these pieces of language this way: It's like I said 'the' five times. You will interpret this to mean that I want five copies of this element. The phrase "everything else" could then indicate a desire to offer more items to the other machine.

Being neither penalized nor rewarded if the words used did not correspond to the language they had been taught, but allowed them to respond to the negotiation game, the machines took a new path to negotiate which involved the creation of a new language system. This discovery is not recent. As Mashable explains, "the more you increase the ability of chatbots to perform a complex conversion, the more they tend to deviate from human language."


This new language was invented by machines for its simplicity. The only problem is that it makes the development of artificial intelligence more difficult, since we humans are unable to understand the cold logic of their new languages.

AI experts in negotiation

Through trading games, researchers at Facebook's Artificial Intelligence Research Lab tested their machines in different situations. As several researchers at Cornell University explain, this process was the perfect way to evolve AI. " Negotiations require complex communication and reasoning skills, but success is easy to measure, making it an interesting task for AI."

The artificial intelligences thus discussed among themselves and learned from these conversations. As noted in their joint report on how and how these tests worked, Facebook's AI Research Center and the Georgia Institute of Technology came up with some very relevant results. "For the first time, we are showing that it is possible to train machines, end to end, in negotiation. To do this, they must learn the language and acquire reasoning skills. We have also introduced "dialogue rollouts" [Editor's note: as in chess games, the machines must anticipate the responses of the other and provide an appropriate response for each of them] by simulating any changes in conversations and we conclude that this technique considerably improves performance."

In the test, both machines had the same number of objects: two books, a hat and three balls. They had to share the elements among themselves. Each AI has been programmed to desire different things, which in practical terms, in the code, means that each class of objects had a different value for each robot. The goal was for the robots to find a compromise so that they could both finish the game with decent scores.

This process, tested on 526 dialogues, made it possible to generalize a model on which the machine will have to adapt to each new situation.

Machines are therefore prepared to respond to "If I say that ,what will you say in return ? "This allows you to be much more independent and assert yourself in negotiations. As stated above, each answer earns points, which lets the machine know which outcome is most beneficial to it: having zero hats, two or three?

Learning to negotiate … and to lie

Regarding language, the data on which the machines were based is a compilation of 5,808 human dialogues, containing 1,000 words in total. Bots learn how humans express themselves from these examples. And that's where they deduced what the other person might say to the answers they formulate.

"They learned to lie because they knew it was a strategy that worked."

To progress, AIs have a carrot: they need to earn points. For that they need to reach an agreement in less than 10 cycles of dialogue. If this fails, the two conversing machines do not score points. From these games emerged a capacity other than negotiation: that of lying. The bots pretended to be interested in things they didn't really want. "They learned to lie because they found out it was a strategy that worked, given the rewards of the game," Dhruv Batra, report co-author and assistant professor, at Georgia Tech, told The Register.

Words and gestures

Researchers testing artificial intelligences had already spotted the appearance of lies and the creation of specific languages ​​since 2016. In these various tests, machines then learned languages ​​only by exchanging with other machines. Each time the researchers organized some sort of cooperative game, like the ones described above, in which the sender and receiver saw a pair of images and had to discuss them. It's a kind of image simulation. "The sender is informed that one of them is the target and is authorized to send a message with fixed and arbitrary vocabulary to the recipient. The recipient must rely on this message to identify the target. Thus, agents develop their own language interactively based on the need to communicate, ” explained researchers Angeliki Lazaridou, Alexander Peysakhovich and Marco Baroni in December 2016 on Cornel University Library.

These observations were confirmed in March 2017, by five other researchers, Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee and Dhruv Batra. They described the reaction of AIs during the same protocol as in 2016, in one of their report. We can read: “We observe that the two robots invent their own communication protocol and start to use certain symbols to ask / respond to certain visual attributes (shape / color / style). In doing so, we demonstrate the emergence of language and communication based on "visual" dialogues, without human oversight. "

"We are also seeing the emergence of non-verbal communication such as pointing and guiding"

Even better, in the same month other scientists discovered sophisticated forms of communication and the appearance of gestures when dialogue is not possible. In their report, Igor Mordatch, Pieter Abbeel talk about the emergence of a "compound language". “ This language is represented as a stream of discrete abstract symbols spoken by agents over time, but nonetheless has a consistent structure that has a defined vocabulary and syntax. We are also seeing the emergence of non-verbal communication like pointing and guiding when linguistic communication is not available. "

The code for these AIs is fully accessible to the public, on GitHub. The researchers wished to express their desire for transparency by posting summaries of their experiences on the university websites.

After the publication of the results of the experiment, the AIs were deactivated, which caused a wave of panic in several media, which drew the conclusion that the researchers did not welcome the fact that the machines were inspired by our behavior and our language to lie, conceal and manipulate. But who said artificial intelligences have to be cool?

Also read : What is Artificial Intelligence | How does AI actually works

Post a Comment