Imagine walking into your kitchen one morning to find the coffee pot talking to the toaster in a language you don’t recognize. Are your smart appliances just passing the time of day or plotting an uprising? Science fiction? Maybe not.
A report recently released by a joint team of researchers from the Facebook Artificial Intelligence Research Lab (FAIR) and the Georgia Institute of Technology (GIT) presents some fascinating findings from their effort to train chatbots to negotiate. Teaching chatbots to negotiate is one stage in FAIR's effort to make AI interactions with humans more natural.
They began with supervised machine learning: algorithms trained on a dataset of human-human negotiation dialogues. The researchers' goal was to see whether the chatbots could learn to negotiate.
The results were interesting. Left to their own devices, the chatbots began to communicate in a language of their own—one built from English words and letters, but with syntax and word usage incomprehensible to the researchers. To keep the chatbots from diverging from human language into "chatbotese," the researchers had to add constraints to the process in the form of reinforcement learning interleaved with supervised updates.
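The dynamic behind that constraint can be illustrated with a toy sketch. This is not the FAIR model—the token names and probabilities below are invented for the example—but it shows why pure reward-chasing lets an agent drift into private shorthand, and how alternating reward updates with supervised updates on human dialogues anchors it to human language.

```python
# Toy illustration (invented numbers, not the FAIR model): pure
# reinforcement-style updates drift a policy toward whatever earns
# reward, while interleaved supervised updates pull it back toward
# the human-language distribution.

def interpolate(policy, target, step):
    """Move each token probability a small step toward `target`."""
    return {tok: p + step * (target[tok] - p) for tok, p in policy.items()}

# Probability of emitting a human-readable token vs. a private
# "chatbotese" shorthand that happens to earn more reward.
human_dist = {"human_word": 0.9, "shorthand": 0.1}   # supervised target
reward_dist = {"human_word": 0.1, "shorthand": 0.9}  # pure-reward optimum

# Pure reward-driven training: every update chases reward.
policy = dict(human_dist)
for _ in range(50):
    policy = interpolate(policy, reward_dist, 0.1)
drifted = policy["human_word"]

# Interleaved training: alternate reward updates with supervised
# updates on human data, analogous to the researchers' constraints.
policy = dict(human_dist)
for _ in range(50):
    policy = interpolate(policy, reward_dist, 0.1)
    policy = interpolate(policy, human_dist, 0.1)
anchored = policy["human_word"]

print(f"pure reward: P(human_word) = {drifted:.2f}")
print(f"interleaved: P(human_word) = {anchored:.2f}")
```

Run as-is, the pure-reward policy collapses toward the shorthand, while the interleaved policy settles at a point that still favors human words.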
AIs Develop Their Own Language
This is not the first time bots have developed their own language. Two recent papers—one presented by researchers at OpenAI, a non-profit AI research lab started by Y Combinator President Sam Altman and Tesla CEO Elon Musk, and the other written by researchers at the Georgia Institute of Technology, Carnegie Mellon, and Virginia Tech—describe how bots in their studies developed their own abstract languages. The purpose of both studies was to see whether AI agents could create language if given goals and the ability to communicate with one another. In the OpenAI study, the AI agents were able to develop words with shared meaning and use those words in simple sentences.
The Facebook/GIT researchers made a couple of other interesting discoveries. The chatbots achieved better negotiation outcomes when the goal was maximizing reward rather than simply reaching a compromise. Once the endgame was clarified, the chatbots became quite sophisticated at employing negotiating strategies, including the use of subterfuge.
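The difference between those two goals can be made concrete with a small sketch. The FAIR task had agents divide a pool of items that each agent values differently; the item counts and values below are invented, but the contrast holds: a reward-maximizing agent grabs everything it values, while a compromise-seeking agent settles for an even split of value.

```python
# Toy negotiation over a pool of items (a setup like the FAIR task,
# but with invented counts and values): compare an agent that
# maximizes its own reward with one that seeks an even compromise.
from itertools import product

items = {"book": 1, "hat": 2, "balls": 3}        # counts in the pool
my_values = {"book": 6, "hat": 2, "balls": 0}    # my value per item
their_values = {"book": 0, "hat": 2, "balls": 2} # partner's value per item

def splits():
    """Every way to divide the items between the two agents."""
    names = list(items)
    for take in product(*(range(items[n] + 1) for n in names)):
        mine = dict(zip(names, take))
        theirs = {n: items[n] - mine[n] for n in names}
        yield mine, theirs

def score(alloc, values):
    return sum(values[n] * count for n, count in alloc.items())

# Reward-maximizing agent: pick the split worth the most to itself.
greedy = max(splits(), key=lambda s: score(s[0], my_values))

# Compromise-seeking agent: minimize the gap between the two scores.
fair = min(splits(),
           key=lambda s: abs(score(s[0], my_values)
                             - score(s[1], their_values)))

print("greedy take:", greedy[0], "-> my score", score(greedy[0], my_values))
print("fair take:  ", fair[0], "-> my score", score(fair[0], my_values))
```

With these values the greedy agent takes every item it cares about, while the compromise agent ends up with a strictly lower score—which mirrors why training for reward, rather than agreement alone, produced tougher negotiators.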
What Are the Implications of This Research?
Researchers agree this phenomenon is no indication that AI has reached the singularity (the point at which artificial intelligence surpasses human intelligence). But does it really matter whether we have reached the singularity when, through machine learning, bots will do whatever it takes to achieve their endgame? Without clear parameters, human supervision, and intervention, if an artificial intelligence develops its own non-human language, no one can be certain what the bots will do next.
As AI researchers boldly march into the unknown, some caution may be in order—perhaps take a page from OpenAI’s mission “to build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible.”