The team used a special 164,000-utterance dataset, titled Persona-Chat, to teach its AI to look for patterns.
Persona-Chat consists of more than 160,000 lines of dialogue, sourced from workers found on Amazon's "Mechanical Turk" marketplace, The Verge reported on 29 January.
Amazon Mechanical Turk (MTurk) is a crowdsourcing Internet marketplace enabling individuals and businesses to coordinate the use of human intelligence to perform tasks that computers are currently unable to do.
In the Facebook test, the data was used to train neural networks used for existing chatbots, with the results then assessed by another group of Mechanical Turkers.
In each case, the evaluators were asked to conduct a conversation with the persona-driven bot and to compare it with both other chatbots and humans.
The persona bot didn't score as highly on criteria like "fluency" and "consistency" as the humans did, but it outperformed a chatbot trained on movie dialogue.
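To make the setup above concrete: persona-conditioned training pairs each dialogue turn with a short persona profile that the model sees as context. The sketch below is purely illustrative; the class and field names are hypothetical and do not reflect Facebook's actual pipeline or the dataset's real schema.

```python
# Hypothetical sketch of a Persona-Chat-style training example.
# Field names are illustrative assumptions, not the dataset's actual schema.
from dataclasses import dataclass

@dataclass
class PersonaExample:
    persona: list[str]   # short profile sentences, e.g. "I have two dogs."
    history: list[str]   # the dialogue turns exchanged so far
    response: str        # the reply the model should learn to produce

def to_model_input(ex: PersonaExample) -> str:
    # Concatenate the persona and the dialogue history into a single
    # conditioning string -- one common way to feed persona context
    # to a sequence model before it generates the next reply.
    persona_block = " ".join(ex.persona)
    dialogue = " ".join(ex.history)
    return f"persona: {persona_block} dialogue: {dialogue}"

ex = PersonaExample(
    persona=["I love hiking.", "I have two dogs."],
    history=["Hi! What do you do for fun?"],
    response="I spend most weekends hiking with my dogs.",
)
print(to_model_input(ex))
```

A model trained on pairs like this learns to keep its answers consistent with the supplied persona, which is the "consistency" criterion the human evaluators scored.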
The new test is significant because in 2017 Facebook had to shut down one of its AI systems after its chatbots began speaking in a language of their own, defying the scripts they had been given.
The FAIR team found that while they were busy trying to improve chatbots, the "dialogue agents" were creating their own language.
Using machine learning (ML) algorithms, the "dialogue agents" had been left to converse freely in an attempt to strengthen their conversational skills. Soon, the bots began to deviate from the scripted norms and started communicating in an entirely new language that they created without human input.
The researchers also found these bots to be "incredibly crafty negotiators".
News Source: TECH2. Last modified on Monday, 29 January 2018.