A senior engineer at U.S. technology company Google claimed that an artificial intelligence (AI) chatbot developed by the company has reached the level of a "sentient" seven- or eight-year-old child.
Speaking to the Washington Post, Blake Lemoine, a senior engineer in Google's Responsible AI organization, said he witnessed the AI's capabilities firsthand while he was tasked with testing whether the interface, called the Language Model for Dialogue Applications (LaMDA), produced "discriminatory" or "hate" speech.
“If I didn't know exactly what it was, this computer program we built recently, I would have thought it was a 7- or 8-year-old kid who happens to know physics,” Lemoine said of the AI.
Google has denied all claims that LaMDA has become sentient and subsequently placed Lemoine on paid leave.
The firm describes the Language Model for Dialogue Applications (LaMDA) as a breakthrough technology that can engage in free-flowing conversations. Lemoine was working on the model, testing whether the AI would generate discriminatory language or hate speech. However, the tool’s impressive verbal skills led the engineer to believe it had developed a sentient mind.
To support his claims, Lemoine shared a document with company executives containing a transcript of his conversations with the AI. After his concerns were dismissed, he published the transcript via his Medium account; in it, the tool gives convincing responses about the rights and ethics of robots.
Google has since fired the engineer, who said the company's artificial intelligence system has feelings.
Last month, Blake Lemoine went public with his theory that Google's language technology is sentient and should therefore have its "wants" respected.
Google, along with several AI experts, denied the claims, and on Friday the company confirmed he had been dismissed.