Blake Lemoine, a software engineer at Google, claimed that a conversational technology called LaMDA had become sentient after he exchanged thousands of messages with it.
Google confirmed it had first put the engineer on leave in June. The company said it dismissed Lemoine's "wholly unfounded" claims only after reviewing them extensively. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI "very seriously" and that it's committed to "responsible innovation."
Google is one of the leaders in AI innovation, which includes LaMDA, or "Language Model for Dialogue Applications." Technology like this responds to written prompts by finding patterns and predicting sequences of words from large swaths of text — and the results can be unsettling for humans.
In a transcript Lemoine published, when asked what it was afraid of, LaMDA replied: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."
But the wider AI community has held that LaMDA is nowhere near consciousness.
It isn’t the first time Google has faced internal strife over its foray into AI.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in a statement.
CNN has reached out to Lemoine for comment.
CNN's Rachel Metz contributed to this report.