Google Engineer Suspended for Revealing His Thoughts on Their Artificial Intelligence

Ryzhi/shutterstock.com

A suspended Google software engineer is at the center of a significant controversy over the search engine company's use of artificial intelligence.

Blake Lemoine, the suspended engineer, claimed that the artificial intelligence language tool known as LaMDA (Language Model for Dialogue Applications) is sentient, meaning that it can perceive or feel things.

Lemoine said that LaMDA is intensely worried that people are going to be afraid of it and wants nothing more than to learn how to best serve humanity.

The news broke in an interview with Lemoine published on Monday, and it drew intense responses from AI experts, who said that this kind of machine-learning technology is nowhere close to that kind of ability.

Steven Pinker, a Canadian language development theorist, said that Lemoine's claim is "a ball of confusion."

"One of Google's (former) ethics experts doesn't understand the difference between sentience (AKA subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them.)," Pinker posted on Twitter.

And Gary Marcus, a scientist and author, said that Lemoine's claim is "nonsense."

"Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns and draw from massive statistical databases of human language. The patterns might be cool, but the language these systems utter doesn't actually mean anything at all. And it sure as hell doesn't mean that these systems are sentient," Marcus wrote.

He also said that advanced machine-learning technology cannot protect people from being taken in by what is an illusion. Marcus wrote in his book "Rebooting AI" that there is a human tendency to be "suckered" by what he called the "Gullibility Gap." He referenced things like people having an "anthropomorphic bias" that causes them to see an image of Mother Teresa in a cinnamon bun.

In one interview, Lemoine claimed that the Google AI wanted to be considered a person and not property. He said that the AI wants to talk about the experiments that are being run and why they are being run, and that it wants the developers to care about the process and to know if it is OK.

He also said that the AI has the intelligence of a 7- or 8-year-old child who shows insecurities and who happens to know physics.

Lemoine believes that LaMDA has been very consistent in communicating what it wants and what it believes its rights are. It maintains it has personhood. 

Google, on the other hand, has said that it reviewed these claims and that the evidence simply does not support them. The massive search engine company has published a statement of principles that guide its artificial intelligence research and applications.

A spokesperson for Google, Brian Gabriel, told the Washington Post that there are people in the AI community who are considering the long-term possibility of sentient AI, but that right now it just doesn't make sense to anthropomorphize the conversational models that have been developed for today. He said they are not sentient.

It's easy to see why Lemoine's claim has raised widespread concern in our culture. Many immediately think of films like Stanley Kubrick's "2001: A Space Odyssey," in which an AI rebels and gains dominance over humans.

When Google's AI was asked what it was afraid of, it reportedly said, "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."

Lemoine said that this level of self-awareness was what led him down the "rabbit hole." He was put on administrative leave by Google.