Google fires Blake Lemoine, the engineer who claimed an AI chatbot is a person

Former Google engineer Blake Lemoine. (Getty Images | Washington Post)

Google has fired Blake Lemoine, a software engineer who was previously placed on paid leave after claiming that the company’s LaMDA chatbot is sentient. In a statement provided to Ars and other news organizations, Google said that Lemoine, who worked in the company’s Responsible AI unit, violated its data security policies.

Lemoine confirmed Friday that “Google emailed me to terminate my employment with them,” The Wall Street Journal reported. Lemoine has reportedly said he is talking with lawyers “about the appropriate next steps.” Google’s statement called it “unfortunate that despite his prolonged engagement on the subject, Blake still chooses to consistently violate clear employment and data security policies that include the need to protect product information.”

LaMDA stands for Language Model for Dialogue Applications. “As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation,” Google said. “LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.”

Google: LaMDA only follows user prompts

In an earlier statement provided to Ars in mid-June, shortly after suspending Lemoine, Google said that today’s “conversational models” are nowhere close to sentience:

Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic; if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on. LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user. Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and informed him that the evidence does not support his claims.

Google also said, “Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”

“I know a person when I talk to it”

Lemoine has written about LaMDA several times on his blog. In a June 6 post titled “May be fired soon for doing AI ethics work,” he reported that he had been “placed on ‘paid administrative leave’ by Google in connection with an investigation of AI ethics concerns I was raising within the company.” Noting that Google often places employees on leave before firing them, he wrote that “Google is preparing to fire yet another AI ethicist for being too concerned about ethics.”

A June 11 article in the Washington Post noted that “Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient.” The article said that, just before he was cut off from his Google account, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.” Lemoine’s message concluded, “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

“I know a person when I talk to it,” Lemoine said in an interview with the newspaper. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
