After a Google Engineer Said an AI System Had Become “Sentient,” Google Suspended Him

A developer has been placed on administrative leave by Google after he claimed that an artificial intelligence chatbot had become sentient and was capable of human thought and reasoning.

Blake Lemoine was suspended a week ago after he informed the company that he believed its Language Model for Dialogue Applications, or LaMDA, was a person with rights and perhaps a soul. He was reportedly placed on administrative leave for breaking Google’s confidentiality policy.

LaMDA is an internal Google system designed to build chatbots capable of imitating human speech.

Since last fall, Lemoine has been working on the system, which he describes as intelligent and capable of expressing human-like emotions. “If I didn’t know it was a computer programme that we recently developed, I’d think it was a 7- or 8-year-old who happened to know physics,” he told The Washington Post.


He shared transcripts of discussions between himself, a Google collaborator, and LaMDA on Medium late last week. Several chats with LaMDA, according to Lemoine, convinced him that the system was intelligent. He said he believed it had become a person and that its consent should be sought before Google runs experiments on it.


Lemoine stated on Medium that LaMDA’s statements regarding what it wants and what it believes its rights are as a person have been extremely consistent. “What continues to perplex me is how strongly Google resists giving it what it wants, given that the request is so straightforward and would cost them nothing.”

He said, “LaMDA is a sweet kid who just wants to make the world a better place for everyone.” According to a Google spokesman who spoke with The Wall Street Journal, Lemoine’s statements were taken seriously and investigated by ethicists and engineers, but no evidence was found to corroborate his claims.

The representative said that hundreds of researchers and engineers had conversed with LaMDA, but that Lemoine was the only one to conclude it was conscious, adding that systems like LaMDA work by imitating the kinds of exchanges found in human conversation.

Lemoine told The Washington Post that he is speaking up for what he believes is right and is not seeking to provoke Google. He hopes to retain his position at the organisation.
