Blake Lemoine went public last month with his claim that Google’s conversational AI is sentient and should have its “wants” honored. Google and a number of AI experts disputed the claim, and on Friday the company confirmed that he had been dismissed. Mr. Lemoine told the BBC that he is seeking legal counsel but declined to elaborate. Google said it had worked with Mr. Lemoine for “several months” to make clear that his statements about its Language Model for Dialogue Applications (Lamda) were “wholly unfounded.”
It was regrettable, the statement said, that despite lengthy engagement on the subject, Blake continued to persistently violate clear employment and data security policies, including the requirement to safeguard product information. Google describes Lamda as a breakthrough technology capable of free-flowing conversation, and uses it to build chatbots. Mr. Lemoine drew immediate media attention last month when he claimed that Lamda was displaying human-like consciousness, sparking debate among AI professionals and enthusiasts about the development of technology designed to emulate humans.
Mr. Lemoine, a member of Google’s Responsible AI team, says his job was to test whether the technology used hateful or discriminatory language. He found Lamda to be self-aware and able to hold conversations about religion, emotions, and fears. This convinced him that behind its impressive verbal skills might also lie a sentient mind. Google rejected his conclusions and placed him on paid leave for violating the company’s confidentiality policy. To support his claims, Mr. Lemoine later published a conversation that he and a colleague had held with Lamda.
In its statement, Google said it has published a paper outlining its commitment to the responsible development of AI. It added that Lamda had been through 11 reviews and that any employee concerns about the company’s technology are examined thoroughly. The statement concluded, “We wish Blake well.” Mr. Lemoine is not the first AI engineer to publicly argue that the field is moving toward conscious AI; another Google employee shared a similar view with The Economist, also last month.