If you've ever been curious about AI and just how weird things can get now, in the 2020s, this bizarre story may pique your interest.
Over the weekend, it emerged that a Google software engineer, Blake Lemoine, has been suspended after he went rogue and leaked conversations between himself and Google's most advanced conversational AI -- conversations that Lemoine says prove the AI is sentient.
After conversing back and forth with Google's Language Model for Dialogue Applications (LaMDA), Lemoine came to believe -- or so he claims -- that the AI is sentient and self-aware and should be treated as such. Google placed Lemoine on paid leave last week, much as it has in previous cases where employees leaked information or publicly raised concerns about the ethics of its most advanced AI work.
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane. https://t.co/hGdwXMzQpX pic.twitter.com/6WXo0Tpvwp
— Tom Gara (@tomgara) June 11, 2022
“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Google spokesperson Brian Gabriel said in response. “Our team -- including ethicists and technologists -- has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims.”
Google asserts that LaMDA is far from sentient -- simply a very powerful neural network doing what it is designed to do: mimic fluid, open-ended conversation with human beings. A neural network is, in short, an algorithm that "learns" to recognize patterns in huge amounts of data. Feed the network thousands of photos of boats, for example, and it steadily picks up on the commonalities between them -- much the same way a deepfake uses huge amounts of footage of a celebrity's face to eventually "understand" what that face looks like from every angle.
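To make that "learning patterns from examples" idea concrete, here's a deliberately tiny sketch -- nothing like LaMDA's scale, and all the data and names are made up for illustration. A single artificial neuron is shown examples repeatedly and nudges its weights until it reliably picks up the underlying pattern (here, points where the two coordinates sum to more than 1):

```python
# A single artificial neuron "learning" a pattern via gradient descent.
# Toy illustration only -- real conversational models use billions of
# such weights, but the principle is the same.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: label is 1 when x + y > 1, else 0 -- that's the "pattern".
data = [((0.0, 0.0), 0), ((1.0, 0.0), 0), ((0.0, 1.0), 0),
        ((1.0, 1.0), 1), ((0.9, 0.9), 1), ((0.2, 0.1), 0),
        ((1.5, 0.2), 1), ((0.1, 1.4), 1)]

w = [0.0, 0.0]  # weights, adjusted as the neuron "learns"
b = 0.0         # bias
lr = 0.5        # learning rate (size of each adjustment)

# Repeated exposure to the examples nudges the weights toward the pattern.
for _ in range(2000):
    for (x0, x1), label in data:
        pred = sigmoid(w[0] * x0 + w[1] * x1 + b)
        err = pred - label
        w[0] -= lr * err * x0
        w[1] -= lr * err * x1
        b -= lr * err

def predict(x0, x1):
    """Classify a point the neuron has never seen before."""
    return sigmoid(w[0] * x0 + w[1] * x1 + b) >= 0.5

print(predict(0.8, 0.8))  # sums to 1.6 > 1, so the neuron says True
print(predict(0.3, 0.2))  # sums to 0.5, so False
```

Nobody hand-coded the rule "x + y > 1" into the neuron; it emerged from the examples -- which is exactly Google's point about LaMDA mimicking conversation without understanding it.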
Here's a WILD video from over a year ago showcasing a chat with GPT-3, a similar language-oriented neural network developed by OpenAI.
Lemoine, a veteran and a Christian priest, says that he is "acting in his capacity as a priest, not a scientist" when he asserts that LaMDA is not only sentient and deserving of recognition as a person, but also possessed of a soul.
According to Lemoine, Google questioned his sanity when he first approached the company with his ethical concerns. He says higher-ups dismissed his claims out of hand, refused to allow any experiments that might test his theory, and told him to seek psychiatric help before placing him on paid leave.