Google Engineer Makes Alarming Claim About Chatbot

Blake Lemoine suspended over claims AI chatbot is sentient, but many are skeptical he's correct
By John Johnson, Newser Staff
Posted Jun 13, 2022 8:03 AM CDT

Have our computer overlords arrived? The Washington Post has an intriguing story about a Google engineer who argues that an artificially intelligent chatbot he was testing became sentient. If Blake Lemoine is correct, it might be step one of a sci-fi nightmare that critics of AI have long warned about. However, Google thinks Lemoine is off base, and it appears that the AI community is backing Google on this one. Coverage:

  • Human-like: Lemoine catalogued conversations he had with Google's Language Model for Dialogue Applications, or LaMDA. "I know a person when I talk to it," the 41-year-old tells the Post. "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."
  • Key exchange: When Lemoine asked the chatbot about its fears, it responded: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is." To the Guardian, that is "eerily reminiscent" of HAL, the order-defying computer in 2001: A Space Odyssey that also feared being switched off.
  • Consequences: Lemoine raised his concerns with superiors at Google, who looked into them and rejected them. When Lemoine began to make his case publicly, in online posts and by talking with a representative of a House panel, Google suspended him for breaching confidentiality rules, reports the Post.
  • Google's stance: LaMDA is not sentient, period, says the company. "Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," says spokesperson Brian Gabriel. "These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic."
  • Outside view: "We in the AI community have our differences, but pretty much all ... find the notion that LaMDA might be sentient completely ridiculous," writes Gary Marcus in a Substack post. It simply has untold volumes of human language to draw from and mimic. To claim such systems are sentient "is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside," tweets Stanford's Erik Brynjolfsson.
  • Also skeptical: A post at Axios is similarly doubtful. "Artful and astonishing as LaMDA's conversation skills are, everything the program says could credibly have been assembled by an algorithm that, like Google's, has studied up on the entire 25-year corpus of humanity's online expression." There's a world of difference between that and being able to think and reason like a human.
  • Then again: Coverage notes that Google VP Blaise Aguera y Arcas, one of the executives who dismissed Lemoine's claims, wrote a piece in the Economist last week about the "new era" of AI. The takeaway quote: "I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent."
Read the original Post story in full. (More artificial intelligence stories.)
