This article was originally published at: https://peresdaily.com/googles-bot-isnt-sentient-but-what-if-it-was/

There have been countless discussions about whether machines can actually be sentient (possessing the ability to feel or perceive things). 

The debate was revived by Google’s recent suspension of an engineer who alleged that he had encountered sentience while using the company’s chatbot generator.

LaMDA, short for Language Model for Dialogue Applications, is a piece of technology that, in Google’s words, allows for “free-flowing” conversation. To learn how language works and to mimic human speech, it is fed trillions of words from online sources such as books and articles, and it then searches that text for patterns.
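In the simplest terms, that kind of pattern-searching amounts to counting which words tend to follow which. The toy Python sketch below, a bigram counter over a made-up three-sentence corpus (every name and example sentence in it is purely illustrative), shows the bare-bones version of the idea; it is only a sketch of the general technique, not a description of how LaMDA itself is built.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the "trillions of words" a real system is trained on.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count which word follows each word -- a bigram model, the simplest kind of language pattern.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the toy corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat" (it followed "the" three times; "mat", "rug", "dog" once each)
print(most_likely_next("sat"))  # -> "on"
```

A system like LaMDA operates at a vastly larger scale and with far more sophisticated statistics, but its replies are still driven by patterns absorbed from training text rather than by any inner experience.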

In the fall, Blake Lemoine, a member of Google’s Responsible AI team, started speaking with LaMDA as part of his duties. He had agreed to take part in a trial run to see if the AI system used hateful or discriminatory language.

Lemoine said he shared his evidence that LaMDA was sentient with Google, but the company rejected it. In a statement, Google said its team, which includes ethicists and technologists, “reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.”

Many in the AI community disputed the engineer’s claims in interviews and public statements, while others noted that his story shows how the technology can lead people to attribute human characteristics to it. Among the notable skeptics was Gary Marcus, the founder and CEO of Geometric Intelligence, who called the claim that LaMDA is sentient “nonsense on stilts” and noted in a blog post that all such AI systems do is match patterns by drawing on sizable databases of language. Still, the idea that Google’s AI might be conscious highlights both our concerns about and our hopes for this technology.

What if the bot were sentient, though? What would a conscious bot mean for humans?

Anthropomorphization—the attribution of human traits, emotions, or intentions to non-human entities—is associated with safety concerns, according to Google. In a paper published in January, Google said that people might share personal thoughts with chat agents that impersonate humans even when they know the agents are fake. Adversaries could also use these agents to spread misinformation or to impersonate particular individuals’ conversational styles.

These dangers, according to Margaret Mitchell, a former Google employee who co-led the company’s Ethical AI division, highlight the need for data transparency that links input to output, “not just for questions of sentience, but also biases and behavior.” If something like LaMDA is widely available but not understood, “It can be deeply harmful to people understanding what they’re experiencing on the internet,” she added, as reported by the Post.
