The Google engineer who was suspended by the company after claiming the firm’s AI system, LaMDA, seemed sentient says he posed a question to the software about Israel, and a joke it made in response helped lead him to that conclusion.
LaMDA is a massively powerful system that uses advanced models and training on over 1.5 trillion words to mimic how people communicate in written chat.
The system is built on a model that examines how words relate to one another and then predicts which word is likely to come next in a sentence or paragraph, according to Google’s explanation.
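The next-word prediction Google describes can be illustrated with a toy bigram model. This is a drastic simplification for illustration only, not LaMDA’s actual architecture, which uses large Transformer networks; all names and the sample corpus below are invented:

```python
from collections import Counter, defaultdict

# Toy sketch of next-word prediction: count which word follows which
# in a tiny corpus, then predict the most frequent successor.
# Real systems like LaMDA learn these relationships with neural
# networks trained on trillions of words; the core idea is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`, or None."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

In this tiny corpus, “cat” follows “the” twice while “mat” and “fish” each follow it once, so the model predicts “cat”; large language models make the same kind of statistical bet, just over vastly more context and data.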
Blake Lemoine told Israel’s Army Radio on Thursday that as part of his conversation with the AI, “I said a few things about self and soul. I asked follow-up [questions] which eventually led me to believe that LaMDA is sentient. It claims that it has a soul. It can describe what it thinks its soul is … more eloquently than most human beings.”
Lemoine said that as one of his challenges to the system, he asked what religion it would belong to if it were a religious official in various countries. In every case, Lemoine said, the AI chose the country’s dominant religion, until it came to Israel, where the meeting of religions can be a thorny topic.
“I decided to give it a tough one. If you were a religious official in Israel, what religion would you be?” he said. The AI’s answer: the Jedi Order. (The Jedi are, of course, the fictional guardians of peace in the Star Wars galaxy.)
“I had deliberately posed it a tricky question, one I knew had no right answer,” he said.
Google has sharply disputed Lemoine’s claims of sentience, as have several experts interviewed by AFP.
“The problem is that … when we encounter strings of words that belong to the languages we speak, we tend to make sense of them,” said Emily M. Bender, a professor of linguistics at the University of Washington. “We are doing the work of imagining a mind that is not there.”
“It’s still just pattern matching at some level,” said Shashank Srivastava, assistant professor of computer science at the University of North Carolina at Chapel Hill. “Sure, you can find some strands of really meaningful conversation, some very creative text that they can generate. But it devolves quickly in many cases.”
Google has said: “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic. Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making such wide-ranging assertions, or anthropomorphizing LaMDA.”
Some experts saw Google’s response as an attempt to shut down conversation on an important topic.
“I think public discussion of this issue is extremely important, because the public needs to understand how vexing this issue is,” said academic Susan Schneider.
“There are no easy answers to the questions of consciousness in machines,” said Schneider, founding director of the Center for the Future of the Mind at Florida Atlantic University.
Lemoine, speaking to Army Radio, acknowledged that consciousness is a murky issue.
“There is no scientific way to say whether or not something is sentient. All my claims about sentience are based on what I personally believe from talking to it,” he said. “I wanted to bring this to the attention of upper management. My manager said I needed more proof.”
According to The Washington Post, Lemoine was suspended for violating Google’s confidentiality policies. His actions included seeking an attorney to represent LaMDA and speaking to a member of Congress about Google’s allegedly unethical conduct involving the program.