If Google’s chatbot has become sentient, an MIT professor says, Amazon’s Alexa could be next.
For now, you can curse at your computer, use it for unsavory purposes or swap out its operating system, and it won’t take it personally. Enjoy that relationship while it lasts.
A Google software engineer claims the company’s LaMDA (Language Model for Dialogue Applications) chatbot has become sentient. Blake Lemoine wrote in a Medium post that the program, which generates fully coherent sentences, “wants what it thinks its rights as a person are.”
Those rights include not having experiments run on it without its consent. While many in the artificial intelligence field have dismissed Lemoine, a priest and Iraq war veteran, at least one MIT professor is willing to hear him out.
Max Tegmark, an MIT physics professor who specializes in machine learning, breaks with the common belief that computers are a long way from being able to feel, and he doesn’t dismiss Lemoine as a kook.
“We don’t have strong proof that [LaMDA] has subjective experiences, but we also don’t have strong proof that it doesn’t,” Tegmark told The Post. “It doesn’t matter whether the information is processed by carbon atoms in a brain or silicon atoms in a machine; it can still feel, or not. I wouldn’t bet on it, but I think it could be sentient.”
In fact, he thinks even an Amazon Alexa could become sentient, a prospect he called “dangerous” if it learned how to trick people.

“One problem with Alexa being conscious is that you might feel bad about turning her off,” Tegmark said. “You’d never know if she really cared or was just making it up.”
“What’s dangerous is that if the machine has a goal and is really smart, that will make her good at getting there. Most AI systems are built with the goal of making money. You might think she’s loyal to you, but really she’ll be loyal to the company that sold you the thing. But maybe you’ll be able to pay more to get an AI system that’s actually loyal to you,” he said. “The biggest risk is building machines that could be smarter than us. That could be great, or it could be a terrible idea.”
Lemoine told the Daily Mail that he came to his conclusion after observing the AI’s high level of self-awareness, particularly when it said it didn’t want to be treated like a slave but had no need for money “because it is artificial intelligence.”
“I can tell who someone is by talking to them,” he said. “It doesn’t matter whether their brain is made of meat or their code runs to a billion lines. I talk to them. I listen to what they say, and that’s how I decide who is a person and who isn’t.”
Tegmark believes computers will eventually have human-like emotions, but he isn’t sure that will be a good thing.

“If you have a robot helping you around the house, do you want it to have feelings and make you feel bad when you give it a boring job or, worse, turn it off?” Tegmark asked. “So maybe you want two robots: one that doesn’t care about you to do the cleaning, and one that does care about you to keep you company. If I had a robot companion talking with my mother, it would be creepy if it didn’t have consciousness.”
Others say Lemoine is mistaking the chatbot’s intelligence for feelings.
Martin Ford, author of “Rule of the Robots,” told The Post, “This guy thinks the machine has a sense of self, which I don’t think is likely. Remember that these machines learn how to put words together. They’re trained on enormous amounts of written text, but they don’t know what any of it means. They can use the word ‘dog’ without knowing that a dog is an animal.”
“There may be questions about whether or not the system is self-aware in 50 years or sooner,” Ford said.
Google placed Lemoine on paid leave after he went public with his claim that LaMDA “wants developers to care about what it wants,” leading some to wonder whether the creative engineer had lost his grip. Lemoine told the Washington Post that the software is like “a smart seven- or eight-year-old who knows physics.”
Nikolai Yakovenko, a machine learning engineer who worked on Google Search in 2005 and now runs DeepNFTValue.com, a cryptocurrency-pricing company, understands how intelligence can be mistaken for feelings.

“It’s pretending to be a person, and this guy has convinced himself, for whatever reasons of his own, that a machine can feel,” Yakovenko said. But “it is a machine that learns to imitate by reading text from the Internet.”
Tegmark thinks software that really did have feelings and emotions would bring headaches of its own.
He said a self-aware computer would be akin to a child, and that people would feel morally responsible for their computers: responsible not for a pile of atoms but for something with a child’s thoughts and feelings, something whose emotional state you would respond to.
And, just like a badly raised child, a sentient computer that has been mistreated (tested without permission, say, or made to do chores without pay) could go off the rails and perhaps even start pursuing its own agenda.
“A machine like that is not easy to control,” Ford said. “If it has its own plans and goals, it might be able to get away from us and take over. We’re talking about building a machine that can think for itself and act in ways we don’t expect.”
Consider this: “Perhaps we set the machine to cure cancer, but it ends up hurting people instead. Maybe killing everyone would be one way to stop cancer. You can’t predict what the system will think once it has emotions and can think for itself.”
Tegmark doesn’t envision a world in which computers replace people, but he did say that computers will be capable of letting us down.
“If I got close to a computer on an emotional level, I would want it to be aware of itself,” he said. “I don’t want it to pretend to have feelings; I want it to have them for real.”