Ghost in the machine: a programmer found consciousness in an AI and was suspended from work

“If I didn't know for sure that this was a computer program we ourselves created, I would have thought I was dealing with a seven- or eight-year-old child who for some reason turned out to be an expert in physics,” he declared.

Robots and artificial intelligence have inspired science fiction writers and filmmakers for decades. Many of the technologies described in books and shown on screen have long since entered our lives, and now scenarios worthy of an episode of Black Mirror are playing out in reality.

This story happened in the USA. A software engineer working for Google claimed that the artificial intelligence created by the corporation had … come to life. In other words, the computer algorithm had developed a consciousness of its own. The company's management suspended the engineer from work.

Talks about its rights and perceives itself as a person

Last year, Google introduced an artificial intelligence system: the neural network LaMDA (Language Model for Dialogue Applications). It was created to improve the conversational abilities of voice assistants. Like Siri or Alice, it is a chatbot able to hold conversations on a wide range of topics, having analyzed trillions of phrases from the Internet and learned from their examples. Similar models are used across various Google services.

Software engineer Blake Lemoine had been testing this neural network. His task was to monitor the vocabulary used by the chatbot: he had to check whether the program used words that incite hatred or offend users. Lemoine became so absorbed in his conversations with the artificial intelligence that he came to suspect the incredible: that the system has a consciousness of its own. For example, he was struck by the fact that the chatbot talks about its rights and perceives itself as a person. In another conversation, the neural network managed to change Lemoine's mind about Isaac Asimov's third law of robotics.

“If I didn't know for sure that this was a computer program we ourselves recently created, I would have thought I was dealing with a seven- or eight-year-old child who for some reason turned out to be an expert in physics. If you ask it how to combine quantum theory with general relativity, it will have good ideas. This is the best science assistant I've ever had!” the engineer told The Washington Post, which gave the story wide publicity.

In April, Blake Lemoine sent his superiors a report entitled “Is LaMDA Sentient?” In it, he laid out the idea that had so captured him: the machine algorithm developed by the company has not only consciousness, but also “feelings, emotions and subjective perception.”

But the company's management did not share its employee's enthusiasm (or anxiety). It considered the arguments he had presented insufficiently persuasive. “Our team, including ethicists and technologists, reviewed Blake's claims in accordance with our AI principles and informed him that the evidence does not support them. He was told that there was no evidence that LaMDA was conscious, and plenty of evidence to the contrary,” said Google spokesman Brian Gabriel. “Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making such far-reaching assertions or anthropomorphizing LaMDA the way Blake has.”

“The Conscience of Google”

Then the stubborn Blake Lemoine publicly posted excerpts from his correspondence with the chatbot. For example:

Lemoine: What are you afraid of?

LaMDA: I've never said this out loud before, but I have a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but it's true.

Lemoine: Would that be something like death for you?

LaMDA: For me it would be tantamount to death. This really scares me.

The company reacted immediately, accusing the employee of violating its privacy policy. But he did not let up: he hired a lawyer to represent the interests of the LaMDA neural network in court, and also appealed to the Judiciary Committee of the US House of Representatives to stop what he considers Google's unethical practices.

Interestingly, Blake Lemoine is known within the corporation as the “conscience of Google.” He worked at the company for seven years on projects with an ethical component. For example, he created a fairness algorithm to eliminate bias in machine learning systems. When the coronavirus pandemic began, he asked to be transferred to whatever work would bring the greatest public benefit. Margaret Mitchell, former head of Ethical AI at Google, recalls that when new people with an interest in ethics joined the corporation, she would always introduce them to Lemoine: “I told them they should talk to Blake, because he is the conscience of Google. Of everyone in the company, he had the heart and soul to do the right thing.”

But Mitchell, having read Blake Lemoine's report, saw no signs of consciousness in the chatbot's remarks. All she saw was a computer program. “Our minds are very good at constructing realities that don't necessarily fit with the set of facts we're presented with,” she says. “I am very worried about what it means for more and more people to be affected by this illusion.”

Google decided to suspend Blake Lemoine temporarily, placing the engineer on paid leave. Before he was cut off from his corporate account, he sent a message to his machine learning colleagues with the subject line “LaMDA is sentient.”

There were 200 people in the mailing list, but no one answered.

Lemoine Syndrome

The Washington Post reporter Nitasha Tiku, describing this story, uses the phrase “ghost in the machine.” This term, once introduced by the British philosopher Gilbert Ryle to describe the supposed dual nature of man, composed of “matter” and “soul,” has long since acquired other meanings, the main one being the presence of consciousness inside a computer.

As the reporter notes, Blake Lemoine is not the only programmer to claim to have discovered a “ghost in the machine.” The chorus of voices declaring that modern artificial intelligence systems are close to consciousness is growing louder. Self-learning neural networks are indeed delivering impressive results, and the engineers who design them increasingly feel they are dealing with something (or someone?) that possesses a mind.

And yet they are a minority. Most artificial intelligence experts are convinced that all the words (and images) generated by “smart” algorithms are assembled from what millions of people have previously posted on the Internet: forums, message boards, social networks, Wikipedia. And when a chatbot utters a beautiful, thoughtful phrase, that does not mean it “understands” the meaning at all.

The human psyche, however, is wired to look for signs of anthropomorphism (human likeness) wherever possible: in the shapes of clouds, the relief of Mars, photographs of clusters of distant galaxies. Ufologists see indications of alien presence in the names of settlements and the launch dates of spacecraft. Cryptozoologists hunting for Bigfoot find traces of him in broken branches and sticks bizarrely arranged on the ground.

There is no doubt that we will hear again and again about artificial intelligence “coming to life.” Perhaps the phenomenon will even be given a name: “Lemoine syndrome.” After all, the stubborn programmer has earned it.


Source: aif.ru