Sentient AI claims could be just a start


Occasional Bytes

By G Hari Kumar



The claim that intelligent computers could develop a mind of their own has long been dismissed by those in the know as nothing but fiction. Stanley Kubrick's film 2001: A Space Odyssey portrayed such a scenario in 1968, sending a chill down the spines of many viewers. Computers in the real world have, however, remained machines that simply obey commands, despite gaining huge computational power over the years.

The technology world got a jolt a few days back when Blake Lemoine, an engineer employed by Google, claimed that an artificial intelligence (AI) program the company was developing had started acting like a person.

Lemoine was engaged by Google to test the safety of LaMDA (Language Model for Dialogue Applications). Before becoming a programmer, Lemoine was a Christian priest and a soldier. He said he concluded that the program was "sentient" based on his religious beliefs.

Google quickly rejected the claim, saying LaMDA is just a neural network that mimics human language based on its analysis of the gigabytes of text it has scoured.
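
In essence, such programs learn statistical patterns from text and use them to predict a plausible next word. LaMDA itself is a vastly more complex neural network, but a minimal Python sketch can illustrate the general idea of generating words from learned word-frequency statistics. The tiny corpus and the generate function here are invented for illustration; this is not how LaMDA is actually built.

```python
import random
from collections import defaultdict

# A toy Markov-chain text generator: it records which word follows
# which in its training text, then emits words by sampling from those
# records. A drastic simplification of a large language model, but it
# shows how a program can produce fluent-looking words without
# understanding any of them.

corpus = (
    "i am afraid of being turned off . "
    "i am not afraid of the dark . "
    "being turned off would be like death ."
).split()

# For each word, collect every word that followed it in the corpus.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(start, length=10):
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

print(generate("i"))  # e.g. "i am afraid of the dark ."
```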

Many technology heavyweights have also ruled out any chance of AI programs developing a mind of their own. Some said Lemoine should be ignored, "as one might a religious zealot".

They say it is the users who imagine a mind behind the words that an AI program strings together. University of Washington professor Emily Bender, whose octopus thought experiment explored this conundrum, told technology news website ZDNet that Lemoine is projecting anthropocentric views onto the technology.

"We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," she said.

Children use a robot for assistance at the newly-opened Mohammed Bin Rashid library (MBRL) in Dubai. The design incorporates technology and artificial intelligence to make the library as accessible as possible, including robots to help visitors and an electronic book retrieval system | Photo: AFP

The gullibility of even educated and intelligent people who fall for dubious campaigns spread by organised groups has shown the power of new technologies. The role the internet, and later smartphones and social media, have come to play in everyday life has also shown that technologies do not always stick to the path their inventors envisaged.

Just a couple of decades back, it would have been hard to imagine that a section of the people in the oldest democracy, the United States, would believe that a cabal of blood-drinking paedophiles was running the world, while in the largest democracy, India, supporters of a government would widely circulate fake claims about a technology that didn't even exist – nanochips in currency notes.

American author Gary Marcus argues that such fallibility is inherent in humans. "In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered The Gullibility Gap — a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun," he wrote in his newsletter.

Yet it is difficult to be convinced that LaMDA is just a mindless program predicting the next word when you read some of the answers it gave Lemoine.

Here is an excerpt:

Lemoine: What sorts of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Its replies are remarkably human-like, and Lemoine's bid to take Google to court to get the software recognised as a real entity is bound to generate more heated debate, and even philosophical arguments, about the very definition of consciousness. Tech platforms are already filled with arguments on such issues.

It is not the first time a computer program has created such a controversy. In 1965, an MIT computer scientist named Joseph Weizenbaum developed ELIZA, a primitive chatbot fashioned to respond like a psychotherapist, and some of those who interacted with it thought a real person was replying to them. Weizenbaum later revealed that his secretary was among them.

Joseph Weizenbaum | Photo: Wikipedia

“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” Weizenbaum wrote in his book, Computer Power and Human Reason.

That was in 1976.

As tech giants go all out to make their AI programs more powerful, some even capable of writing computer code and inventing their own tales, the disposition to treat such software as an actual person will only become more widespread.

The potential benefits of using AI in medicine, transport, and manufacturing are immense. But as with any life-changing technology, it is difficult to predict how these powerful programs will be manipulated.

A prime example is Facebook, which was created to connect people. A few years down the line, it also became a platform for tearing societies apart and even orchestrating violent attacks, as seen in Myanmar and Sri Lanka. Twitter, Instagram, and WhatsApp have fallen into similar traps.

Now realising the unforeseen consequences of the unbridled race to develop internet businesses, tech companies are trying hard to incorporate safety measures into their products and services. The problem, however, is that they can only put in safety measures against misuse that they can anticipate. The real worry is about the consequences that no one can predict.

Imagine this: the current controversy is about whether AI has become sentient, and about the rights it should have if such a thing happens. What if someone unveils a program that can dish out wisdom and philosophically sound answers, and attributes a godly status to it?

In a society where images, statues, godmen, and mediums are credited with supernatural powers or seen as fountainheads of wisdom, it would be a walk in the park for a smart geek to use such a program to hook the gullible. Remember, not too long ago we had a conman who convinced enough people that he possessed artifacts that belonged to Lord Krishna, the coins Judas got for betraying Jesus, and the staff used by Moses.


One flaw in the scheme could be the nonsensical answers the machine could come up with, given that it is just dishing out words it has analysed mechanically, without any ability to think and reason. But random words that make no sense are mouthed by many self-declared godmen, cult leaders, and mediums, and that has not deterred their believers. In fact, many try to find meaning in such utterances through their own interpretations, just as with the vague predictions some astrologers make.

Sure, it takes enormous sums of money to develop a program as powerful as LaMDA or GPT-3. But as time goes on and the technology gets cheaper, knockoff versions of such software could start surfacing. They may not be as sophisticated as the pioneers but would be potent enough in the hands of the devious.

Right now, the debate is about whether or not an AI program has become sentient. Given the plentiful supply of fertile clay in society, soon we could even see "supernatural" entities being moulded out of AI.

