India gets ahead of the US – in fake AI row


Occasional Bytes

By Hari Kumar

Tamil Nadu Finance Minister Palanivel Thiagarajan (PTR) found himself in a controversy recently after a BJP leader in the state released an audio tape in which PTR is allegedly heard saying that some of his party leaders are corrupt. PTR denied this, saying the audio was created with AI using his voice, and posted clips showing how fake videos of US leaders such as Barack Obama and Donald Trump have been created with the help of artificial intelligence.

The audio sounds like PTR's voice, and the BJP leader, Annamalai, says he stands by his claim. As technological advances have made producing deepfakes easy, it will take some time to figure out who is telling the truth.

Ever since ChatGPT burst onto the scene late last year, experts have been warning about deepfakes, as powerful AI chatbots can create text, audio, and video that appear genuine. The political drama playing out in Tamil Nadu may well be the first of many that India will have to confront as more people learn how easy it is to generate deepfakes.

Fake audio and video clips are nothing new, but most were crudely made and could be detected through forensic analysis. AI, however, can create far more authentic-looking and authentic-sounding deepfakes that are easy to make and hard to detect. This could lead to deepfakes flooding the internet, making it difficult for users to discern the authenticity of what they see and hear.

Such warnings are getting louder, and the latest has come from Geoffrey Hinton, whom some consider the godfather of artificial intelligence. Hinton was one of the key figures behind the neural network research that has enabled AI platforms to shift gears and enter the fast lane.

In March 2021, Wired magazine explained the dramatic shift neural networks have brought about. “Rather than carefully defining how a machine was supposed to behave, one rule at a time, one line of code at a time, engineers were beginning to build machines that could learn and apply lessons from such enormous amounts of data that no human could ever wrap their head around it all. The result was a new breed of computing that was not only more powerful than anything that came before but also more mysterious and unpredictable,” it said.

Now the man who set the ball rolling says he himself is worried about the way tech giants are pushing ahead with AI development. “I don’t think they should scale this up more until they have understood whether they can control it,” Hinton told the New York Times after quitting Google, where he had worked as a vice president since 2012, when the company paid US$44 million for the firm he had founded with two of his students.

Hinton had been upbeat about the development of AI all along. During an address at IIT Bombay in 2021, he said: “Neural networks with about a trillion parameters are so good at predicting the next word in a sentence that they can be used to generate quite complicated stories or to answer a wide variety of questions. These big networks are still about 100 times smaller than the human brain, but they already raise very interesting questions about the nature of human intelligence.”

But now Hinton adds his voice to those of more than 1,000 tech experts, including Tesla chief Elon Musk and Apple co-founder Steve Wozniak, who issued an open letter demanding a temporary halt to large AI experiments. Microsoft co-founder Bill Gates, Google CEO Sundar Pichai, and OpenAI CEO Sam Altman have expressed their opposition to such calls.

In a recent interview, Pichai admitted that even the people who developed AI chatbots do not always understand how they work. “There is an aspect of this which we call – all of us in the field call it a ‘black box’. You don't fully understand. And you can't quite tell why it said this,” he said.

Still, companies are incorporating such powerful programs into software in critical fields like defence, health, finance, and robotics. Last week, reports said Palantir Technologies, co-founded by billionaire Peter Thiel, is launching a platform to integrate AI into military decision-making.

As the bottom line for companies remains revenue and profit, there is little incentive for them to restrain themselves. And with the US, China, Russia, and other countries following their own paths to develop powerful AI platforms, there is very little chance of a unified global approach either.

AI power has already proliferated: a recent report in The Washington Post stated that one tool, described as 'Stable Diffusion for perverts', has already been downloaded 77,000 times. People are also sharing ways to use AI to edit real images, including removing the clothing of fully dressed women in photographs.

In India, experts in the IT field have also been warning about the dangers of AI chatbots. Recently, three prominent Indian figures – Zoho founder Sridhar Vembu, iSPIRT’s Sharad Sharma, and former Niti Aayog vice-chairman and Pahle India Foundation chairman Rajiv Kumar – joined these calls, stating that such bots “can have catastrophic consequences and that it is imperative for all nations, including India, to find an answer to this existential question”.

Even before the emergence of powerful AI tools, the amount of misinformation and fake news peddled through Indian social media had been rising rapidly. Most of these fake audio and video clips were crudely made, as the people behind them lacked the skill or technical knowledge. Now AI programs are making such sophisticated jobs easy, even for the technologically challenged.

Despite its crude nature, fake news continues to go viral, amplified by social media, while reports questioning its veracity hardly get the same mileage. Given this, the more sophisticated audio and footage produced by cyber trolls and IT cells could have disastrous effects.

Some media analysts in the US have warned people there to brace for a flood of fake text, images and audio as the next presidential election approaches in 2024. As New York Times technology columnist Kevin Roose has pointed out, more worrying than the deepfakes themselves is the prospect of political leaders taking cover behind claims of deepfakery when they are caught saying or doing something.

India too goes to the polls in 2024, and all of this is already playing out here. For once, at least, India seems to be ahead of the US.
