When machines write poems, stories and essays


Hari Kumar

Programs powered by artificial intelligence are showing amazing abilities but that is leaving some experts worried.

Start typing an email in your Gmail account or a short message in one of the texting apps on your smartphone, and you will see them suggest the next words for you, saving you time.

Ever wonder how?

Welcome to the world of self-learning computers and artificial intelligence (AI). Google and the texting apps use word-prediction programs designed to learn from each writer’s way of writing.
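How does that work under the hood? The snippet below is a toy illustration only, not Google's actual system: it simply counts which word a particular writer tends to type after each word in their past messages and then suggests the most frequent follow-up.

```python
from collections import Counter, defaultdict

def train(messages):
    """Count, for every word, which words this writer typed after it."""
    followers = defaultdict(Counter)
    for text in messages:
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            followers[current][nxt] += 1
    return followers

def suggest(followers, last_word):
    """Suggest the follow-up word this writer uses most often."""
    counts = followers.get(last_word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# A writer who usually types "talk soon" and "see you tomorrow".
model = train(["talk soon", "see you tomorrow", "talk soon", "see you later"])
print(suggest(model, "talk"))  # -> "soon"
print(suggest(model, "see"))   # -> "you"
```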

But don’t be too amazed; these are just baby steps. Besides serving as a writing aide, AI is now used to analyse medical data and help doctors with diagnosis, to drive autonomous cars, to power security measures like face recognition, and even to keep track of the health of our planet through satellites. And it keeps spreading into other areas.

The tech crowd is rightfully excited, convinced that the sky is the limit for AI. But some experts are worried. Without proper safeguards and guardrails in place, this great leap, they fear, could create mayhem.

Consider the AI writing program Generative Pre-trained Transformer 3 (GPT-3), released by OpenAI, a multi-billion-dollar artificial intelligence lab co-founded by Tesla chief Elon Musk.

Elon Musk | Photo: AP

It has crunched through a vast slice of the internet, including Wikipedia and digitised books, analysing hundreds of billions of words to mimic how we write. The program is built to predict the next word in a sequence of words, and it uses 175 billion parameters to do so.
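GPT-3 itself sits behind OpenAI’s gated API, but the same next-word-prediction idea can be tried with the freely available GPT-2 model through the open-source Hugging Face transformers library. The snippet below is an illustrative sketch, not OpenAI’s code, and assumes the transformers and torch packages are installed.

```python
from transformers import pipeline

# Load a small, freely available predecessor of GPT-3 and let it continue
# a prompt by repeatedly predicting the most likely next word.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is changing the way we write because"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])
```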

“The model can create original long-form text, such as an essay or article, in less than 10 seconds, given a one-sentence prompt. The best essays written by the model fooled 88% of people into believing that they were written by humans,” gushed the investing platform Data-Driven Investor last year.

Last year, Liam Porr, a student at the University of California, Berkeley, went further: he started a blog consisting entirely of text generated by the program.

“I would write the title and introduction, add a photo, and let GPT-3 do the rest. The blog has had over 26,000 visitors, and we now have about 60 loyal subscribers,” he wrote in his newsletter. The posts were shared on Hacker News, which is widely read in Silicon Valley, and hardly anyone suspected the text was machine-generated.

Liam Porr | Photo: Facebook @St Joseph High School

Techies enthusiastic about the technology say it will have a major impact on all kinds of writing, though some do see dark clouds ahead. Anyone, they warn, could turn out essays, poems, and books with the help of this program.

“For example, after analyzing thousands of poems and poets, you can simply input the name of a poet, and GPT-3 can create an original poem similar to the author's style. GPT-3 replicates the texture, rhythm, genre, cadence, vocabulary, and style of the poet's previous works to generate a brand-new poem,” says Twilio, a US-based cloud communications firm.

In May this year, a team of researchers at the Washington-based Center for Security and Emerging Technology (CSET) issued a report on the GPT-3 program.

The report, titled Truth, Lies, and Automation, found the program was astonishingly effective.

Still, it also made an alarming discovery: the program spins tales and makes things up even when it has the facts at its command. This, the CSET researchers noted, makes it especially suited to misinformation and disinformation campaigns.

In short, the ability to generate thousands of authentic-looking articles and essays in seconds would open up enormous opportunities for anyone with a crooked mindset and devious intentions. They – be they national governments or terrorist groups – could use it to “disrupt, divide, and distort” audiences, weaponize it for information wars, and target people based on religion, ethnicity, food choices, fashion and more.

Of course, access to GPT-3 is granted only after tight scrutiny. But that is no guarantee that AI writing programs will remain available only to a select few. Many more such programs are in the works.

As competing internet giants like Google and Facebook race ahead with AI research, further innovations are bound to follow. UK-based artificial intelligence lab DeepMind has already announced that its new language model will use an external memory: a huge database containing billions of written passages that the system consults while generating new text. This saves the firm millions of dollars, since it does not have to keep making the model itself bigger and more expensive to run; the system simply looks up the ready-made text as a kind of cheat sheet.
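How would such an “external memory” work? The sketch below is not DeepMind’s code; it is a toy illustration of the general idea: look up the most relevant passage from a ready-made database and hand it to the text generator along with the prompt, so the model itself does not have to memorise the fact.

```python
# Toy "external memory": a handful of stored passages standing in for
# a database of billions.
passages = [
    "The Amazon rainforest produces a large share of the world's oxygen.",
    "GPT-3 generates text by predicting the next word in a sequence.",
    "Satellites are used to monitor deforestation from space.",
]

def retrieve(query, database):
    """Return the stored passage sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(database, key=lambda p: len(query_words & set(p.lower().split())))

prompt = "How does GPT-3 write text?"
context = retrieve(prompt, passages)

# The retrieved passage is the "cheat sheet": it is handed to the text
# generator alongside the prompt instead of being memorised by the model.
augmented_prompt = context + "\n\n" + prompt
print(augmented_prompt)
```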

Moreover, as nations with enough resources weaponize information to disrupt rival countries (as Russia-linked campaigns did during the 2016 US elections), more such AI-driven programs are bound to arrive, since they do away with the need to recruit people to generate such content.

AI | Photo: Getty Images

The disinformation campaign unleashed in the US during the elections used social media platforms to spread hate messages based on race and ethnicity. Studies later showed they planted narratives on both sides of the divide with a clear aim of fanning the flames.

The tech companies in Silicon Valley have realised the consequences of the unbridled race to develop internet businesses. They are now trying hard to incorporate safety measures into their products and services.

Platforms like Facebook or Twitter were well-intentioned. They aimed to enable people to connect with each other and expand their horizons. But no one had foreseen the dark side: how these platforms could be used to spread hatred and divide societies.

Now the genie is out of the bottle. From the US to India, democracies across the world are facing threats as domestic and foreign players find ways to exploit the situation.

Throw into this mix the artificial intelligence programs that can, in the blink of an eye, generate thousands of articles, essays and fake studies. That could make the current “infodemic” look like a walk in the park.
