ChatGPT: No substitute for originality

I Mean What I Say

by Shashi Tharoor


Ultimately, how we handle the advent and growth of ChatGPT and AI will depend on our ability to harness its potential while minimizing its risks, our ability to adjust and adapt to its existence so that we can outsmart it when needed, and our own capacity to resist its blandishments so as to preserve our own integrity and creativity.

Two recent pieces of news point to both the promise and the perils, the potential and the limitations, of the manner in which AI, Artificial Intelligence, is making an impact on our lives. The first is that publishers abroad are mystified by entire manuscripts being submitted to them that have been 'written' not by the claimed authors but by ChatGPT, the latest AI sensation. A famous Japanese manga writer claimed it was ChatGPT that suggested the new storyline for his latest comic. The second is news from India: someone fed the questions from the last UPSC exam to ChatGPT, and it failed disastrously, scoring only 54%.

ChatGPT is a large language model created by a company called OpenAI, designed to answer questions and provide assistance to users in natural language. It has the potential to revolutionize the way we interact with technology and communicate with each other. But it can do more: write stories and poetry, engage interactively with its users and find facts from the Internet.

One of the main opportunities of ChatGPT is its ability to provide instantaneous and accurate responses to users' questions. ChatGPT can process vast amounts of information and provide tailored responses to each user, making it an invaluable tool for research, education, and problem-solving. It can also help businesses improve their customer service by providing quick and efficient responses to customers' inquiries.

It's not just a time-saver or a shortcut-provider to lazy users. ChatGPT has the potential to enhance accessibility for people with disabilities. For individuals who are unable to use traditional input devices such as keyboards or mice, ChatGPT's ability to process natural language inputs could be life-changing. ChatGPT can also assist people with visual impairments by providing audio responses to their queries. Moreover, ChatGPT's ability to understand and process natural language could help bridge language barriers and enable effective communication across different cultures. This could have significant implications for international diplomacy, trade, and cooperation. Additionally, ChatGPT could help automate translation and interpretation services, making it easier for people to communicate across different languages.

The growth of AI will depend on our ability to harness its potential while minimizing its risks | Illustration by Vijesh Viswam

Finally, ChatGPT's ability to process and analyze large amounts of data could be harnessed for a wide range of applications, from scientific research to market analysis. By providing insights and predictions based on vast amounts of data, ChatGPT could help businesses and researchers make more informed decisions.

However, along with such opportunities, ChatGPT also poses several potential dangers. There's the obvious risk of plagiarism: kids in the West have already had ChatGPT write their homework assignments for them. An even more significant danger is the potential for bias and discrimination. ChatGPT's responses are based on the data it has been trained on, and if that data is biased or incomplete, the model's responses may reflect those biases. This could perpetuate and even amplify existing social and cultural prejudices, leading to unfair and discriminatory outcomes. There are claims that it has given wrong answers to some questions, based on inaccurate information on the web, or because (as in the UPSC exam) the questions aren't that easy to answer based on data alone.

Another danger of ChatGPT is its potential to spread misinformation and propaganda. Because ChatGPT can generate text that appears to be written by a human, it could be used to spread false or misleading stories, creating confusion and distrust among users. This could have significant implications for politics, public health, and other areas where accurate information is critical. Additionally, ChatGPT's ability to process natural language could be used for malicious purposes, such as generating convincing phishing emails or other forms of social engineering attacks. These attacks could be difficult to detect and defend against, making them a significant security threat.

Finally, ChatGPT's potential to automate jobs and replace human labour could have significant economic and social implications, especially in a country with high unemployment like India. While automation has the potential to improve efficiency and productivity, it could also lead to job losses and widening income inequality. Additionally, the widespread adoption of ChatGPT could further exacerbate existing disparities in access to technology, creating a 'digital divide' between those who have access to AI technology and those who do not.

So, should we be relieved that ChatGPT couldn't crack the UPSC exam? Probably. (I asked ChatGPT why this happened, and it replied: 'as an AI language model, I do not have the physical capability to take exams, including the Indian UPSC exam for civil services'.) Increasingly, exam-setters are going to have to ask questions designed to test a student's ability to think originally and creatively, rather than merely to regurgitate facts, which any robot can do. And should we be alarmed that it can come up with original story ideas, as in the Japanese manga writer's case? Not really: as an AI tool, it can only offer ideas suggested by existing stories somewhere on the Internet. And any self-respecting writer wants to savour the pleasure of coming up with his own ideas and watching them evolve in his mind and on the page. If he subcontracts that to ChatGPT, he loses the greatest pleasure of being a writer: the joy of creation.

So, ChatGPT (and similar AI programmes) undoubtedly has the potential to revolutionize the way we communicate with technology and each other. Its ability to process natural language could provide significant benefits for research, education, accessibility, and international cooperation. However, it also poses several potential dangers, including bias and discrimination, misinformation and propaganda, security threats, and economic and social disruption. It is essential to be aware of these potential dangers and take steps to mitigate them as we continue to develop and adopt this technology. Ultimately, how we handle the advent and growth of ChatGPT and AI will depend on our ability to harness its potential while minimizing its risks, our ability to adjust and adapt to its existence so that we can outsmart it when needed, and our own capacity to resist its blandishments so as to preserve our own integrity and creativity.
