ChatGPT has taken the world by storm, much as Carlos Alcaraz has taken the tennis world, generating fear across the globe, with high-profile exits and calls for regulation to protect against the likely harms arising from such AI tools. Generative AI offspring like ChatGPT (and its potential rivals) are merely the latest stage in the development of machine intelligence, but at a scale that is staggering and frightening. Very few companies in the world have the financial prowess to build them, creating an oligopolistic competitive field.

Until the advent and popularization of ChatGPT and its variants, there were stories every day in the media about how AI is enabling this or that, how it is transforming whole fields, how it is accomplishing breakthroughs here and there, and so on. And now, in the last week, there have been high-profile exits over fears of the risks from AI, and the White House has called a meeting with Satya Nadella and Sundar Pichai of Microsoft and Alphabet, the two companies battling in this space, with Alphabet trailing Microsoft. I cannot help wondering whether an intense and expensive battle between two tech giants has taken on a larger dimension. Let us note an important facet of the mass media coverage over the last 12 months: AI is now the preferred term for anything done through algorithms. When was the last time you read, in the mass media, of machine learning algorithms or natural language processing? Companies are falling over one another to tell us what they are doing in AI, and there is one business newspaper that has been giving extensive coverage to such announcements, almost becoming their champion!

ChatGPT is the latest chapter in the acrimonious history of technology. Alibaba has already announced it is developing a rival to ChatGPT, which, by the way, is itself a generative AI, built on what the tech world calls large language models (LLMs). The media is awash with fears over the potential loss of many jobs and occupations, with one global investment bank warning of disruption to the Indian IT industry because of ChatGPT's ability to generate software code. IBM is planning to shed 7,800 jobs because of AI, according to the Times of India story by Chidanand Rajghatta, while a Goldman Sachs report warns of a potential loss of 300 million jobs in the US and Europe if 'generative AI delivers on its promises'. Meanwhile, there are already people setting up training shops in the use of ChatGPT.

The articulation of fears over generative AI began when Geoffrey Hinton, considered the Godfather of AI, quit Google so that he could speak freely about the risks of unrestrained AI (The Times of India, May 03, 2023, page 2). The next day, Michael Schwarz, Microsoft's chief economist, warned that AI 'will cause real damage', according to Bloomberg, as reported in The Economic Times (May 4, 2023). While Hinton fears job losses in paralegal work, translation, personal assistance, and other rote tasks, his immediate concern is fake news. Schwarz, too, fears interference in elections.

Real fear over the fake

The question is: what is new? The 2016 US Presidential election saw not just fake news but much more, through the active use and misuse of social media channels like Facebook and Twitter, both of which admitted to hosting many fake accounts. Ever since techniques to manipulate images were developed, the fake has flourished. That too isn't new. Anyone who has seen the 1993 film 'Rising Sun', starring Sean Connery and Wesley Snipes, will recall the scene where two heads are 'chopped' off and placed on each other's necks; you couldn't tell the difference unless you examined the footage with special equipment. In the 30 years since, the 'deepfake' has become so much a part of our lives that we are no longer surprised when a picture is shown to be fake. Just as common is fake news.

The use and misuse of all kinds of algorithms across diverse areas, such as school and college admissions and parole hearings, have also been widely documented, revealing bias against certain groups of people. The New York Times has carried a story (reported in the Times of India, May 3, 2023) that Israel is using facial recognition software to power 'automated apartheid', tracking and restricting the movements of Palestinians. Interference in the lives of people, governments, and society, aided by technology, is now so common that it has stopped shocking us, immune as we are to the daily barrage of stories that keeps destroying whatever little faith we may still hold in the human world.

Let us face it. Fake news is here to stay and should not worry us at all. What should worry us is the mindless and uncritical acceptance of, and response to, such news as real, giving rise to a new function of 'fake news audit'. What we need is a behavioural change in people, but that might just not happen at all. The continued success of WhatsApp University's authoritative pronouncements on almost every topic is simply shocking. To me, this alone rubbishes all the talk about how we live in an information age, how data is the basis of decisions, and such gibberish. We should modify the old English saying 'An idle mind is a devil's workshop' to 'An idle mind is a fake news workshop'. Obviously, I understand the enormity of the risks involved here: we can have riots, vengeful acts, arson, and doctored election results, all engineered by fake news. But I don't see a solution, as we live in an atmosphere of intellectual laziness compounded by the basest of feelings. I doubt there is an organized solution. We should warn whomever we can, in whatever way possible, about the pitfalls of embracing institutionalized ignorance. Government regulation is hardly a solution, because all that such an intervention will do is make governments stronger, and we certainly don't want that.

Surprise, surprise, surprise

There is a sense of urgency (and doom), as Hinton perceives a near-immediate threat: what he thought would take 30 to 50 years to come to fruition is here and now. 'The idea that this stuff could actually get smarter than people.' Even OpenAI, the creator of ChatGPT and its latest model, GPT-4, has itself expressed surprise at how things have turned out.

As I mentioned earlier, ChatGPT and its variants belong to a family of generative AI built on what are called large language models (LLMs), which study vast amounts of written (and spoken) language to find patterns. You have probably already used a simpler form of the idea: the predictive text that suggests your next word in WhatsApp and similar apps. LLMs carry this much farther because of the complex mathematics underlying them. LLMs may be thought of as complex systems, and one of the distinctive characteristics of complex systems is the impossibility of (completely) modelling all the interactions among the system's various elements and, more importantly, their outcomes. In simple language, we don't know what is happening as these interactions take place. In algorithmic language, this is a black box!
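To make the pattern-finding idea concrete, here is a minimal sketch of the simplest possible 'language model': a bigram model that counts which word tends to follow which in a sample text, and then predicts the likeliest next word. This is my own toy illustration, not OpenAI's code; real LLMs replace this raw counting with billions of learned parameters, but the underlying task of predicting the next word from what came before is the same.

```python
# A toy "language model": count word pairs (bigrams) in a sample text,
# then predict the most likely next word. Real LLMs do this same job at
# vastly greater scale, with learned parameters instead of raw counts.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" -- a most common successor of "the" here
print(predict_next("sat"))  # "on" -- "sat" is always followed by "on" above
```

Everything that separates this toy from ChatGPT, namely the billions of parameters and the training mathematics in between, is precisely the part that behaves like the black box described above.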

Built over several years and involving millions of man-hours, ChatGPT's success is the result of complex mathematics and computing power. To recall, ChatGPT was trained on internet data up to 2021. Here, 'large' means billions of bytes of data; several tens of millions is actually 'small'. Whatever its consequences, building generative AI is extremely challenging: we are talking of millions of man-hours and billions of dollars. Microsoft, an investor in OpenAI (which created ChatGPT), is investing an additional $10 billion, which its strong and continuous positive cash flows allow. Alphabet and Alibaba can afford this too, but the list is short. My point is that we should be wary of anyone claiming to be developing a rival to ChatGPT.

I will discuss LLMs in detail in my next article. Meanwhile, stay true to the true.

Takeaways

Generative AI is the new field of competition among Big Tech

The 'fake' is here to stay

Behavioural change is the only protection but unlikely to happen

Government regulation is no guarantee

The success of ChatGPT has surprised many, including its promoters

Large language models need to be understood