The global media is awash with talk of responsible AI, and governments are formulating guidelines for the ‘ethical use of AI’, but I have chosen to write on (the need for) responsible media, as so much of the coverage is steeped in ignorance and hype.

Two observations to begin with.

Every age and time has its own blind beliefs.

The duty of a journalist is to ask.

A trend that began during the pandemic is intensifying at an alarming rate – mainstream media (and quasi-specialist media too) covering scientific research from educational and research institutions. In fact, such coverage became much sought after, as every institution was keen to be projected as being at the frontlines of research. It was amusing (and frightening) to read mainstream media coverage of even the mathematical models of Covid’s disastrous impact, even though the technical dimensions of the research were typically beyond the ground normally traversed by the mass media.

This trend has continued with the coverage of large language models (LLMs) as used in genAI, and of AI in general. It is no longer amusing but simply frightening to read of the supposed breakthroughs of chatGPT and ‘AI’ and of how pervasive their influence has become. The chasm between the technology and mass media coverage of it is deepening; it does not bode well.

IITians and chatGPT

The Indian media reported a few days ago that around 30% of IITians graduating this year haven’t been placed, and a leading daily newspaper quoted an ‘expert’ attributing this to ‘the effects of the workings of chatGPT’, without questioning him on the ‘how’. ‘IITians’ comprise students from several engineering streams such as Mechanical, Electrical, Electronics, Chemical, Computer, Aeronautics and Civil, other technical disciplines such as Engineering Physics, Operations Research and Applied Mathematics, and various non-engineering streams including Management, Public Policy, Economics, Applied Psychology and so on. The journalist quoting the ‘expert’ didn’t ask how each of these different streams is affected by chatGPT, thus reinforcing the current environment in the media, which is completely uncritical of ‘expertise’.

The irony is that two months prior to this news story, IIT Mandi Director Laxmidhar Behere called chatGPT “a good innovation but it is as dumb as possible”, as it “doesn’t understand the concept”. As we ought to know, it simply gathers information from voluminous data and puts it together. In his words, it “can probably give a finishing touch, making things much better, this and that. But the core engineering discipline cannot be replaced by AI or AI technology. So this is a hype. I cannot tell you how long this will continue, but I am pretty sure every phase has its own end” (https://www.edexlive.com/news/2024/Mar/05/chatgpt-is-a-good-innovation-but-it-is-as-dumb-as-possible-iit-mandi-director-40647.html). How, then, can chatGPT take away the jobs of IITians?

Of course, some jobs are at stake, but the key aspect to grasp is the extent of automation possible or already accomplished in any given task, which becomes clear from OpenAI’s own study and its findings – https://www.livemint.com/news/world/these-jobs-are-most-at-risk-due-to-chatgpt-as-per-openai-study-11679358453267.html. As Mint reports (March 21, 2023), “The study also discovered that professions that are heavily reliant on scientific and critical thinking skills are less prone to automation. Conversely, jobs that require proficiency in programming and writing are more susceptible to being automated”. This too is rather simplistic, but I will address that in a later article.

Hype and reality

The Reuters Institute and Oxford University have just released the findings of a study they conducted across six countries involving 12,000 participants. Dr Richard Fletcher, the report’s lead author, remarked that there was a “mismatch” between the “hype” around AI and the “public interest” in it (https://www.bbc.com/news/articles/c511x4g7x7jo). Incidentally, many media outlets have reported on the study, which can be accessed at https://reutersinstitute.politics.ox.ac.uk/what-does-public-six-countries-think-generative-ai-news.

Linus Torvalds, the creator of Linux, talking to Dirk Hohndel, Verizon’s Head of Open Source Program Office, found the hype around AI “hilarious to watch. Maybe I’ll be replaced by an AI model!” Hohndel thinks most AI today is “autocorrect on steroids.” Torvalds summed up his attitude as, “Let’s wait 10 years and see where it actually goes before we make all these crazy announcements.” (https://linux.slashdot.org/story/24/04/19/1944235/linus-torvalds-on-hilarious-ai-hype).

In the field of medicine, while tall claims have been made, the ground reality is something else. In a study indexed on PubMed, “Preliminary Evidence of the Use of Generative AI in Health Care Clinical Services: Systematic Narrative Review” (March 2024), the authors reviewed the literature and arrived at this conclusion: of 161 articles, 141 (87.6%) reported using GenAI to assist services through knowledge access, collation, and filtering. GenAI was used for disease detection (19/161, 11.8%), diagnosis (14/161, 8.7%), and screening processes (12/161, 7.5%) in the areas of radiology (17/161, 10.6%), cardiology (12/161, 7.5%), gastrointestinal medicine (4/161, 2.5%), and diabetes (6/161, 3.7%) (https://pubmed.ncbi.nlm.nih.gov/38506918/). As we can see, barring knowledge access, collation, and filtering, its use in each area is a small percentage.

One of the tallest claims made recently is that AI can help find a cure for Alzheimer’s, which, incidentally, is a cottage industry in the US: there is a ‘new’ diagnosis every quarter, except that such ‘diagnoses’ are invariably tied to a therapy someone intends to market. Underlying such claims is the assumption that we know how the human brain works, when, sadly, we don’t, despite the enormous, continuing and sincere research in neuroscience. According to Behere, chatGPT cannot explain scientific concepts and possibilities “because we ourselves don’t understand our own cognitive processes. For example, we do not have an answer about why one person is able to understand Math and why another person doesn’t. So how can we create a system when we very poorly understand our own cognitive processes?” (https://www.edexlive.com/news/2024/Mar/05/chatgpt-is-a-good-innovation-but-it-is-as-dumb-as-possible-iit-mandi-director-40647.html)

Just as I finished writing, I came across this excellent article on how technology can block human potential – https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011. I couldn’t think of a better way to end.