Generative AI tools like ChatGPT, if embraced the way they have been, pose a grave danger to the very process of knowing. If an aid to the process becomes the process itself, we are staring at dark times, notwithstanding all the tall claims made, with students and teachers affected most deeply.
In 1965, Richard Feynman said: “I think I can safely say that nobody understands quantum mechanics.” I am beginning to harbour the suspicion that much the same can be said of artificial intelligence. Knowledgeable and serious scholars have retreated to the background, leaving the mainstream media full of raw, inexperienced people salivating over the wonders of AI. Marketing is enjoying an open, unobstructed field. I shudder to think of the influence this will have on unsuspecting young minds.
Investor memory is (deliberately) short: India already has over 60 GenAI startups that have attracted more than $590 million (Times of India, June 23, page 17, print edition). The madness never stops. First every other company was SaaS (software as a service), netting over $100 billion in investment only to see it vanish, with many people losing their jobs; then it was AI; now it is GenAI. Tomorrow it will be something else. As long as there is money looking to be invested somewhere, this madness will never stop. Period.
Black and Bleak
We need to confront a basic issue – the black box phenomenon, widely considered characteristic of algorithms, because it is not clear what happens within an algorithm that leads to its output. GenAI is no exception. Ardent promoters of ChatGPT have claimed that future generations of GenAI will be open and transparent, while admitting that the current versions are black boxes.
ChatGPT is called Generative AI, but such naming does not bestow any magical power to grasp the inner workings of the underlying model, commonly called a large language model. These are complex models built on highly advanced mathematics in conjunction with linguistics, and they share the basic characteristics of any complex system: there are multiple interactions among the constituent elements, and these are (nearly) impossible to model. All we seem able to say is that these interactions are crucial to the output that the model (or tool) produces. We need a sound grasp of systems thinking and complexity, and of the related concept of emergent properties, to make any progress towards unravelling the mysteries of artificial intelligence. Knowledge, for instance, is an emergent property of an interactive process. Emergence is a subject in its own right, demanding serious attention.
I had promised in my last article that I would address this issue, but going beyond what I have written so far would take us into highly technical territory, which is not the function of this writing. There is a rich literature on the subject.
Undeterred by any lack of understanding, a new technique – prompt engineering – is developing, with a mushrooming of trainers and coaches in how to use ChatGPT. The superlatives used for ChatGPT are simply shocking, with one twenty-something announcing that Microsoft has done away with the office worker! Visit YouTube to gauge the enormity of the absurdity. Amidst all this is the concern voiced by V. Kamakoti, IIT Madras Director, that most IIT students are not interested in core engineering! I am reminded of the title of James Bridle’s book – New Dark Age: Technology and the End of the Future.
‘Generative’ pitfalls
My principal interest here is to raise concerns over what happens to the process of knowing when GenAI tools like ChatGPT proliferate. These tools are parasites because they feed off what is available on the internet – be it text or code. Since a great deal of what is available is fake text or morphed images, this immediately taints the ‘output’ of ChatGPT. As multiple ChatGPT-like tools keep producing ‘their’ outputs, be they texts or sets of code, those very outputs will become ‘inputs’, growing in scale as more and more people turn to such tools with their own (hopefully) specific queries, and as the data on which the tools are trained moves beyond the 2021 cut-off on which ChatGPT was trained.
We are faced with the prospect of such AI tools moving from spewing human nonsense to spewing their own nonsense, all the more so as many ChatGPT competitors emerge, circulating and recirculating dubious knowledge. They may well displace social media, led by WhatsApp University, as the leading peddler of often vicious gibberish.
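The feedback loop described above – outputs becoming the next round of inputs – can be made concrete with a toy simulation, a phenomenon researchers have begun calling “model collapse”. What follows is a minimal sketch under stated assumptions, not a model of any real system: a hypothetical “model” here is just an estimate of a distribution’s mean and spread, refitted each generation to the previous generation’s own output.

```python
import random
import statistics

random.seed(0)  # fixed seed so the toy run is repeatable

def fit_and_resample(data, n):
    # "Train" the toy model on the previous generation's output:
    # estimate mean and spread, then generate n new samples from the fit.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

n = 50
# Generation 0 stands in for human-produced material: diverse, spread = 1.
data = [random.gauss(0, 1) for _ in range(n)]
initial_spread = statistics.pstdev(data)

# Each generation trains only on the previous generation's output.
for generation in range(200):
    data = fit_and_resample(data, n)

final_spread = statistics.pstdev(data)
print(f"spread: {initial_spread:.3f} -> {final_spread:.3f}")
```

On any run, the final spread is far smaller than the initial one: the variety of the original material steadily drains away as each generation ingests only the previous generation’s output. The toy is crude, but the direction of travel is the point.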
While the humanities are exposed to the deepest trouble, as prejudices, vilification and wanton doctoring have become staple responses, I am terrified of what this means even for science and engineering. Unless you suffer from idealized notions of science and engineering, you should be familiar with the prejudices and biases peculiar to these ‘objective’ disciplines too. If you do not believe me, carefully go through Quora.
Take quantum computing, an extremely intimidating subject even for those who have formally studied physics and computing. Unless you bring patience (a basic requirement for research) and a persistently questioning mind, you will be navigating absolute gibberish. Just read what is written about superposition and its links to quantum computing and you will encounter some inane stuff. Much of it is downright misleading, as the writings of Scott Aaronson keep warning us. Superficiality is the reigning norm!
Process and substance both have to function together to yield anything worthwhile. Employing a sophisticated process on a highly questionable substance cannot produce true knowledge.
Quo vadis?
Where does this leave us? What is in store? These are some of the most important questions we face today, as we keep experiencing the horrors of social media’s distributional power. The distribution and redistribution of dubious and downright misleading ‘knowledge’ is perhaps the single greatest threat facing students and teachers.
We have a responsibility to protect our children and students, but my fear is that in the intensely competitive education environment, people will resort to anything they think will give them an edge, forgetting a basic lesson: it takes only a small step to go over the edge.
Takeaways
We need to recognize that GenAI tools are black boxes, as are most algorithms
ChatGPT-like tools are by their nature parasitical – they feed off what is available
Process and substance functioning together produce meaningful outcomes
Sophisticated processes applied to questionable substance will yield dubious outcomes
The intensely competitive educational environment is under threat