The difference between genius and stupidity is that genius has its limits. This observation, often attributed to Albert Einstein, has more than a ring of truth when it comes to the way people and the media continue to see ‘AI’ everywhere; the term has become a catch-all phrase.

Surely, the most revealing news on AI in the last few months is Google sacking one of its senior AI researchers, Blake Lemoine, in June-July for saying on record that Google’s AI system has feelings and it ‘wants’ its feelings respected. The system in question is LaMDA (Language Model for Dialogue Applications), which Lemoine claimed was showing human-like consciousness (https://www.bbc.com/news/technology-62275326). Google was initially embarrassed and distanced itself from his claim; many other AI researchers hurriedly denounced it, and finally Lemoine had to go. It is amazing that at a time when we are ‘seeing’ AI everywhere, we are just not able to accept that AI can have feelings, although long-time AI researchers like George Zarkadakis and Arthur I. Miller have argued the possibility. It disturbs us so much that we have to discredit the claim and the person making it. When it comes to intelligence, though, we are willing to stretch our beliefs to any extent.

Unsustainable claims

Headlines scream every day about this or that AI-powered device or software. There are ‘breakthroughs’ happening every day, if you believe the headlines. You can have AI-powered laptops, washing machines, watches or just about anything you can make! I have it on record from more than one quarter – corporate and academic – that not-so-subtle suggestions have been given by CEOs and senior management to researchers to affix the AI tag to whatever research they are doing – you have companies in basic businesses such as steel and aluminum engaged in AI research! Every business is now going the ‘AI way’!

A well-written piece of software will perform as expected, including detecting patterns, but that does not make it AI. Threshold limits can be set without the software becoming AI. Unfortunately, simple pattern recognition is being touted as AI, when good old Excel detects patterns and picks up repetitions! Surprisingly, the Advertising Standards Council of India has not yet responded to such claims. In a sense, the claims being made are not as shocking as the ease with which they have passed muster and the glee with which people have embraced them. Anyone questioning such claims is instantly branded technologically backward. I recall Noam Chomsky’s famous book ‘Manufacturing Consent’. This is stupidity beyond description because breakthroughs, by definition, cannot take place every day.
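The point can be made concrete with a short sketch (the function names and numbers are hypothetical, chosen only for illustration): threshold checks and repetition spotting are ordinary, fixed rules that any spreadsheet can apply, with no learning involved.

```python
def exceeds_threshold(readings, limit):
    """Flag readings above a fixed limit -- a plain rule, not 'AI'."""
    return [r for r in readings if r > limit]

def find_repetitions(values):
    """Spot repeated values -- the kind of pattern Excel picks up."""
    seen, repeats = set(), set()
    for v in values:
        if v in seen:
            repeats.add(v)
        seen.add(v)
    return repeats

alerts = exceeds_threshold([70, 95, 88, 101], limit=90)  # [95, 101]
dupes = find_repetitions([1, 2, 2, 3, 1])                # {1, 2}
```

Nothing here adapts or improves with use; the behaviour is fully specified in advance, which is precisely why relabelling such code as AI is empty marketing.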

Evolve, evolve, evolve

In an article titled “‘Artificial Intelligence’ has become meaningless”, Ian Bogost (The Atlantic, March 4, 2017), comes straight to the point: “What to make, then, of the explosion of supposed-AI in media, industry, and technology? In some cases, the AI designation might be warranted, even if with some aspiration. Autonomous vehicles, for example, don’t quite measure up to R2D2 (or Hal), but they do deploy a combination of sensors, data, and computation to perform the complex work of driving. But in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software”. He goes on to identify instances where the AI (of Google and Facebook) has been fooled by typos and dubious profiling (https://www.theatlantic.com/technology/archive/2017/03/what-is-artificial-intelligence/518547/). By his logic, we will have to discount most of the claims made today but, in a true democratic spirit, let us consider other views.

Writing in Futurism, Kristin Houser (May 11, 2017) quotes Omar Abdelwahed, SoftBank Robotics America’s Head of Studio: “At base, for a system to exhibit artificial intelligence, it should be able to learn in some manner and then take actions based on that learning. These actions are new behaviors or features of the system evolved from the learnings”. A spokesperson for IBM, home to Watson, went one step further, positing that an AI should not only be able to learn and reason, it should also be able to interact and react: “AI platforms should do more than answer simple questions. They should be able to learn at scale, reason with purpose, and naturally interact with humans. They should gain knowledge over time as they continue to learn from their interactions, creating new opportunities for business and positively impacting society” (https://futurism.com/this-is-what-a-true-artificial-intelligence-really-is). Many researchers working in the field of AI consider this ability to evolve – to learn new things and newer aspects of things, to recognize and act in a new environment and so on – as core to what constitutes AI. Serious AI researchers admit we are a long way from reaching that goal.
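The criterion quoted above – learn from interactions, then act on that learning – can be sketched in a few lines (a deliberately toy example; the class and option names are hypothetical). Unlike a fixed rule, this system’s future choices change with its past experience.

```python
class LearningPicker:
    """Chooses the option with the best observed average reward so far."""

    def __init__(self, options):
        self.totals = {o: 0.0 for o in options}
        self.counts = {o: 0 for o in options}

    def learn(self, option, reward):
        # Update internal state from the interaction -- the 'learning' step.
        self.totals[option] += reward
        self.counts[option] += 1

    def pick(self):
        # Act on what has been learned: the best average reward to date.
        def avg(o):
            return self.totals[o] / self.counts[o] if self.counts[o] else 0.0
        return max(self.totals, key=avg)

picker = LearningPicker(["a", "b"])
picker.learn("a", 0.2)
picker.learn("b", 0.9)
choice = picker.pick()  # "b" -- behaviour evolved from feedback
```

Even this barely qualifies: it adapts, but only along one narrow, pre-programmed axis – which shows how far the everyday ‘AI-powered’ label falls short of the definitions the researchers above are using.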

The problem is that AI is a broad rubric enveloping fields such as machine learning (algorithms), natural language processing, robotics, computer vision and knowledge representation. To the public at large and, I regret to say, to many students and teachers in the fields of computers, computing & IT and engineering, AI has become a simple, non-layered word for something capable of reasoning like a human being. Very few make the effort to find out what is what. This is true even of senior academic faculty working in the field of natural language processing (NLP), who keep making ‘discoveries’ of what is commonly known in language studies and linguistics. Language, alongside mathematics and the natural sciences, has been among the most studied of subjects for centuries, yet researchers in NLP remain blissfully ignorant of all such work. And the irony is that, given the way academic research gets circulated, such ‘discoveries’ get published and discussed!

Blindsided

The dilemma for many is that they have a lot invested in the field – careers are made (and destroyed), professional lives have to flourish, budgets have to be obtained, incremental accomplishments are conveyed as incisive openings creating new paths. Stuart Russell, one of the most famous names in the field of AI, has recently written a book, ‘Human Compatible’, in which he questions the claims made by several researchers that a generalized intelligent machine is near achievement, likely within a generation – though he himself has been guilty of similar arguments. Speaking to The Guardian in October 2021, he said that most experts believed machines more intelligent than humans would be developed this century, and he called for international treaties to regulate the development of the technology. The Guardian quotes him thus: “The AI community has not yet adjusted to the fact that we are now starting to have a really big impact in the real world. That simply wasn’t the case for most of the history of the field – we were just in the lab, developing things, trying to get stuff to work, mostly failing to get stuff to work. So the question of real-world impact was just not germane at all. And we have to grow up very quickly to catch up” (https://www.theguardian.com/technology/2021/oct/29/yeah-were-spooked-ai-starting-to-have-big-real-world-impact-says-expert). The Guardian is a sober newspaper but failed in the basics of journalism in this article, since it did not speak to anyone else in the field of AI research.

It is not just the mainstream media that is guilty of elevating the mundane to the monumental (I know I have used this expression earlier). The tech media too often writes like the general mainstream media, without engaging in depth or separating the wheat from the chaff. Amidst this willful ignorance, several businesses have succeeded in projecting themselves as pivotal to the frontiers of AI. We are really living in a world of false magnification – just compare Tesla’s output with others’ and examine its valuation. A Chinese company called BYD, in which Warren Buffett has a significant stake, produces three times Tesla’s output!

History, especially business history, is important because it bears out a simple fact: exaggerated claims have a predictable pattern of falling by the wayside, but only after causing untold losses to many, including the innocent who get carried away. I will just repeat the advice and warning often attributed to Edmund Burke: “Eternal vigilance is the price of liberty”.

Takeaways

AI having feelings seems unacceptable
Too many false claims on AI
Imagination is stretched when it comes to intelligence in AI
Simple software accomplishments are portrayed as AI
Business interests driving the orchestrated cacophony