Notwithstanding the ‘qualified success’ of ChatGPT, AI and all its branches are in the early stages of evolution. It is naïve and irresponsible to call for regulation without a deep understanding of the science behind AI and its branches, each of which is a highly specialized field. In any case, the global record of regulation isn’t inspiring.

Something doesn’t add up in the barrage of stories on genAI, with multiple influential voices speaking in tandem and at a tangent. What is certainly, at the moment, a battle between two technology giants has been elevated to the level of the future of mankind!

It is just one company – OpenAI – that has achieved a qualified success with ChatGPT (across its many versions), and its makers have themselves been surprised at how well it has worked, implicitly admitting that they are not quite sure about its inner workings. The family of algorithms of which ChatGPT is an example – large language models – has been around for quite some time. Scared and threatened by its success, many, including Google, are scrambling to launch rival genAI tools. An important point to remember is that ChatGPT has been trained on internet data only up to 2021, and we cannot take it for granted that it will work equally well, let alone better, as the training data expands to the whole of what is available on the internet.

Current versions of genAI are ‘black boxes’, as even many enthusiasts have admitted, adding that future versions will be ‘open and transparent’. And yet, all kinds of claims are being made by some of the most powerful people, like Satya Nadella and Bill Gates. Nadella is betting everything on AI, and Gates has said that AI will do away with search and even entities like Amazon!

However tempting it is to believe otherwise, the simple truth is that AI is as yet in the very early stages of evolution, its effective origins traceable to the 1950s. The limited successes so far have been specific to data-intensive machine learning algorithms, computer vision and robotics, as the substantial research literature shows. Much of that success is yet to be translated to the real world. The qualified success of ChatGPT is all the more remarkable because natural language processing (NLP) has had the least success, not surprisingly, because language is inherently ambiguous, as I have already written in this space. Language and the workings of the human mind are notoriously difficult to fathom. In the last few years, many physicists from the field of quantum mechanics have been exploring consciousness. Most readily agree that we are clueless. Marvin Minsky, one of the earliest researchers in AI, published a book titled ‘The Society of Mind’ in 1986, exploring how the human mind works.

Regulate, but what?

It is therefore extremely surprising (and suspicious) to hear calls for government regulation of AI. Regulate, but what? How can you regulate something whose ways of working are far from clear, especially given the abysmal global record of government regulation? This call has been made most insistently by Sam Altman, OpenAI’s CEO. Could it be a pre-emptive move to discourage fresh entrants, so that OpenAI can consolidate its first-mover advantage – an advantage that, historically, has not always borne fruit?

Not surprisingly, Google has articulated a strong reservation about the call for government regulation, with Sundar Pichai recommending against sweeping, generalized regulation and arguing instead for specific regulations. Now, Prabhakar Raghavan, Senior Vice President, Google, has said that “rules to govern AI should be ‘based on science and a deep understanding of the subject’” (The Economic Times, 22 June 2023, page 14, print edition). Whatever Google’s interests in promoting this viewpoint, its inherent validity is beyond question. Will it happen? Unlikely, to my mind, because there will be fundamental disagreement on what constitutes ‘deep understanding’. Hark back to the days of the Enron scandal and the hastily passed Sarbanes-Oxley Act of 2002. Or the Dodd-Frank Act in banking, passed in the wake of the 2008 housing-mortgage disaster. If a well-developed sector such as banking continues to throw up one scandal after another, despite a whole gamut of regulations, what can we expect from an as yet evolving world of computing and internet technology?

AI is a broad rubric; its sub-divisions – knowledge representation, machine learning algorithms, natural language processing, computer vision, robotics – are each specific, with their own rhythm and logic, even as they work across one another. The last two are especially complex, as they bring together computing, mathematics (especially control theory), physics (especially solid-state physics), communication (including radio frequencies) and electronics (including sensors). Designing a fully functioning robotic arm is a gigantic challenge, as there are physical barriers to overcome: just search for ‘degrees of freedom’ in designing robotic arms and legs, or any other body part for that matter. Each AI sub-field calls for a different deep understanding involving a high degree of specialization. Generalization is quite simply impossible.
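For the curious reader, here is a minimal, purely illustrative sketch of what ‘degrees of freedom’ means in practice – my own toy example, not drawn from any robotics library. It models a hypothetical two-joint planar arm whose tip position is fully determined by just two joint angles (two degrees of freedom); real arms need six or more, plus control theory, sensing and actuation, which is exactly where the complexity explodes.

```python
# Purely illustrative sketch: forward kinematics of a hypothetical
# two-joint (two degrees of freedom) planar robotic arm.
# Real arms typically need six or more degrees of freedom, plus
# control, sensing and actuation, to position a tool in 3D space.
import math

def planar_arm_tip(theta1, theta2, l1=1.0, l2=0.8):
    """Return the (x, y) position of the arm tip.

    theta1, theta2 : joint angles in radians, one per degree of freedom
    l1, l2         : link lengths (arbitrary illustrative values)
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Two angles fully determine the tip of this toy arm; every extra joint
# adds a degree of freedom and multiplies the complexity of planning,
# control and calibration.
print(planar_arm_tip(math.radians(30), math.radians(45)))
```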

One day!

In his deposition to the US Senate, Sam Altman claimed that one day AI will solve many of mankind’s problems, including cancer and climate change. ‘One day’! Despite the many claims made by all and sundry, there is no substantive evidence to show that AI is anywhere close to unearthing anything new about cancer, climate change or any other major problem, as the growing body of research papers reveals. The trend of deliberately conflating computing power with intelligence continues because it suits certain entities, including governments. While listed companies are required to indicate which of their statements are projections and forward-looking, tech companies get away with making all kinds of claims without an appropriate basis.

Perhaps regulation can start with these claims! Doubtful, since some governments are themselves involved in this orchestrated narrative. Stay atma nirbhar!

Takeaways

AI is still in the early stages of evolution

Each branch of AI is evolving in its own way and at its own pace

Perhaps regulation is a premature consideration

Deep understanding of each AI branch is mandatory

Regulate claims made by all and sundry