John von Neumann, an extraordinary mind and an effortless polymath, observed: ‘It is not a particular purpose, destructive race, one specific invention that creates danger. The danger is intrinsic. For progress, there is no cure.’ (Ascribed to him in The MANIAC, a book by Benjamin Labatut.)
OpenAI’s hybrid structure promoted illusions about the future of AI, but recent events leave no doubt that it will move in the direction of business interests and not some vague ‘benefit for mankind’. While there is much mystery about the ‘risks and dangers’ of AI, we continue to face risks from combinations of existing technologies.
There is a fundamental contradiction, an absurdity, at the very core of OpenAI’s structure: it was set up as a non-profit entity whose directors’ primary fiduciary duty is to uphold the founding mission, ‘to ensure that artificial general intelligence (AGI) benefits all humanity’, yet it has a for-profit arm that those same directors are expected to rein in. According to media reports, Ilya Sutskever (one of the board members) had become increasingly concerned that OpenAI’s technology could be dangerous and felt that Sam Altman (OpenAI’s CEO) should be more cautious (https://www.wired.com/story/mystery-at-the-heart-of-the-openai-chaos/).
Now, Meta and IBM have formed an alliance to promote an ‘open science’ approach to the development of AI. Surprises never cease! It is ironic that ‘spin doctors’ are at the forefront of the most modern technology. However cynical this might seem, neither ‘mankind’ nor ‘the benefit of mankind’ is definable. You cannot write a program or design an AI agent to ‘work for the benefit of mankind’. None of the ‘informed people’, essentially businesses and governments, believe in this mythology, which, fuelled by the media, has captivated a large mass of lay individuals.
This is exactly what happened decades ago when space opened up, largely as a result of the Cold War, in the early 1950s: governments proclaimed how they would harness space technology for ‘peaceful purposes’, a conveniently vague term that let them justify whatever they did. However, the militarization of space began right at the start, the ‘original sin’ in space, as Bleddyn Bowen calls it in his insightful book ‘Original Sin: Power, Technology and War in Outer Space’. I can imagine another title: ‘Original Sin: Power, Technology and War in AI’.
The open secret
It cannot be otherwise. OpenAI is funded by billions of dollars from businesses, and the alacrity with which investors, especially Microsoft, reacted to Sam Altman’s ouster and reinstated him demonstrates that the fears expressed over Altman’s aggressive pursuit of AI, ignoring any potential harm, have simply been swept aside. Meanwhile, Sutskever has recanted and now supports Altman, and Microsoft has been given a non-voting seat on the board. Microsoft has given assurances that OpenAI’s agenda will be protected, but that is just as vague as what Altman said a few months ago. Media reports suggest that the future of Sutskever’s research is unclear and uncertain.
The intrigue and mystery read like a Hollywood script, because the ‘danger’ that Sutskever flagged was the AI’s ability to solve a complex mathematical problem – media reports refer to this as Q*. Why that necessitated an urgent alarm is as perplexing as the subsequent recanting. What this means for OpenAI’s mission is a serious question, but the answer is unlikely to be favourable. Incidentally, some of the board members who have since left were women working in the area of AI ethics.
Naïve and irresponsible mythology
It is astonishing that anyone believed in this kind of hybrid structure. In a naïve Adam Smithian fashion, the belief seems to have been (note the past tense) that the pursuit of private profit would also yield benefits to society – in this case, the whole of humanity. However, there is no specific description of how, if at all, AGI will ‘benefit mankind’. Altman did say a few months ago that ‘one day’ AI will solve problems such as cancer and climate change. One day! That’s so certain and specific!
The entry of Google’s Gemini should do away with this empty rhetoric as the AI business becomes more and more competitive. It will also have another effect: it will dent the equating of ChatGPT with AI per se in the minds of most people, an impression created by extensive and continuing media coverage. There are many other competing products emerging, based on large language models (LLMs), but none will have the muscle of Google behind them.
The future of AI – frightening?
We are still far away from the kind of dangers so frighteningly captured in ‘Eagle Eye’ (https://www.youtube.com/watch?v=Ve_HGFCyCd8), although serious AI researchers have flagged many tripping points. ChatGPT is a fetching vehicle – it fetches what you ask for based on whatever is available, which means that the risk lies in the way it is used. Are these dangers any different from the demonstrated risks and dangers, including fraud, from combinations of existing technologies? The Dark Web had been around long before any AI came into the picture; in ‘DarkMarket’ (2011), Misha Glenny, who became famous with his book ‘McMafia’, describes in great detail how hackers became the new mafia.
Unless someone springs a surprise, there is no evidence of any AGI product that will benefit mankind. Hence, those aligned on the other side would use their intellectual energies more fruitfully by redefining their goal: acting as a watchdog over the real risks from existing and emerging AI products, the kind of countervailing power that John Kenneth Galbraith spoke of in economics (https://www.encyclopedia.com/social-sciences/dictionaries-thesauruses-pictures-and-press-releases/countervailing-power). Structure can be fought only with another structure.
Takeaways
The hybrid structure of OpenAI is doomed to fail
Equating ChatGPT with AI is misleading, but the conflation will diminish
ChatGPT is a fetching vehicle
The Dark Web was a serious threat long before AI