In the realm of AI, the coming decade could be dizzying!
In a brilliant series of 8 essays, Leopold Aschenbrenner, a young researcher who formerly worked on the Superalignment team at OpenAI (the company behind ChatGPT), provides a comprehensive overview of the current situation in AI and what we might expect over the next decade.
Tracing the recent history of generative AI from GPT-2 (2019) to GPT-4 (2023), he builds his forward-looking analysis on orders of magnitude (OOMs), where each OOM represents a tenfold (×10) increase over the previous level.
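To make this OOM arithmetic concrete, here is a minimal sketch; the figure of roughly five OOMs between GPT-2 and GPT-4, and the assumption of a constant pace, are illustrative simplifications rather than exact estimates from the essays.

```python
# Each order of magnitude (OOM) is a x10 factor, so n OOMs = 10**n.
# Illustrative assumption: roughly 5 OOMs of "effective compute"
# separate GPT-2-class and GPT-4-class models (2019 -> 2023).
ooms_gpt2_to_gpt4 = 5
scale_up = 10 ** ooms_gpt2_to_gpt4
print(f"{ooms_gpt2_to_gpt4} OOMs = a {scale_up:,}x scale-up")
# -> 5 OOMs = a 100,000x scale-up

# Extrapolating the same pace over the next four years (to ~2027-2028):
ooms_per_year = ooms_gpt2_to_gpt4 / 4        # assumed constant pace
projected_extra_scale = 10 ** (ooms_per_year * 4)
print(f"Projected additional scale-up: {projected_extra_scale:,.0f}x")
# -> Projected additional scale-up: 100,000x
```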
According to him, and contrary to the currently widespread idea that AI performance has hit a ceiling, we should expect the next acceleration to match the magnitude of the last four years, which could bring us to the edge of AGI by around 2027-2028.
AGI (artificial general intelligence) is an AI that matches or surpasses human beings across all cognitive abilities (reasoning, abstraction, etc.).
Currently, with GPT-4 and similar models, we have an assistant at the level of a “smart high school student,” which no longer surprises anyone. Yet just two years ago, even the best experts did not expect such performance before 2030, or even 2040!
Drawing on three drivers of progress, Aschenbrenner explains why AGI is almost certainly taking shape in the near term (a sketch of how they combine follows the list):
- Computational power, boosted as much by ever-finer chip manufacturing processes as by the colossal budgets now being allocated (we’re talking trillions of dollars).
- The numerous, and often overlooked, algorithmic advances.
- Finally, the “unhobbling” of LLMs’ latent capabilities (internet access, better prompting, integration into our everyday tools, etc.).
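A minimal sketch of how these three drivers compound, assuming they add in log space (OOMs); the per-driver contributions below are illustrative assumptions, not the essays’ exact estimates.

```python
# The three drivers add in OOMs (log scale), i.e. multiply in raw scale.
# Per-driver contributions are illustrative assumptions, not exact figures.
drivers_ooms = {
    "raw compute (chips + budgets)": 2.0,
    "algorithmic efficiency gains": 2.0,
    "unhobbling (tools, prompting, integration)": 1.0,
}
total_ooms = sum(drivers_ooms.values())
print(f"Total: {total_ooms:.0f} OOMs = {10 ** total_ooms:,.0f}x effective gain")
# -> Total: 5 OOMs = 100,000x effective gain
```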
Such an AGI would be the equivalent of a top-level engineer or doctoral student, but quasi-omniscient: an expert commanding the state of the art in every subject at once.
By analogy, on AI’s own timescale, the commercial availability of AGI would be the Hiroshima explosion: the herald of either a monumental technological miracle or a real danger, namely super-intelligence. This would probably begin as an aggregate of millions or billions of AGIs and would correspond to the hydrogen-bomb era, potentially a million times more powerful!
The difference here is that only a few months, not decades, might separate AGI from the arrival of super-intelligence.
Everything suggests that capital, energy resources (mainly nuclear), and recent discoveries and technical means will enable us to reach this stage in under ten years. An extraordinary era of scientific discovery and economic prosperity, unlike anything humanity has ever known, could arrive, and no one is truly ready for it or fully aware of it.
As for the superalignment (the protection of humans) of this type of AI, concern is palpable in the research community, probably similar to that which prevailed in the early 1940s, when the first nuclear fission experiments foreshadowed the atomic bomb, with consequences understood by only a handful of physicists.
The question stands: how do we supervise a super-intelligence once we are no longer capable of understanding it? It would be like asking a schoolchild to judge the real intentions of a nuclear physicist! Are “Skynet” scenarios (Terminator) actually plausible?
The author is also concerned about today’s almost non-existent security in an AI world essentially made up of startups, stressing that governments must take this matter seriously, and quickly.
Finally, the integration of AGI, and then super-intelligence, into the military field will inevitably amplify geopolitical tensions. A new arms race is (already) underway. The question of sovereignty truly arises in the face of other powers, not necessarily “democratic” ones, which might gain access to this truly demiurgic technology.
All angles are addressed here: technological, economic, and philosophical, and these essays are full of references to consult for further insight. A must-read for anyone with a close or distant interest in the subject, penned by a specialist in these issues.