The race against time: can humanity successfully control artificial intelligence before it leads to a worldwide catastrophe?
This article has been translated from its original publication at https://vas3k.blog/blog/ai_alignment/
Welcome to the year 2023, a time when artificial intelligence has once again captivated humanity's attention. People across the internet are engaged in a race to automate various tasks using ChatGPT and similar AI models, while tech enthusiasts like Elon Musk are investing substantial resources into developing "real" AI. This new breed of AI, known as Artificial General Intelligence (AGI), is designed to possess the ability to learn, evolve, and solve complex problems autonomously, potentially surpassing human capabilities.
In 2015, Tim Urban, the author of the blog "Wait But Why", eloquently explained in his article "The AI Revolution" why we tend to underestimate the pace at which machine intelligence could surpass our own: we perceive progress as a linear trajectory, extrapolating from our past experiences, while technological change tends to accelerate.
However, it is crucial to recognize the potential risks associated with AGI development. While AGI has tremendous potential for solving global challenges, there is also a concern that its rapid advancement could lead to unintended consequences or a global catastrophe if not properly managed.
We often find ourselves feeling uncertain about technological progress due to its cyclical nature, characterized by alternating periods of hype and general disappointment. Initially, we become enamored with a new technological toy, only to find ourselves disillusioned within a year or two, believing that it has failed to deliver any significant advancements and has merely introduced new problems.
However, those who have witnessed multiple cycles of technological development can appreciate that new waves arrive more frequently and with greater force. And the upcoming wave has the potential to usher humanity into a new era, one where our own intelligence may no longer reign supreme.
Presently, GPT models excel at simulating intelligence, but they are still far from possessing genuine intelligence. While generative models have initiated a wave of enormous neural networks that were previously beyond the reach of human computing resources, fundamentally they remain "dumb" text generators with no memory of their own.
The apparent dialogue between you and ChatGPT is an illusion: the model receives the full history of previous messages as contextual input and generates each response from a fresh starting point, retaining no state between turns.
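This statelessness can be made concrete with a minimal sketch. The `model` function below is a hypothetical stand-in for a text generator, not any real API; the point is that the "memory" of the conversation lives entirely in the transcript the client rebuilds and resends on every turn:

```python
def model(prompt: str) -> str:
    """Hypothetical stateless text generator: it keeps no state
    between calls and sees only the prompt it is given."""
    return f"[reply to a {len(prompt)}-char prompt]"

def chat_turn(history: list[dict], user_message: str) -> list[dict]:
    """Append the user message, flatten the WHOLE transcript into one
    prompt, and generate a reply from scratch."""
    history = history + [{"role": "user", "content": user_message}]
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    reply = model(prompt)  # the model sees only this prompt, nothing else
    return history + [{"role": "assistant", "content": reply}]

history: list[dict] = []
history = chat_turn(history, "Hello!")
history = chat_turn(history, "What did I just say?")
# On the second turn the model can "recall" the first message only
# because the client resent it inside the prompt.
```

Real chat APIs work the same way in spirit: each request carries the accumulated message list, and the model starts cold every time.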
This current state of AI falls considerably short of what we traditionally perceive as true "intelligence."
However, many researchers in the field of AI believe that we will develop Artificial General Intelligence (AGI) within the coming decades. This confidence is reflected on Metaculus, a popular forecasting platform, where predictions are even more optimistic: the median forecast there suggests AGI could emerge as early as 2026, with the 75th percentile estimate placing it at 2029.
Today, I will refrain from delving into the "10 reasons why you're using ChatGPT wrong" genre of hype pieces and instead explore the potential implications of creating highly advanced artificial intelligence.
Should we succeed in developing a powerful artificial intelligence, one important question is whether it will possess its own goals and motivations. If it does, and if it begins pursuing those goals, there is a legitimate concern that it might disregard minor obstacles in its path, including humans, whom it might perceive as constrained by biological brains, ineffective moral systems, and legal frameworks. In such a scenario, it is worth considering what our options would be and what the current perspectives on this question are.