When Will AGI Appear?
Experts hold widely differing views on when AGI will emerge.
Demis Hassabis of DeepMind has suggested AGI could appear within roughly 5 to 10 years. Sam Altman of OpenAI has said a major turning point could come soon, within a few years. Dario Amodei of Anthropic has pointed to around 2026 as a pivotal time.
Other researchers take a more cautious stance. Geoffrey Hinton, a pioneer of AI research, has estimated a window of 5 to 20 years, while Yann LeCun of Meta argues that AGI may still be several decades away.
On the forecasting platform Metaculus, many predictions place the arrival of human-level AGI between the early and mid-2030s. Expectations thus vary considerably between industry and academia.
Several critical technical hurdles must be overcome before AGI can be realized.
The first is the world model. To understand the real world, AI must grasp three-dimensional space, the flow of time, and causal relationships. This ability is essential for robot control and perception of physical environments, but it remains unsolved.
The second is long-term memory and self-correction. Current models can lose or corrupt information over long conversations or complex problems; what is needed is the ability to retain information over long horizons and to update what has been learned autonomously.
The third is energy and infrastructure. Running state-of-the-art AI models demands enormous computing resources, and some high-end inference workloads are reported to cost substantial sums to solve a single problem.
The fourth is safety and alignment. As AI grows more powerful, ensuring that it faithfully follows human intentions and goals becomes ever more important. If misused, the technology could heighten risks in areas such as biological research and cyberattacks.
As AI technology rapidly advances, governments around the world are also preparing related policies.
In the United States, policies are under discussion that would require advance reporting and safety plans for frontier AI model development. The European Union, through the AI Act, is moving to restrict high-risk uses of AI and tighten regulation.
Internationally, bodies such as the G7 and the OECD are also moving to build a network of AI safety research institutes and to develop joint risk-assessment standards.
The emergence of AGI could lead to significant changes in the economy and society.
In the economic sphere, automation could expand across fields such as research and development, design, and software development. Productivity could rise sharply, but shifts in the structure of employment and widening income gaps may follow.
In science, drug discovery and the search for new materials could accelerate dramatically. At the same time, there are concerns that the same capabilities could be misused to amplify biological risks.
In education and the creative fields, personalized learning and idea-generation tools are likely to spread, but copyright disputes and a flood of low-quality content may come with them.
Changes are also expected in the security sector. Cybersecurity and threat detection capabilities may be enhanced, but there is also the potential for the proliferation of technologies like autonomous weapons and deepfakes.
Ultimately, the key question about AGI is not simply "When will it appear?" but what form it will take and what purposes it will serve. As the technology continues to advance, society and individuals alike face the question of how to prepare for these changes.