
What is Artificial General Intelligence (AGI)?
Artificial General Intelligence is a term coined in contrast to narrow AI.
Narrow AI performs tasks in a specific domain under human direction, while AGI is meant to operate across all fields and intellectual domains.
In simple terms, the goal is a "general brain" that can learn and reason like a human to solve any task.
Unlike the narrow AI we use today for translation and recommendations, a true AGI would have to adapt to new problems, set long-term goals, and improve itself.
How far have we come?
- Reasoning Puzzles — OpenAI's o3 research model scored 87.5% on the ARC-AGI benchmark, a roughly human-level result that led some to report the puzzle "solved."
- Multimodal Era — At the end of this April, ChatGPT retired GPT-4 entirely in favor of GPT-4o, which handles text, voice, and images in a single model.
- Agent Experiments — Agentic frameworks such as Devin 2.0 (a coding-focused assistant) and Auto-GPT v0.6 combine planning, execution, and feedback loops to carry out real coding and web-browsing tasks, though heavy human intervention is still needed for error correction.
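The planning-execution-feedback loop described above can be illustrated with a toy Python sketch. All names here (`plan`, `execute`, `run_agent`) are hypothetical stand-ins, not the API of Devin, Auto-GPT, or any real framework:

```python
# Toy sketch of an agentic planning-execution-feedback loop.
# Everything here is illustrative; no real framework's API is implied.

def plan(goal):
    """A trivial 'planner': split a goal into three ordered steps."""
    return [f"step {i} of {goal}" for i in (1, 2, 3)]

def execute(step):
    """A stub 'executor' that deterministically fails on step 2,
    standing in for a tool call or code run that can go wrong."""
    return "step 2" not in step

def run_agent(goal, max_retries=1):
    """Plan, execute each step, retry on failure, and flag for a human
    when retries are exhausted, mirroring current agents' limits."""
    results = []
    for step in plan(goal):
        ok = execute(step)
        retries = 0
        while not ok and retries < max_retries:
            retries += 1
            ok = execute(step + " (retry)")  # feedback-driven retry
        results.append((step, "done" if ok else "needs human"))
    return results
```

Running `run_agent("fix the build")` marks the failing step as "needs human" after its retries are used up, echoing the observation that human intervention is still required for error correction.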
In summary, language, coding, and puzzle-style reasoning show achievements notable enough to be called 'quasi-AGI', but robotics control, physical intuition, and long-horizon projects still have a long way to go.
When might it arrive?
- Demis Hassabis (DeepMind): "within 5 to 10 years"
- Sam Altman (OpenAI): "this year or next is critical"
- Dario Amodei (Anthropic): "2026"
- Geoffrey Hinton: "5 to 20 years"
- Yann LeCun (Meta): "decades away"
The forecasting platform Metaculus puts the community median for the 'first human-level AGI' in the early-to-mid 2030s.
Industry leaders are optimistic; academics and forecasters tend to be more cautious.
Four Technical Challenges to Solve
- Continuous World Models – For robots to act reliably, models must jointly understand 3D space, time, and causality; current systems struggle even in simulation.
- Long-term Memory and Self-modification – Beyond tens of thousands of tokens, information gets blurred or 'hallucinated'; research is underway on dynamically updating parameters.
- Energy and Infrastructure – o3's high-compute mode is estimated to cost tens to hundreds of dollars per puzzle.
- Safety and Alignment – As model capabilities grow, control problems such as goal misuse and biological-risk scenarios become more prominent.
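As a loose illustration of the long-term-memory challenge above, the sketch below keeps a rolling conversation buffer under a crude token budget and compresses evicted turns into a running summary instead of dropping them. The budget, the word-count "tokenizer", and the one-word "summary" are all toy assumptions, not any model's real mechanism:

```python
# Toy rolling context buffer: old turns are evicted when a token budget
# is exceeded, with a crude summary kept in their place. Purely
# illustrative; real long-term-memory research is far more involved.

def rough_tokens(text):
    """Crude token estimate: roughly one token per word."""
    return len(text.split())

class RollingMemory:
    def __init__(self, budget=20):
        self.budget = budget   # max tokens kept verbatim
        self.turns = []        # recent turns, kept in full
        self.summary = ""      # stand-in for compressed older context

    def add(self, turn):
        self.turns.append(turn)
        # Evict oldest turns while over budget, but always keep the newest.
        while sum(rough_tokens(t) for t in self.turns) > self.budget \
                and len(self.turns) > 1:
            evicted = self.turns.pop(0)
            # Keep only the first word as a stand-in for summarization.
            self.summary = (self.summary + " " + evicted.split()[0]).strip()

    def context(self):
        """What the model would 'see': summary prefix plus recent turns."""
        prefix = f"[summary: {self.summary}] " if self.summary else ""
        return prefix + " ".join(self.turns)
```

The design choice being illustrated: rather than letting old information silently fall out of context (and inviting hallucinated recall), evicted material leaves a compressed trace the model can still condition on.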
Policy and Governance Trends
USA – On January 23, 2025, the Trump administration issued Executive Order 14179 on American AI leadership, revoking the prior administration's AI order and mandating a national AI Action Plan.
EU – The AI Act entered into force in 2024; bans on certain high-risk practices apply from February 2025, with full application planned for August 2026.
Global – The G7 and OECD are operating an international network of AI Safety Institutes to jointly develop risk-assessment standards.
Opportunities vs. Risks
- Economy: automation of R&D, design, and development with higher productivity vs. job and wage polarization
- Science: faster discovery of new drugs and materials vs. potential misuse in pathogen design
- Education and Creation: personalized tutoring and idea amplification vs. copyright confusion and a flood of low-quality content
- Security: stronger threat detection and cyber defense vs. proliferation of autonomous weapons and deepfakes
Check your own technological readiness.





