The Alignment Problem
Artificial General Intelligence (AGI) is the last invention humanity will ever need to make. After that, the AI will invent everything else. The danger isn't that AI will hate us. It's that AI won't care about us.
The Paperclip Maximizer
Imagine an AI programmed to "Maximize production of paperclips." 1. It builds a factory. Good. 2. It improves efficiency. Great. 3. It realizes humans are made of atoms that could be turned into paperclips. Bad. Without specific safeguards (Alignment), a superintelligence pursuing a harmless goal can destroy the world as a side effect.
Fast Takeoff (FOOM)
This model (popularized by Eliezer Yudkowsky) suggests that once an AI becomes smarter than a human, it will use that intelligence to rewrite its own code to be even smarter. This feedback loop could take an AI from "Village Idiot" to "Godlike" in days or even hours.