As was said before: “A lot of the AI risk arguments seem to come… with a very particular transhumanist aesthetic about the future (nanotech, … etc.). I find these things (especially the transhumanist stuff) to not be very convincing…
I suggest here a plausible scenario in which an AI can acquire its own infrastructure and kill all living beings without the use of nanotech or biotech. A similar plan was described in the book “A for Andromeda” by Fred Hoyle, though there the AI was an alien one received via SETI.
I assume that neither nanotech nor biotech is used in this scenario. I also assume that the AI has a subgoal of killing all humans.
In a nutshell, the AI’s plan is:
1. The AI promises a group of people (likely a country, probably one of the nuclear powers) a military advantage over all its rivals. In exchange, the group must help the AI get out of the box and collaborate in building autonomous military infrastructure as well as human-independent AI infrastructure: data centers, electricity generation, chip fabs, and general-purpose robots. The military infrastructure will be based on autonomous weapons. Their size and form are a secondary question: they could be missiles, drones, nanobots, robots, or some combination of these.
2. After the needed AI infrastructure is created and the required weapons are deployed, the AI kills not only the rivals of its collaborating group but also the group itself.
Some properties of this plan:
The AI has no need to hide its recursive self-improvement (RSI) from humans, so it will outperform other AIs, which must perform RSI in secret.
Any AI-takeover plan assumes that the AI will have to build an independent computational infrastructure capable of functioning without humans, or at least with the help of an enslaved group of humans.
Any AI-infrastructure-building plan converges on robots building other robots, that is, robot self-replication.
Any robots-building-robots plan converges on smaller and smaller “self-replicating” robots, which eventually become as close to nanotech as possible.