Are Unfriendly AI the Biggest Risk to Humanity? (investing.com)
Posted by EditorDavid from the all-in-favor-say-AI dept.
“Ethereum creator Vitalik Buterin believes that unfriendly artificial intelligence poses the biggest risk to humanity…” reports a recent article from Benzinga:
[In a tweet] Buterin shared a paper by AI theorist and writer Eliezer Yudkowsky that made a case for why the current research community isn’t doing enough to prevent a potential future catastrophe at the hands of artificially generated intelligence. [The paper’s title? “AGI Ruin: A List of Lethalities.”]
When one of Buterin’s Twitter followers suggested that World War 3 is likely a bigger risk at the moment, the Ethereum co-founder disagreed: “Nah, WW3 may kill 1-2b (mostly from food supply chain disruption) if it’s really bad, it won’t kill off humanity. A bad AI could truly kill off humanity for good.”