Price: $30.00
Authors: Eliezer Yudkowsky and Nate Soares
Publisher: Little, Brown and Company
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next. For decades, two signatories of that letter--Eliezer Yudkowsky and Nate Soares--have studied how smarter-than-human intelligences will think, behave, and pursue their objectives.
Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us--and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close. How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? ...
Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
Review Quotes:
"Silicon Valley calls it inevitable. Your survival instinct knows better. Humanity is funding its own delete key--an unblinking intelligence that never sleeps, never stops, perfectly indifferent. Wonder-time is over; this is our warning. Read today. Circulate tomorrow. Demand the guardrails. I'll keep betting on humanity, but first we must wake up."-- R.P. Eddy, former director, White House, National Security Council
"You will feel actual emotions when you read this book. We are currently living in the last period of history where we are the dominant species. Humans are lucky to have Soares and Yudkowsky in our corner, reminding us not to waste the brief window of time that we have to make decisions about our future in light of this fact."-- Grimes
"This book offers brilliant insights into history's most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all. Yudkowsky and Soares's memorable storytelling about past disaster precedents (e.g., the inventor of two environmental nightmares: tetra-ethyl-lead gasoline and Freon) highlights why top thinkers so often don't see the catastrophes they create."-- George Church, Founding Core Faculty & Lead, Synthetic Biology, Wyss Institute at Harvard University
"The most important book of the decade. This captivating page-turner, from two of today's clearest thinkers, reveals that the competition to build smarter-than-human machines isn't an arms race but a suicide race, fueled by wishful thinking."-- Max Tegmark, author of Life 3.0: Being Human in the Age of AI
"A compelling introduction to the world's most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike."-- Scott Alexander, founder, Astral Codex Ten
"Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong."-- Huw Price, Bertrand Russell Professor Emeritus, Trinity College, Cambridge
"Everyone should read this book. There's a 70% chance that you--yes, you reading this right now--will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance."-- Daniel Kokotajlo, AI Futures Project