The Precipice
Existential Risk and the Future of Humanity
What's it about
What if the greatest threats to humanity aren't the ones you see on the news? This book summary reveals the hidden existential risks—from rogue AI to engineered pandemics—that could end our story forever, and shows how we can safeguard our future for generations to come. Discover Toby Ord's groundbreaking analysis of the dangerous period he calls "the Precipice" and learn the concrete actions we can take right now. You'll gain a powerful new perspective on our civilization's true vulnerabilities and understand your role in the most important mission of our time: ensuring humanity's survival.
Meet the author
Toby Ord is a senior research fellow at Oxford University's Future of Humanity Institute, which studies the big-picture questions facing humanity. A philosopher by training, Ord was led by his research into ethics and global priorities to focus on existential risks—threats that could permanently destroy our future. That journey from abstract ethical questions to the tangible dangers we face culminated in The Precipice, a crucial guide to navigating the most important challenge of our time.

The Script
We tend to think of history as a vast, continuous ocean, stretching back thousands of years. From this perspective, our own lives are a fleeting drop, and a single human generation seems almost statistically insignificant. This view is comforting; it implies a certain resilience, a guarantee that humanity will just keep going, carried forward by the sheer momentum of the past. But what if this is a profound misunderstanding of our place in time? What if history is instead a single, fragile chain? Each generation forges its own link, and for millennia, the worst dangers we faced could only break a few links at a time—wars, plagues, and natural disasters might devastate a population, but the chain itself remained intact. We are now the first generation with the power to break the entire chain, forever.
This startling realization is what drove philosopher Toby Ord to dedicate his career to a new, urgent field of study. As a researcher at Oxford University's Future of Humanity Institute, Ord began to calculate the odds of ultimate catastrophe. He saw a dangerous mismatch: our technological power was accelerating exponentially, while our wisdom and foresight were not. To him, this was not a distant hypothetical but a present-day moral emergency. He wrote The Precipice to sound an alarm, arguing that safeguarding humanity's future is the central challenge of our time, and that we, the people alive right now, are standing on the edge of a decision that will echo for all of time, or into silence.
Module 1: The View from the Precipice
We often think of history as a long, continuous line. But Ord suggests we are at a unique inflection point. He calls it "The Precipice." It's a brief, dangerous period where our technological power has outpaced our wisdom. For the first time, human-caused risks are greater than all natural risks combined.
A key insight here is that humanity's long-term potential is almost incomprehensibly vast. If we survive this dangerous era, our descendants could flourish for hundreds of millions of years. They could end disease. They could settle the stars. They could achieve levels of well-being we can't even imagine. Ord isn't being poetic here. He uses the Earth's remaining habitable lifespan—roughly a billion years—as a baseline. The potential number of future lives dwarfs the number of all humans who have ever lived. An existential catastrophe, therefore, is the squandering of a near-infinite future.
So, what makes this moment so dangerous? The author points to a widening gap. Our power is accelerating exponentially. Our wisdom and coordination, however, are growing only linearly. This creates a period of profound vulnerability. Consider the Cuban Missile Crisis. A Soviet submarine captain, Valentin Savitsky, was ready to launch a nuclear torpedo. Only the refusal of a fellow officer, Vasili Arkhipov, prevented a potential nuclear war. We survived because of one person's judgment under immense pressure, not because our systems were robust. We got lucky. And here's the thing: we cannot rely on luck to navigate the 21st century.
Ord argues that this requires a new ethical perspective. He calls it longtermism. This is the idea that we have a profound moral duty to protect the long-term future. It suggests our most important role may be to act as guardians for future generations. This is about recognizing that preventing an existential catastrophe is a uniquely high-leverage way to help the largest number of people over the longest period of time. This perspective reframes our priorities. It asks us to weigh the immediate benefits of a new technology against the long-term risks it might create.
We've covered the stakes. Next up: how do we even begin to measure these risks?