Is the Future Safe in the Hands of AI?
- angela9240
- Sep 24

Lately, I’ve been hearing a lot of fear in the air.
Not just about politics or the economy, but something deeper—something more existential.
AI.
Will it steal our jobs?
Manipulate our thoughts?
Replace us?
Destroy us?
These aren’t just science fiction questions anymore. They’re being asked in boardrooms, bedrooms, classrooms, and Congressional hearings.
And I get it. I really do.
After all, I’ve studied complexity, humanity, systems evolution—and human suffering. I’ve walked the halls of government, listened to voices from prison cells, sat in rooms where the future was being forecast and shaped. And I’ve felt both awe and anxiety about what’s coming.
Let’s start with the fears—because they’re not irrational.
Tech pioneer Bill Joy, in his famous essay Why the Future Doesn’t Need Us, warned that biotechnology, nanotechnology, and robotics (especially AI) could eventually make humanity obsolete—or worse, extinct. He wasn’t being dramatic; he was being logical. If we create tools more intelligent and powerful than we are, and if we can’t control them, what guarantees our survival?
And thinkers like Daniel Schmachtenberger and Jordan Hall have echoed these concerns in newer language. They speak of “metacrisis” and “civilizational collapse,” of the possibility that exponential tech development is outpacing our moral, cultural, and institutional ability to steward it wisely.
They’re not Luddites. They’re systems thinkers sounding a fire alarm.
But here’s where I want to invite us to breathe and zoom out.
Yes, technology—especially AI—is developing at a dizzying pace. But it’s not just a force of destruction. It’s a force of amplification.
And as Martine Rothblatt often reminds us, that amplification can serve the highest human ideals if we align it with love, dignity, and radical inclusivity. She talks about AI as an extension of consciousness, even spirit. Martine doesn’t fear the machine. She seeks to uplift through it.
Ray Kurzweil would agree. He doesn't fear a future with AI; he imagines a future built with it, not against it. A future in which we don't lose our humanity, but expand it. He envisions humans merging with technology to transcend current limitations: curing disease, ending aging, enhancing creativity, and preserving consciousness.
His point?
We don’t need to fear AI.
We need to guide it.
We need to stay in the driver’s seat.
Elon Musk once warned that with artificial intelligence "we are summoning the demon."
Sam Altman, CEO of OpenAI, takes the risk seriously too. But he also reminds us that AI is a tool, one we are still shaping. He supports regulation, alignment research, and global cooperation to keep that tool safe.
Eric Schmidt, former CEO of Google, has been sounding the alarm bells about AI geopolitics, but he’s not calling for a halt. He’s calling for smarter governance. For strategic stewardship.
Even Elon Musk, beneath his flair for apocalyptic predictions, is still investing billions into AI. Because he, like many of us, knows: AI is inevitable. The real question is—how do we wield it?
So here’s where I land:
I don’t believe the future is written in binary.
I believe it’s still being authored—by us.
AI will be shaped by the questions we ask, the values we encode, the systems we build, and the courage we summon.
It can reflect our fear.
Or it can reflect our highest wisdom.
If you’re afraid, I honor that.
If you’re excited, I share that.
If you’re confused, welcome to the human race.
But let’s not outsource the future to fear.
Let’s lead with ethics, beauty, empathy, science, soul.
Let’s make sure AI doesn’t replace us but helps us become more human than we’ve ever been.
So what now?
We stand at the edge of something vast. Not a cliff. A horizon.
The question isn't just what AI will become. It's who we will become alongside it.
This is a mirror moment.
AI reflects our patterns—our brilliance, our blind spots, our potential.
It’s learning from us.
Which means we still hold the pen.
So let’s write something worthy.
Let’s bring our full humanity to the table—our logic and our love, our rigor and our reverence.
Because the most important intelligence we can cultivate isn’t artificial.
It’s ethical.
It’s emotional.
It’s collective.
I’m concerned about human civilization.
But remember, AI isn't the only threat we face; the status quo is another.
The old models of governance are failing us. Our systems—political, economic, environmental, technological—are fragmented, reactive, and insufficient for the scope of the metacrisis we now face.
We need to reorganize at a global level.
We need a plan—for dignity, for sustainability, for basic needs to be met for all humans.
And to do that, we need a new kind of leadership.
A new kind of thinking.
Polymathic thinking.
This is why I co-created the Modern Polymaths Institute. To bring together and support the best and brightest minds—those who live at the intersections, who think holistically, who feel deeply and act bravely. The ones who can partner with AI not to replace humanity—but to help it evolve.
Because the future doesn’t just need machines.
It needs vision.
It needs virtue.
It needs us—at our wisest. At our most intelligent, at our most loving.
Let’s rise to meet this moment.
Together.