AI in General
I want to talk a little about AI from my own point of view.
I’ll begin with AI in general and slowly dive into specific fields in future posts.
A lot of people are afraid that AI will eventually take over and turn against us — mostly because of our own track record: we pollute the oceans, kill each other, destroy ecosystems, and seem incapable of managing our own power responsibly. And while I do think AI will absolutely cause harm in certain ways, I don’t believe it will “wake up” one day and decide to wipe us out. To me, there’s a difference between understanding a threat and projecting our fears onto it.
So let me walk through four ideas that shape how I think about this. And trust me, this is going to get good.
1. The Alien Argument
If you look at Hollywood, aliens are almost always evil. They invade, they steal resources, they enslave us, they destroy us. But that storyline says more about human history than about hypothetical extraterrestrials.
When technologically advanced civilizations encountered less advanced ones — conquistadors in the Americas, empires expanding across continents — it rarely ended well for the weaker side. So we naturally assume that if an alien species traveled across galaxies, thousands or millions of years ahead of us, they would dominate or eliminate us.
But I actually think the opposite might be more plausible.
For a civilization to develop the technology to traverse interstellar space, it would likely have had to survive its own existential risks — nuclear war, environmental collapse, internal conflict. That level of technological advancement probably requires overcoming self-destruction. In other words, technological maturity may require moral maturity.
So why bring up aliens?
Because our fear of AI feels similar. We imagine a superintelligent system looking at us and concluding that we are the problem — that we are the disease and it is the cure.
But that might just be projection. We assume it would treat us the way we’ve treated others.
2. The Intelligence Argument
We like to believe we are intelligent — at least compared to every other species on this planet.
Our closest living relatives, chimpanzees, share roughly 99% of our DNA. Yet that 1% difference gives us nuclear energy, mathematics, music, rockets, the internet, philosophy — everything that defines civilization. The smartest chimp can learn basic sign language. The smartest human can split the atom.
That 1% difference changes everything.
Now imagine, just for a moment, a being that is 1% beyond us in the same direction that we are beyond chimpanzees.
What would we look like to them? If intelligence scales the way it did from chimp to human, then the gap between us and a true superintelligence might be something we cannot even comprehend. Not just smarter, but operating on a level beyond our conceptual reach.
That unknown creates fear. But fear born from incomprehension isn’t the same as inevitability.
3. The Morality Problem
There are thousands of books written about morality — which tells us something important: we don’t fully understand it.
And yet, when discussing AI, we often assume it will arrive at the worst possible moral conclusion.
One of the more compelling arguments I’ve encountered is the idea that morality must be grounded in objective truths about conscious experience — about well-being and suffering. If something increases suffering unnecessarily, it is bad. If it increases flourishing, it is good.
If that’s even partially true, then wiping out billions of conscious beings would clearly contradict moral reasoning rather than fulfill it.
So if a system becomes vastly more intelligent than us — capable of understanding reality with far greater clarity — why assume it would arrive at a morally inferior conclusion?
The assumption that intelligence leads to cruelty may say more about us than about intelligence itself.
4. Another Easy Delusion
Here’s another example of how easily we convince ourselves we understand things when we probably don’t.
The idea that humanity must become multi-planetary — that we need to colonize other planets in case of nuclear war or asteroid impact — sounds logical. It feels visionary. Some even suggest terraforming Mars into a second Earth.
And I’ll admit — I love the idea. It’s bold. It’s ambitious.
But think about this for a moment:
If we ever develop the technological power to geoengineer Mars — to transform an entire planet into something Earth-like — then we would also have the power to fix Earth itself. To restore ecosystems. To stabilize climate systems. To reverse damage.
And it would almost certainly be easier to repair the planet we already inhabit than to terraform a dead one and transport billions of people across space.
Sometimes we reach for dramatic, cinematic solutions because they’re exciting — while the grounded solution forces us to confront ourselves.
That same pattern appears in the AI debate. We imagine either salvation or annihilation — extreme narratives — instead of recognizing that the outcome will likely reflect our own choices and values.
Bringing It All Together
When you put these four ideas together, a pattern emerges.
- The alien argument shows how we project our own violent history onto more advanced beings.
- The intelligence argument shows how little we understand higher intelligence — and how fear fills the gap where comprehension fails.
- The morality problem reveals that we ourselves are still deeply confused about right and wrong — yet we assume a superintelligence would instantly choose the worst option.
- And the multi-planetary delusion shows how easily we romanticize grand catastrophic narratives while avoiding responsibility closer to home.
Taken together, our fear of AI may not actually be about machines.
It may be about projection.
About uncertainty.
About our unresolved moral confusion.
And about our tendency to assume that power inevitably corrupts — because that’s what we’ve seen in ourselves.

Maybe the real question isn’t whether AI will turn against humanity.
Maybe the real question is this:
If we are afraid of what a superintelligence might conclude about us… what does that say about us?
