You know that feeling when you're scrolling through your feed and you realize you have no idea what’s actually real anymore? That's basically the starting line for Yuval Noah Harari’s latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI. Harari, the guy who wrote Sapiens, isn't just looking at tech. He’s looking at the "glue" that holds us together.
Most of us grew up thinking that more information is always a good thing. Give people the facts, and they’ll make the right choices. Right? Well, Harari says that’s a "naive" view. It’s actually kinda dangerous.
He argues that the primary job of information throughout history hasn't been to tell the truth. Its job has been to connect people. And honestly, you don't need the truth to connect people. You can do that much more effectively with a good lie, a myth, or a mass delusion.
The Information Trap: From Witch Hunts to Algorithms
Think back to the early modern witch hunts in Europe. Thousands of people were tortured and killed. Why? Because a network of bureaucrats, priests, and "experts" produced a massive amount of information—manuals like the Malleus Maleficarum—that "proved" witches were real.
The information was technically "organized" and "detailed," but it was 100% false. It created order, sure, but it didn't create truth.
This is the central warning in Nexus. Harari isn't just worried about AI becoming "conscious" and killing us like in a Terminator movie. He's worried about AI being a "non-human agent" that can create its own ideas and manipulate our information networks better than any human ever could.
What Makes AI Different?
Earlier technologies like the printing press or the radio were just "tools." A book doesn't decide to write itself. A radio doesn't choose which songs to play. AI is different. It's the first technology in history that can make decisions and generate new ideas on its own.
It’s an "alien intelligence" that we’ve invited into our homes and our governments.
Why Democracy Is at Risk
Harari makes a pretty compelling point about why democracies are more vulnerable to this than dictatorships. Democracy is basically a "conversation." It relies on people talking to each other, debating, and—crucially—having a shared reality.
When AI-driven social media algorithms prioritize "engagement" over truth, they end up amplifying the most outrageous, divisive content because that’s what keeps us clicking.
- The Myanmar Example: Harari points to the tragic ethnic cleansing of the Rohingya in Myanmar. Facebook’s algorithms, seeking only to maximize user engagement, ended up promoting hateful anti-Muslim propaganda. The "network" was working perfectly for its goal (engagement), but the result was real-world catastrophe.
- The Death of Conversation: If we can’t agree on basic facts because our feeds are feeding us different realities, democracy literally stops working. You can't have a conversation with someone who lives in a different universe.
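The engagement-over-truth dynamic Harari describes can be sketched as a toy ranking function. To be clear, this is an illustration of the incentive structure, not any real platform's algorithm; the post names and scores are made up:

```python
# Toy sketch of engagement-based feed ranking (illustrative only;
# not any real platform's code). Each post carries an "outrage"
# score and a "truth" score, but the ranker only ever looks at
# predicted engagement -- which here tracks outrage, not truth.

posts = [
    {"id": "calm_factcheck", "outrage": 0.1, "truth": 0.9},
    {"id": "nuanced_report", "outrage": 0.3, "truth": 0.8},
    {"id": "viral_rumor",    "outrage": 0.9, "truth": 0.2},
]

def predicted_engagement(post):
    # Notice what's missing: the objective has no term for truth.
    # Only a proxy for clicks and shares.
    return post["outrage"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["id"] for p in feed])
# The false-but-outrageous post lands at the top of the feed.
```

The point of the sketch is that nothing in the objective is malicious. The system is "working perfectly" for the goal it was given; truth simply isn't part of that goal.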
Totalitarianism 2.0
While democracies are struggling, Harari warns that AI could be a dream come true for dictators. In the past, totalitarian regimes like Stalin’s USSR or Nazi Germany were limited by how much information they could actually process. Humans had to read the reports and watch the people.
Now? AI can monitor everyone, everywhere, all the time.
In Nexus, Harari mentions how Iran is already using AI-enabled facial recognition to identify and punish women for not wearing the hijab. It's automated oppression. No need for a secret police officer on every corner when you have a camera and an algorithm.
The Problem of "Explainability"
Here’s the really spooky part: we often don't even know how AI makes its decisions. Harari talks about the "black box" problem. If an AI denies you a loan or tells a judge you’re likely to re-offend, it might be basing that on thousands of tiny patterns that a human brain can't even perceive.
We’re outsourcing our most important decisions to a system that we don't fully understand and that doesn't share our values.
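Harari's "black box" point can be made concrete with a toy scorer. This is a hypothetical illustration (not any real credit or risk model): a decision built from thousands of tiny weighted signals, where no single weight explains the outcome:

```python
# Toy "black box" decision (illustrative only). The verdict emerges
# from 5,000 micro-weights interacting with 5,000 input features --
# there is no single factor a human could point to as "the reason".
import random

random.seed(0)  # fixed seed so the sketch is reproducible
weights = [random.uniform(-1, 1) for _ in range(5000)]

def score(features):
    # The decision is a sum over thousands of tiny contributions.
    return sum(w * f for w, f in zip(weights, features))

applicant = [random.uniform(0, 1) for _ in range(5000)]
decision = "approve" if score(applicant) > 0 else "deny"
print(decision)
# Asking "why?" has no short answer: the explanation IS the
# full list of 5,000 weighted patterns.
```

A real model is worse still, because its patterns are learned from data rather than written down anywhere, which is exactly why the loan applicant or the defendant in Harari's examples can't get a meaningful explanation.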
Is There a Way Out?
It’s not all doom and gloom, though. Harari insists that technology isn't deterministic. We still have choices. But we have to stop being lazy and start building self-correcting mechanisms.
Science works because it has peer review. It’s designed to find and fix its own mistakes. Our information networks—especially AI—need the same thing.
Actionable Steps for the AI Age
If you want to survive the "Nexus" of information and AI, here’s what you can actually do:
- Demand Human Accountability: Never let a "machine" have the final say on a life-altering decision without a human in the loop who is legally responsible for the outcome.
- Ban "Fake Humans": We should make it illegal for AI bots to impersonate real people. If you're talking to a computer, you should know it's a computer.
- Support Decentralized Networks: Put your energy (and money) into platforms and institutions that prioritize truth and self-correction over raw engagement or centralized control.
- Slow Down: The rush to integrate AI into everything—from our schools to our nuclear silos—is a recipe for disaster. We need "regulatory sandboxes" where we can test these things before they go global.
The ultimate takeaway from Nexus is that we are currently summoning a power we don't control. We've spent millennia learning how to handle human power, but we have almost no experience handling non-human intelligence. It's time to start paying attention to the "glue" before the whole thing comes apart.