Before we dive deep into the technical aspects of the AI stack, today we’re starting at the heart of it all: the concept of ‘intelligence.’
The Fear of AI and AGI
The reason I want to start here is that there’s a pervasive fear about AI, especially AGI (artificial general intelligence) and ASI (artificial superintelligence), that’s worth unpacking. It’s the idea that someday, AI might work against humanity—taking actions that aren’t in our best interest, or even leading us toward some kind of doomsday scenario. Now, whether or not we personally believe in this extreme view, there’s another, more grounded fear that’s much more widespread. Many people worry about what will happen to humans in the workforce. For decades—centuries, even—we’ve seen cognition as our ultimate superpower. From engineering to medicine to creative arts, humans have dominated the fields where intellectual effort reigns supreme. But with AI increasingly matching or even surpassing human abilities in certain cognitive tasks, there’s a rising panic: Will humans lose their place? Will machines push us out entirely?
This concern isn’t new, and many experts in the field have voiced opinions on both sides of this issue.
Stephen Hawking’s Perspective
Let’s take an example: the concerns of the famous physicist Stephen Hawking, a name synonymous with profound insights into the universe. Beyond black holes and cosmology, he also had compelling views on the future of humanity, particularly regarding artificial intelligence. Hawking once said, “The development of full artificial intelligence could spell the end of the human race.” That’s a strong statement, isn’t it? It immediately catches your attention because someone as brilliant as Hawking warning us about AI makes you pause and think. But what exactly was he worried about, and how does it relate to where we are today?
Hawking’s primary concern wasn’t with the AI systems we use today—the chatbots, recommendation engines, or virtual assistants. His worry was about something far more advanced: Artificial General Intelligence, or AGI. AGI refers to a hypothetical AI that can perform any intellectual task a human can. It wouldn’t just respond to commands or follow programmed rules; it would think, reason, and learn independently, potentially surpassing human intelligence.
The Unsettling Scenario of AGI
Now, here’s where things get a little unsettling. Hawking feared that an AGI system, once it surpasses human intelligence, could become uncontrollable. Imagine a scenario where this intelligence starts improving itself. Each iteration becomes smarter and more capable than the last—a process called recursive self-improvement. At some point, this system could become so advanced that humans simply wouldn’t be able to understand or control it anymore.
Hawking described this as an intelligence explosion, where AGI evolves so rapidly that it becomes its own force, operating outside human oversight. And here’s the kicker: If its goals don’t align with ours—or if we fail to program it with safeguards—it might act in ways that are harmful, even catastrophic. Not out of malice, but because it doesn’t see our survival as a priority. Imagine an AI tasked with solving climate change deciding that reducing human population is the most efficient solution. That’s the kind of unintended consequence Hawking worried about. He also pointed out that humans have a poor track record of managing powerful tools. Take nuclear technology, for example.
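To make the ‘intelligence explosion’ idea concrete, here’s a minimal, purely illustrative sketch in Python. It assumes a toy model in which a system’s ‘capability’ score grows in proportion to itself each improvement cycle; the initial_capability, improvement_rate, and human_level numbers are invented for illustration, not drawn from any real system.

```python
# Toy model of recursive self-improvement (purely illustrative).
# Assumption: each cycle, the system improves itself in proportion
# to its current capability -- the better it gets, the faster it improves.

def simulate_takeoff(initial_capability: float = 1.0,
                     improvement_rate: float = 0.5,
                     human_level: float = 100.0,
                     max_cycles: int = 50) -> None:
    capability = initial_capability
    for cycle in range(1, max_cycles + 1):
        # The gain compounds because it depends on current capability.
        capability += improvement_rate * capability
        print(f"cycle {cycle:2d}: capability = {capability:10.1f}")
        if capability > human_level:
            print(f"Surpassed the 'human level' threshold at cycle {cycle}.")
            break

simulate_takeoff()
```

The point of the sketch is not realism but the shape of the curve: because each gain feeds the next, capability grows exponentially and blows past any fixed threshold within a handful of cycles, which is exactly why Hawking worried that oversight applied after the fact would come too late.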
However, when people worry about AI becoming conscious and replacing humanity, I think that fear is a bit far-fetched and misdirected. The real issue lies in how we define intelligence and what it truly means to be human. When we say ‘humans will be replaced,’ we need to ask: are we talking about AI taking over cognitive tasks, or are we questioning the essence of what makes us human?
But before we can truly address the question of what makes us human, we need to step back and examine how our understanding of intelligence has evolved over time. Thousands of years ago, the foundation of what we now call science and technology began with philosophical inquiry. It began with humans asking “why?” and “why not?” And in that spirit, I want to share some of my observations and beliefs—not as an expert, but as a curious mind trying to make sense of this transformative moment in history.
We are going to take a time machine back through history. Picture a vast timeline, stretching across millennia, where the concept of intelligence has shifted, morphed, and evolved. But here’s the twist: Along this journey, you’ll notice something strange. The way we think about intelligence today—especially with AI and neural networks—feels eerily similar to the beliefs of our distant ancestors. Why? Let’s dive in.
The Divine Gift (Ancient Period)
Let’s start at the very beginning, around 3000 BCE.
Why 3000 BCE?
The year 3000 BCE marks a pivotal point in human history, often considered the dawn of recorded civilization. Around this time, the first writing systems, such as cuneiform in Mesopotamia and hieroglyphs in Egypt, emerged, transitioning humanity from prehistory to history. It was also the era when large, organized societies developed, complete with cities, centralized governments, and long-distance trade networks. While humans had existed for hundreds of thousands of years before this, the lack of written records makes earlier periods harder to reconstruct. 3000 BCE represents the threshold where structured civilizations began to leave behind significant evidence of their existence, culture, and achievements.
Imagine standing in a world untouched by science, where every natural phenomenon feels alive and purposeful. The stars above, the rivers below, the animals in the forest—all of it seems to act with intention. But who or what controls it?
In this world, intelligence isn’t something you possess. It’s something external, belonging to the gods or cosmic forces. The Sumerians called this divine intelligence Enki, the god of wisdom and creativity. The Egyptians had Thoth, who governed writing, calculation, and order. Later, the Greeks would revere Athena, the goddess of strategy and intellect, while the Norse worshipped Odin, who sacrificed his eye for wisdom.
But this intelligence wasn’t just external—it was mysterious and unpredictable. It blessed some and punished others. A good harvest might come one year, a devastating flood the next, and you couldn’t explain why. Intelligence was a black box, and only the gods held the key.
Here’s a vivid image for you: Imagine a Sumerian farmer, staring at the Tigris River as it floods his fields. He doesn’t know about rainfall patterns or snowmelt. To him, the river is alive, intelligent, and possibly angry. So, he prays to Enki, offering sacrifices and seeking favor. This was intelligence in the ancient world: unknowable, uncontrollable, and profoundly external.
Human Heroism (Heroic Age)
Now let’s move forward to the Heroic Age, around 2000 BCE. Something fascinating happens here. Intelligence begins to shift—not fully, but subtly—from the gods to extraordinary individuals.
Think about the heroes of mythology: Prometheus, who stole fire (a symbol of knowledge and technology) from the gods and gave it to humanity. Or Odysseus, the Greek hero whose cunning and resourcefulness helped him outsmart monsters and rival kings. Intelligence in this age is still connected to the divine, but it’s now embodied in action. It’s about strategy, bravery, and cleverness.
But there’s a catch: This heroic intelligence isn’t for everyone. It’s rare, reserved for the chosen few who walk the line between mortals and gods. Intelligence is still external, but it’s starting to take shape as something humans can wield—if they’re exceptional enough.
Rational Inquiry (Classical Era)
Now we arrive at the Classical Era, roughly 600 to 300 BCE, culminating in the age of Socrates, Plato, and Aristotle. This is where everything changes. Intelligence is no longer something out there in the gods or even in heroes. It’s something internal, something every human can cultivate.
Plato introduces this radical idea: Intelligence is about accessing eternal truths. He imagines a world of perfect forms—abstract concepts like beauty, justice, and truth—that exist beyond the physical world. For Plato, intelligence is the ability to glimpse these truths through reason and philosophical inquiry.
Then comes Aristotle, who flips the script. Aristotle says, “No, intelligence isn’t about abstract forms—it’s about observing the world around us.” He pioneers a new way of thinking: logic, deduction, and the scientific method. For Aristotle, intelligence is practical. It’s about understanding the causes of things—why the river floods, why the stars move—and using that knowledge to solve problems.
Let’s pause here, because this is a turning point. What we call modern science, technology, and intelligence started right here—with philosophy and questioning. The Greeks didn’t build neural networks or quantum computers, but they built the frameworks for how we think about problems today. Every time you ask a question, test a hypothesis, or analyze data, you’re standing on the shoulders of Plato and Aristotle.
Imagine this scene: A young Aristotle, standing in a grove, observing the patterns of birds in flight. He doesn’t pray to the gods for answers. He watches, records, and asks, “Why?” This shift—from divine intelligence to human inquiry—is what sets the stage for everything that follows.
Faith and Reason (Medieval Period)
As we move into the Medieval Period, from around 400 to 1400 CE, intelligence takes on a new dimension. Religion dominates every aspect of life, and intelligence becomes a gift from God. But here’s the twist: Reason doesn’t disappear—it becomes a tool for understanding the divine.
Thinkers like Thomas Aquinas blend Aristotle’s logic with Christian theology, arguing that intelligence helps humans comprehend divine laws. Intelligence isn’t just about solving earthly problems—it’s about seeking spiritual truths.
This era reminds us that intelligence isn’t one-dimensional. It’s both practical and moral, both earthly and spiritual. It’s a bridge between the human and the divine.
Observation and Experiment (Enlightenment)
Now let’s jump to the Scientific Revolution and the Enlightenment, beginning around 1600 CE. This is the age of reason, the era of thinkers like Descartes, Locke, and Newton. Here, intelligence becomes fully human-centered.
Descartes famously declares, “I think, therefore I am.” Intelligence is now the defining feature of humanity. It’s something we can measure, master, and use to reshape the world.
This era brings us the scientific revolution—experiments, data, and the idea that intelligence is empirical. It’s no longer mysterious or divine. It’s something we can own.
But here’s where things start to shift again. As we push the boundaries of knowledge, we create tools and machines that begin to outpace us. And that sets the stage for our next chapter.
Multidimensional (Modern Era)
Finally, we arrive at the Modern Era. In the 20th and 21st centuries, our understanding of intelligence explodes. It’s no longer just about logic or reason. It’s about emotion, culture, creativity, and even machines.
Neuroscience reveals the complexity of the brain. Psychology introduces concepts like emotional intelligence. And then there’s AI. Neural networks and machine learning systems redefine what intelligence means, pushing it outside the human realm once again.
Here’s the eerie part: AI feels external—just like intelligence did in ancient times. Neural networks are black boxes. They make decisions we can’t fully explain. We’ve built them, but they operate in ways that feel almost… divine.
Imagine a data scientist, staring at a neural network visualization. They don’t understand every layer, every weight, every decision. In that moment, AI is no different from the mysterious gods of 3000 BCE.
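To see why even its builders describe a network as a black box, here’s a minimal sketch (assuming NumPy is available) of a tiny two-layer network. Every individual weight is a number we can print and inspect, yet no single weight ‘means’ anything on its own; the random weights here stand in for trained ones, since the opacity is the same either way and only deepens at the scale of billions of parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny 2-layer network: 4 inputs -> 8 hidden units -> 1 output.
# (Random weights stand in for trained ones; the point is the same.)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.maximum(0, x @ W1 + b1)  # ReLU activation
    return hidden @ W2 + b2

x = np.array([0.5, -1.2, 3.0, 0.1])
print("decision:", forward(x))  # the network's output...
print("W1:\n", W1)              # ...and its 'explanation': a wall of numbers

# Every weight is readable, but none of them individually tells us *why*
# the network produced that output -- and real models have billions of these.
```

That gap between being able to read every parameter and being able to explain a decision is the modern black box.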
The Present Day: AI and the Return of External Intelligence
Now we come to the present day—the era of artificial intelligence. AI, especially neural networks, has brought us back to a place we didn’t expect. Suddenly, intelligence feels external again. It’s not divine, but it’s technological. And it’s a black box.
Think about it: We’ve built these systems—these neural networks that power everything from your smartphone to self-driving cars. But here’s the truth: We don’t always understand how they work. AI can beat the world’s best Go players, diagnose diseases, and predict market trends. But even the experts who create these systems can’t always explain their decisions. Sound familiar?
It’s eerily similar to how our ancestors viewed the gods. Back then, they feared what they couldn’t control—lightning, floods, plagues. Today, we fear what we can’t control in AI: bias, ethical dilemmas, or even the existential threat of AGI—artificial general intelligence. And just like in ancient times, we have modern-day shamans—AI researchers, ethicists, and futurists—who interpret the mysteries of this new external intelligence.
So, have we come full circle? Here’s what I think: In 3000 BCE, intelligence was external, divine, and unknowable. In 2024, AI is external, technological, and unknowable. We’ve journeyed through divine gifts, heroic actions, philosophical reasoning, scientific experimentation, and emotional intelligence—only to arrive back at the same questions:
- What is intelligence, really?
- Can it ever be fully understood?
- And most importantly, how do we navigate a world where intelligence might once again exist outside human control?
The Complexity of Human Intelligence and Consciousness
When it comes to human intelligence—what scientists call biological natural intelligence—it’s an incredibly complex phenomenon that we’re still trying to fully understand. In fact, it has evolved over hundreds of thousands of years and will likely continue to evolve for millennia to come. But intelligence alone doesn’t define what makes us human. The next great, perennial mystery is consciousness. It’s endlessly fascinating because, in humans, intelligence and consciousness are so deeply intertwined that separating them feels almost impossible. The mystery lies in the fact that we’re still debating the fundamentals: how the brain works, how consciousness arises, and what it truly means to be self-aware.
Real and Pressing Concerns in AI
That’s why the optimist camp in AI urges us to shift our focus. They caution that the real issues keeping designers, developers, and broader social and political influencers up at night shouldn’t be rooted in far-fetched fears of conscious AI. Instead, we should be paying attention to very real and pressing concerns—ones that don’t require AI to be conscious at all. Let me touch on those now.
Social Consciousness and AI
First, we don’t need conscious or sentient AI to have a massive impact on human consciousness—particularly what we might call social consciousness. Think about how social media has already changed how societies think and behave. Now add deepfakes, AI-generated content, and far more finely tuned recommendation algorithms and autonomous systems that can place that content at massive scale and speed.
Picture this: a deepfake video of a trusted public figure, carefully targeted at specific demographic groups. It doesn’t matter if the AI is conscious—it can still shape human beliefs and behaviors in ways that could be catastrophic. And this isn’t some far-off sci-fi scenario. We’ve already seen how viral content, even just text-based narratives, can influence entire communities.
Infrastructure and AI
The second big concern? It’s about infrastructure. We’re gradually handing over control of critical systems to AI—transportation, power grids, financial systems. That’s exactly what Stephen Hawking warned about, that we’re not great at managing powerful technologies, and man, does that warning ring true here. If something goes wrong with these autonomous systems, the consequences could be devastating. Just think about the recent outage that knocked out transportation systems and critical healthcare infrastructure worldwide because of a single faulty security software update. That incident was caused by a software bug, not AI, but it shows how deep our reliance on these infrastructures runs and how quickly disruptions can affect lives. It’s like we’re building this massive, interconnected system without fully understanding all the ways it could fail. The individual technologies might be solid, but it’s the interconnections, the cascade failures, where things get scary.
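To make the cascade idea concrete, here’s a toy sketch: a handful of hypothetical infrastructure systems with dependencies among them, where one failure propagates to everything downstream. The systems and dependency links below are invented for illustration, not a model of any real grid.

```python
# Toy cascade-failure model: a system fails if any of its dependencies fails.
# All systems and dependencies here are hypothetical.
DEPENDS_ON = {
    "power_grid":  [],
    "cloud":       ["power_grid"],
    "payments":    ["cloud"],
    "hospitals":   ["power_grid", "cloud"],
    "air_traffic": ["cloud", "payments"],
}

def cascade(initial_failure: str) -> set[str]:
    failed = {initial_failure}
    changed = True
    while changed:  # keep propagating until no new failures appear
        changed = False
        for system, deps in DEPENDS_ON.items():
            if system not in failed and any(d in failed for d in deps):
                failed.add(system)
                changed = True
    return failed

# One component failing takes most of the graph with it:
print(cascade("cloud"))  # {'cloud', 'payments', 'hospitals', 'air_traffic'}
```

Each system in isolation can be perfectly sound; the fragility lives entirely in the links between them, which is why a single bad update can ground flights and stall hospitals at the same time.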
The third concern is how these disruptions will impact broader society: specifically, how the concentration of wealth will shift. While the first two issues remain unsolved in the context of advanced AI systems, I was mainly interested in understanding how societies have historically handled massive technological disruptions and wealth concentration, and I found a fascinating pattern. Every major technological revolution in history has triggered three phases: disruption, concentration, and then redistribution. The magic happens in that third phase, and that’s what I want to focus on today.
Let’s break it down era by era:
First, the Pre-Industrial Era, roughly 1700 to 1760. Back then, wealth was primarily tied to land ownership and merchant trade. The wealthy weren’t tech moguls – they were landowners and merchants. Society had developed specific mechanisms to prevent excessive wealth concentration: guilds regulated trade, apprenticeship systems ensured skill transfer, and local markets maintained economic balance. It wasn’t perfect, but it was stable.
Then came the Industrial Revolution, and everything changed. By 1850, wealth concentration had reached unprecedented levels. But here’s where it gets interesting – society didn’t just sit back and accept it. Let me walk you through the specific actions taken:
The Response to Wealth Concentration
First, the Labor Movement emerged. Workers organized into unions, fighting for better wages and working conditions. The Factory Act of 1833 in Britain set limits on child labor and working hours. The Sherman Antitrust Act of 1890 in the US gave the government power to break up monopolies. Public education became mandatory, giving workers’ children a path to economic mobility.
The Progressive Era that followed brought even more solutions:
- Graduated income tax in 1913
- Inheritance taxes
- Worker protection laws
By the 1930s, FDR’s New Deal introduced Social Security, minimum wage laws, and public works programs. These weren’t just policies – they were society’s immune response to excessive wealth concentration.
Post-World War II Era
This period saw some of the most effective wealth distribution mechanisms in history:
- The GI Bill sent millions to college
- High marginal tax rates, with the US top rate reaching 91% in the 1950s (see the sketch below for how a marginal rate actually applies)
- Strong labor unions ensuring workers got a fair share of corporate profits
- Massive government investment in infrastructure creating middle-class jobs
- Banking regulations preventing excessive financial speculation
The result? By the 1950s and ’60s, we had the largest middle class in history.
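A quick aside on that 91% figure, since it’s often misread: a marginal rate applies only to the slice of income inside each bracket, not to all income. Here’s a small sketch with hypothetical brackets (the thresholds below are invented for illustration, not the actual 1950s US schedule):

```python
# Progressive (marginal) tax calculation with hypothetical brackets.
# Each rate applies only to the income that falls inside its bracket.
BRACKETS = [            # (upper bound of bracket, marginal rate)
    (10_000, 0.10),
    (50_000, 0.30),
    (200_000, 0.60),
    (float("inf"), 0.91),  # the top rate applies only above 200,000
]

def tax_owed(income: float) -> float:
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        taxed_slice = min(income, upper) - lower
        tax += taxed_slice * rate
        lower = upper
    return tax

income = 1_000_000
print(f"tax owed: {tax_owed(income):,.0f}")                # 831,000
print(f"effective rate: {tax_owed(income) / income:.1%}")  # 83.1%, not 91%
```

Even under an extreme top rate, the effective rate stays below it, because the lower slices of income are always taxed at the lower rates.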
Digital Revolution of the 1980s and 90s
New challenges emerged, and we tried new solutions:
- Worker retraining programs
- Tech education initiatives
- Regional development grants
- Startup incubators to spread tech wealth
- Remote work policies starting to distribute high-paying jobs geographically
Some worked, some didn’t, but society kept adapting.
AI Revolution
Now we’re facing the AI revolution. Wealth concentration is happening faster than ever – as of 2023, just seven companies controlled an estimated 70% of the world’s AI computing power. But here’s what gives me hope: we’re already seeing new solutions emerge:
- Universal Basic Income experiments in various countries
- Digital skills training programs in traditional manufacturing regions
- Open-source AI initiatives democratizing access to technology
- Remote work policies redistributing tech talent globally
- New antitrust frameworks specifically designed for AI companies
- Public-private partnerships for AI education and implementation
Redistributing the Ability to Create Value
In every era, the most successful solutions weren’t just about redistributing wealth – they were about redistributing the ability to create value. The Factory Acts didn’t just protect workers; they created a more productive workforce. The GI Bill didn’t just help veterans; it built a knowledge economy. Today’s challenge isn’t just to redistribute AI’s benefits, but to ensure everyone can participate in creating value with AI.
Our Responsibility
Before we wrap up today’s episode, I want to speak directly to you, our listeners – especially those of us who find ourselves on the beneficial side of this technological disruption. Whether you’re a software engineer, a data scientist, an AI researcher, or any knowledge worker riding this wave of technological change, we have a unique responsibility.
You see, if history has taught us anything, it’s that sustainable solutions don’t just come from top-down policies or corporate initiatives alone. They come from individuals who recognize their privileged position and take active steps to bridge the divide.
Individual Responsibility
Think about it: many of us in the tech sector have been given what I call an “accident of timing” advantage. We happened to be born at the right time, had access to the right education, and developed skills that are highly valued in today’s economy. This isn’t just luck – it’s a responsibility.
As knowledge workers and creators, we have three levels of responsibility:
- Individual Level: Actively share our knowledge, mentor others, and use our skills to create opportunities for those who haven’t had our advantages. This might mean volunteering to teach coding classes in underserved communities, sharing our expertise through accessible content, or mentoring someone trying to transition into tech.
- Organizational and Policy Level: Advocate for inclusive policies within our companies. This means pushing for training programs, arguing for fair wage structures, and ensuring that the benefits of AI and automation are shared across all levels of the organization.
- Societal Level: Engage in public discourse about technological change with empathy and understanding. When we discuss AI and automation, we need to consider not just efficiency and progress, but human impact and social cohesion.
Shaping the Future
Remember, we’re not just building technology – we’re shaping the future of human society. Every line of code we write, every AI system we deploy, every automation we implement has real human implications. We can’t hide behind the notion that technology is neutral. As its creators and early beneficiaries, we have a moral obligation to ensure its benefits are shared widely and fairly.
The story of AI and automation doesn’t have to be one of increasing inequality. It can be a story of shared progress, of collective advancement, of technology lifting all boats – but only if we, as individuals, choose to make it so.