The Choice We Face – The Matrix or Star Wars?
On a crisp autumn evening in Seoul, I watched a room full of entrepreneurs, technologists, and academics lean forward in their chairs as a presenter posed a question that has haunted our species since the first human picked up the first tool: Will the thing we’ve made serve us, or will we end up serving it?
The question, of course, was about artificial intelligence. But the answer, I would learn over two days of conversations in that conference hall overlooking Seoul, depends entirely on which movie we think we’re living in.
We are caught between two competing visions of our immediate future. Dr. Winslow Sargeant raised this dichotomy in Jinju City, three hours away from Seoul, and I believe it is the philosophical question of our age—our generation’s “to be or not to be.” But this time, the question is: The Matrix or Star Wars? Which one will it be?
In one vision, we are sleepwalking into the Matrix—that dystopian realm where artificial intelligence doesn’t merely assist human decision-making but supplants it, where algorithms don’t augment human judgment but replace it, where the boundary between tool and master dissolves until we can no longer tell who is optimizing whom. Remember how that story unfolded: humans, desperate to stop the machines, scorched the sky, believing they could cut off their solar power source. But the AI adapted. It turned human bodies into batteries, reducing humanity to nothing more than an energy source for the very intelligence it had created. I hope—desperately—we do not get there. In this vision, AI becomes the architect of our choices, the curator of our thoughts, the invisible hand that moves us like pieces on a board we can no longer see.
In the other vision, we are entering the world of Star Wars—not the sterile, all-controlling Empire, but the scrappy Rebellion, where droids like R2-D2 remain unmistakably tools, however sophisticated, serving human courage and creativity. Here, artificial intelligence amplifies what makes us human rather than replacing it. The technology is powerful, even magical, but it remains subservient to human purpose, human values, human dreams.
The difference between these two futures is not merely academic. It is the difference between a world where AI helps a doctor operate on a heart and one where it decides who receives the operation. Between a world where algorithms surface opportunities for entrepreneurs and one where they determine which businesses are allowed to exist. Between a world where technology enhances human potential and one where it defines it.
Seoul, I discovered, has chosen its movie. Or rather, it is trying to.
The city has become an unlikely laboratory for what its advocates call “human-centered entrepreneurship”—a philosophy we began developing with Dr. Kichan Kim in 2015, and one he has led ever since. The idea sounds almost quaint in an age of move-fast-and-break-things techno-optimism, but it may prove to be the most radical of our time. The premise is simple: technology, including artificial intelligence, must be designed not for efficiency alone, not for profit alone, not for scale alone, but for human progress.
What does this mean in practice? In the conversations I witnessed, it meant asking different questions. Not “What can AI do?” but “What should AI do?” Not “How fast can we scale?” but “Who benefits from this scale?” Not “How do we optimize?” but “What are we optimizing for?”
These seem like philosophical niceties, the kind of bureaucratic thinking that slows innovation and advantages those willing to move without such rules. But the entrepreneurs and researchers in Seoul were making a different argument: that asking these questions early doesn’t slow innovation—it directs it. That human-centered design is not a constraint on artificial intelligence but a compass for it—an artificial Human Dashboard for values, emotions, and happiness.
I spoke with a Korean entrepreneur who had built an AI platform for small business owners—not to replace their judgment, but to handle the tedious work that kept them from exercising it. “The AI does the accounting,” she explained. “The human does the dreaming.” It was a minor distinction, perhaps, but it revealed a fundamentally different philosophy about where value resides.
Another presenter described AI systems designed to amplify human connection rather than replace it—algorithms that helped older adults maintain relationships rather than substituting digital companionship for human presence. The technology was sophisticated, but its purpose was almost radically simple: to make it easier for people to be human with each other.
There is something else happening in Seoul, something that runs deeper than technology. A resurgence of interest in reading—and re-reading—the country’s leading philosophers, figures like Jo-Shik, whose ideas about human nature and social harmony had been somewhat eclipsed by the rush toward modernization.
The origins of this movement trace back to 2023, when Dr. Ayman El Tarabishy, President and CEO of ICSB, published “The Origin of Korean Entrepreneurship.” In that essay, he made a provocative argument: that Korea’s economic miracle was not merely the result of hard work and smart policy, but was built upon a moral foundation—three principles in particular: inhwa, harmony; jeong, compassion; and gong-ik, public good.
These are not business concepts. They are philosophical ones. And yet El Tarabishy argued—and the entrepreneurs I met in Seoul seemed to believe—that they are precisely what made Korea’s development sustainable rather than extractive, generative rather than destructive.
This resurgence is profoundly significant. In a world drowning in TikTok videos and viral content, in an age when “engagement” is measured in seconds and attention is the most fought-over commodity, people are turning to philosophers. They are looking for principles, not platforms. For wisdom, not just information. For meaning, not just metrics.
I noticed something small but telling: I saw fewer business cards exchanged than in years past—that formal ritual of corporate Korea seemed to be fading. But bowing when greeting someone? That was flourishing. People were letting go of the performative gestures of global business culture while holding fast to something older, something that spoke to respect and human dignity.
This is not nostalgia. It is something more radical: a rejection of the idea that the future must abandon the past, that innovation requires us to forget everything that came before, that progress means severing our connection to ancient wisdom about what makes a good life, a good society, a good business.
I am a fan of the concept of Creative Destruction—that necessary process by which the old makes way for the new. But what I witnessed in Seoul, I see as more than that. I see it as Creative Reconstruction of Humanity.
Just recently, reading the Wall Street Journal, I saw signs of hope beyond Seoul’s borders. I was struck when Amazon’s chief executive, Andy Jassy, announced the company would cut 14,000 employees. His explanation? Not money. Not even AI. The layoffs were about “Organizational Culture Alignment.”
“The layoff announcement this week was not really financially driven, and it’s not even really AI-driven, not right now. It’s culture,” Jassy said in response to an analyst question on the company’s earnings call Thursday. This, mind you, from a company whose quarterly sales grew 13% year-over-year to $180 billion.
Think about what that means. One of the world’s most powerful technology companies, at a moment when AI could easily justify replacing humans with algorithms, chose instead to make decisions based on organizational culture—on the less quantifiable question of what kind of company it wants to be. Not on what the spreadsheet demands. Not on what the AI recommends. On culture.
But the question haunts me: Will this continue, or are these one-off moments before the inevitable tide of pure optimization sweeps them away?
I am a big fan of Amazon—not because of the super-fast deliveries, but because Jeff Bezos built it brick by brick, or perhaps pixel by pixel. He understands both sides: the real and the digital, the physical warehouse and the cloud server, the human customer and the algorithmic recommendation. This dual vision, I believe, is what enabled Amazon to become what it is.
A small signal, perhaps. Or a sign that even in the heart of technological capitalism, some leaders are beginning to ask different questions.
This may sound obvious, but it represents a profound departure from the technological determinism that has governed much of Silicon Valley’s thinking for the past two decades. In that worldview, progress is a river that flows in one direction, and our job is to ride it as fast as we can. Resistance is futile; hesitation is failure; ethics is a luxury we’ll address later, after the disruption.
But I’ve come to see our reality differently. We’re not in a calm river flowing in one direction. We’re in what I call Permanent White Water—constant rapids, unrelenting turbulence, perpetual disruption. In white water, speed alone doesn’t save you. You need a paddle. You need to steer. You need to make choices about which currents to ride and which rocks to avoid. You need, above all, intention.
Seoul offered a different model: one in which progress has a direction but not a destination, where the speed of innovation matters less than its direction, where the question is not whether we can build something but whether we should.
The irony, of course, is that this human-centered approach may prove more sustainable—and more profitable—than its alternatives. Companies built on extractive models eventually run out of things to extract. Platforms designed to addict rather than serve eventually exhaust their users. AI systems optimized for engagement rather than well-being eventually hollow out the communities they depend on.
But human-centered entrepreneurship, at its best, creates a different kind of economy: one where value is generated rather than extracted, where technology enhances human capacity rather than replacing it, where innovation serves human flourishing rather than demanding its sacrifice.
As I left Seoul, I found myself thinking about that initial question: Will AI serve humanity or control it? The answer, I realized, is that it will do whatever we design it to do. The technology itself is neutral. What matters is the worldview of the people building it.
And that worldview, in turn, depends on which story we tell ourselves about where we’re headed. If we believe we’re entering the Matrix—if we accept as inevitable a future where artificial intelligence makes us increasingly peripheral to our own lives—then that’s the future we’ll build. But if we see ourselves in Star Wars, as protagonists in a story where technology enhances rather than replaces human agency, then that future becomes possible too.
The choice is ours. In Seoul, at least, they’ve chosen to believe in hope, in collaboration, in building a future that puts humans first. Whether that vision spreads or remains a local anomaly will depend on choices made in conference rooms and coding sessions far beyond Korea’s borders.
The machines are getting smarter. The question is whether we will be wise enough to ensure they serve our humanity rather than diminish it. In Seoul, I saw people trying to answer that question with more than words. They were building the answer, line by line, decision by decision, choice by choice.
The future, as always, belongs to those who show up to build it.
It was sunny the day I left Seoul. It was a very good day.
