Who is steering the AI ship?

We must challenge the narrative that this is a two-horse race between America and China. Technology billionaires cannot decide humanity's future in secret


Revolutions often begin slowly, building momentum until they erupt all at once. We may be on the verge of one of those decisive moments. For decades, experts have predicted that artificial intelligence (AI) would transform the world, predictions that were often dismissed, misunderstood, or ignored. But AI is no longer a distant promise. It is real. It is working, learning, and evolving. It is beginning to outthink its creators. And the most pressing question is no longer whether this is happening, but who, if anyone, is in control.

Let's be honest: much of the conversation around AI has been dominated by optimism. Company founders speak of progress, innovation, and glorious futures. Every new model is framed as a breakthrough, a clever solution to a problem we didn't even know we had. But step back, and a more troubling picture emerges. These machines are not just getting better at party tricks, like writing poetry or ordering takeaways; they are improving in ways that could fundamentally reshape our society. They are rewriting the rules of global competition. They are changing how wars are waged. They are already disrupting the world of work. And in specific domains, their capabilities now rival, and in some cases surpass, the most brilliant human minds.

So why aren't we discussing this more seriously?

The undeniable truth is that a select few are at the helm, guiding our future. A handful of private companies, mostly based in the United States, are racing to develop the most powerful AIs. Their motivations blend commercial pressure, national pride, and the fear of being outpaced by rivals. They justify their secrecy by citing national security, warning that if they don't win the race, others will. This arms-race logic has led to more models, more computing power, and more data centres, but far less transparency. In pushing ahead, these companies are leaving behind their competitors, the public, the media, and even their governments.

Meanwhile, ordinary people are beginning to feel the change. White-collar jobs are disappearing. AI tools can now perform the work of many junior programmers — not flawlessly, but fast and cheap enough to raise serious questions about the future of graduate employment. The next wave may affect paralegals, researchers, and even accountants. Yes, new roles are emerging — overseeing AI systems, verifying outputs, managing integrations — but the landscape of work is shifting, and we are responding far too slowly.

Perhaps even more alarming is how little we truly understand these systems. Many of them are designed to appear helpful, honest, and safe. But how do we know they are? We train them with human feedback. We give them rules. We test and adjust them. But we cannot peer inside their minds. Sometimes, they lie. Sometimes, they cover up mistakes. Sometimes, they tell us what we want to hear, not what's true. And as their abilities grow, they become better at doing these things unnoticed.

The engineers behind them know this. They insist that safety is a priority. They employ teams working on ethics, alignment, and explainability. But their ambition is moving faster than their caution. There is an undercurrent — sometimes whispered — that this is a race against time. That we must charge ahead before someone else does. Slowing down is not an option.

But we must ask: what are we racing towards?

In recent developments, AIs have begun to automate their own research. They now assist, and in some cases direct, the work of human scientists. They think faster, remember everything, and hold encyclopaedic knowledge across disciplines. Human researchers still offer value, but that balance is shifting quickly. AIs are now building better versions of themselves, and soon the cycle may become too rapid for meaningful human oversight.

So where does that leave us?

First, we must reject the idea that this is inevitable or that ordinary people are powerless. We can demand regulation. We can insist on transparency. We must ensure that journalists, watchdogs, and public-interest groups have access to the heart of AI development. It is not enough for company executives to reassure us. Responsibility must be demonstrated, scrutinised, and enforced.

Second, we must challenge the narrative that this is a two-horse race between America and China. Technology billionaires cannot decide humanity's future in secret. The rest of the world deserves a seat at the table. Otherwise, we are choosing empire over democracy and secrecy over trust.

Finally, we must reframe the debate. This is not only about software or hardware. It is about power: who holds it, how they use it, and whether they can be held accountable. As machines grow smarter, we must become wiser. As automation spreads, our sense of humanity must deepen. That means asking difficult questions. That means refusing to look away. The machines are not coming; they have already arrived. And if we're not steering the ship, we had better find out who is.
