Who Will Govern AI? Its Consequences Will Shape Global Order
Humans have always pretended we can resist new inventions, from the printing press to electricity to computers, only to discover that the world shifts regardless. AI is different only in degree, not in pattern. It moves faster than our debates, scales faster than our regulations, and integrates faster than our instincts. The question is no longer whether AI will matter. It is whether we will matter in deciding how it is used.
The other day my grandson sent me a video of himself soaring through the sky like Superman, cape fluttering, skyscrapers shrinking below him, banking smoothly between clouds as though he had been born to defy gravity. I was fascinated; it was delightful. It was also mildly terrifying. For a fleeting moment I wondered whether the boy had discovered a secret family superpower that had somehow skipped my generation.
Before I could call to inquire about aviation training, my daughter intervened, laughing. “Appa, it’s AI-generated.” And just like that, reality adjusted itself.
What struck me was not the technology alone, but the ease of it. A few prompts, a few clicks, and an ordinary afternoon had transformed into cinematic fantasy. No studio. No stunt double. No wires. Just imagination, rendered instantly. The line between what happened and what could have happened had quietly dissolved, and it did so not in a research lab, but in my grandson’s bedroom.
That was the moment I realised: AI is no longer a distant laboratory experiment or a corporate tool. It is play. It is entertainment. It is everyday magic. And like all magic, it delights and unsettles in equal measure.
Real Fun in the Lessons
I resisted for a long time; after all, I’ve been around the tech scene since 1982. I remember my first encounter with Hollerith cards and the TDC 316 in the 1980s, a mammoth and quirky system that promised to “change everything” but mostly tested my patience. Even then, I was captivated: the idea that a machine could think, even a little, and act, even clumsily, was irresistible. Having watched computers, networks, and AI evolve from those rudimentary beginnings, I thought I could stay on the sidelines and just observe this AI chap supposedly taking over the world. But the pull was too strong. I had to dive in.
And so here I am, roping Claude into my stock trading, not because I expect to get rich overnight, but to see how this over-caffeinated, virtual lab assistant actually thinks. It can crunch numbers faster than I can blink, rank options like a Wall Street oracle on Red Bull, and politely remind me that my “sure-win” idea is actually a slow-motion trainwreck. It’s like having a tiny, tireless MBA in a box, minus the ego, the coffee breath, and the bad PowerPoint slides. Just as with the Hollerith machines decades ago, the thrill is in the experiment, not the outcome.
The real fun is in the lessons. Watching Claude wrestle with real market chaos is a mix of education and comedy: it hesitates, surprises, occasionally goes off the rails, and I end up laughing at my own human stubbornness. Every trade suggestion is a mini cliffhanger: will I follow, or will I veto? Diving in after all these years isn’t just curiosity; it is living the experiment. And while I’m keeping this a playful tryst for now (AI moves too fast, thinks too fast, acts even faster for my laid-back “sip tea and watch the world” attitude), I’m convinced that humankind will soon be utterly incapable of surviving without an AI sidekick or, for those who prefer drama, an agentic AI overlord with impeccable taste in spreadsheets, whispering: “Don’t do that, human, try this instead.”
Who Sets the Limits?
Meanwhile, at the global scale, the tone shifts. In the United States, Anthropic, an AI research firm that openly declares it “puts safety at the frontier”, is reportedly facing a jam as it moves toward a contract with the United States Department of Defense. The friction is not incidental; it is philosophical. Anthropic does not wish its AI systems to be used for autonomous weapons or for mass surveillance of American citizens. The Defence establishment, understandably, is reluctant to accept operational constraints imposed by a private corporation when national security is at stake. Meanwhile, Elon Musk has reportedly offered xAI’s systems without similar caveats. This is not a corporate squabble. It is a structural moment.
For the first time in history, private entities are building systems capable of analysing battlefields, modelling escalation, filtering intelligence streams, allocating logistics, and potentially authorising responses, all at machine speed. Governments, whose legitimacy rests on sovereignty and security, cannot afford technological hesitation. Corporations, whose legitimacy rests on market trust and ethical positioning, cannot afford moral abdication. Their incentives intersect, but they are not identical.
AI does not merely extend the arm of power; it accelerates the mind of power. Agentic systems, large language models wired into defence, financial, and civic infrastructure, can already synthesise data, weigh probabilities, generate operational options, and execute tasks in seconds that once required layered bureaucracies. If authorised, they can move from recommendation to action: approving, transferring, deploying, responding. The step from advisory tool to operational actor is not science fiction; it is a policy decision.
The question, therefore, is not whether AI will influence defence structures. It already does. The real question is: who sets the limits? The engineer who writes the alignment code? The corporation that owns the model? The elected government that commands the military? Or the blunt logic of strategic competition, where hesitation is punished and restraint is interpreted as weakness?
Will Humans Matter?
For my grandson, it is fun. For my stock portfolio, it is a lesson. For nations, it is unmistakably the gun.
The same system that lets a child fly through digital clouds can optimise missile trajectories, prioritise targets, simulate escalation ladders, or manage autonomous swarms. The technology is indifferent. It does not possess conscience. It possesses capability. And that is why the “gun” cannot be reduced to a metaphor.
The coming decade may not be defined by artificial intelligence itself, but by the contest over who governs its deployment. The struggle will not be dramatic in appearance; it will be contractual, regulatory, and technical, fought in boardrooms, policy papers, classified briefings, and code repositories. Yet its consequences will shape the hinge points of global order.
(The author is an Indian Army veteran and a contemporary affairs commentator. The views are personal. He can be reached at kl.viswanathan@gmail.com )