Big Tech shouldn’t be writing the rules for AI

To prevent powerful technologies, such as artificial intelligence, from being shaped solely by corporate profit incentives, democracies must establish the institutions needed to oversee them, writes Gabriela Ramos


The ongoing dispute between Anthropic and US president Donald Trump’s administration reveals something deeply troubling about the current state of AI governance. 

Apparently, a private company is more concerned about ethical guardrails than the world’s most powerful military.

Earlier this month, the US defence department designated Anthropic a “supply-chain risk”. 

The unusual move followed the company’s insistence on safeguards preventing its technology from being used for mass surveillance of Americans or in fully autonomous weapons. 

In response, the Pentagon placed Anthropic on a list typically reserved for foreign entities considered national security threats. Anthropic has since filed a lawsuit challenging the designation.

Whatever one thinks of Anthropic’s motives, this episode underscores how misaligned governance frameworks have become.

 When the responsibility for insisting on basic ethical limits falls to private companies, the systems meant to protect the public interest from potentially dangerous technologies have clearly failed.

Ethical AI deployment

Encouragingly, February’s AI Impact Summit in India showed that it is not too late to change course. 

Around the world, start-ups are developing systems designed explicitly for safe and ethical deployment, and civil society organisations are using AI to tackle pressing social challenges, including violence against women and girls. 

At the same time, the costs of AI applications have dropped by as much as 90% in recent years, while the growth of open-source ecosystems has made powerful tools accessible to smaller actors.

This is the AI revolution many of us have long hoped for, with technological progress guided by democratic values and respect for human rights. 

The same vision has informed my work on UNESCO’s Recommendation on the Ethics of AI — the first global framework of its kind — and on the OECD’s AI Principles.

India’s experience offers a useful model for countries seeking to harness AI in ways that serve the public interest. 

By investing heavily in digital public infrastructure — most notably the Aadhaar biometric identity system and the Unified Payments Interface — the country has shown how technology can be deployed at scale to meet citizens’ everyday needs.

AI governance

But the Anthropic dispute highlights a growing tension between sound AI governance and governments’ desire to attract investment. 

The business models of the handful of American companies that currently dominate the AI frontier are shaped by intense competition, both among themselves and with their Chinese counterparts, and policymakers are reluctant to impose rules that might drive them away.

That dynamic was evident during last year’s AI Action Summit in Paris, where media coverage focused on the investment commitments France had secured from Big Tech rather than on public-interest initiatives like Current AI or the Coalition for Sustainable Development Through Sport.

As a result, these summits increasingly serve as platforms for governments to announce investments and data-centre deals. Tellingly, the defining image of India’s AI Impact Summit was Prime Minister Narendra Modi surrounded by tech CEOs, including Alphabet’s Sundar Pichai, OpenAI’s Sam Altman, and Anthropic’s Dario Amodei.

The original purpose of these gatherings was to foster multilateral co-operation on governing transformative technologies. Their transformation into investment-promotion platforms illustrates how difficult it has become to sustain meaningful oversight.

Policymakers have experimented with multiple approaches, from voluntary principles to binding legislation like the European Union’s AI Act. Yet geopolitical competition and commercial pressures continue to push governments into a race to the bottom.


To be sure, not every country needs to confront Big Tech firms on the global stage. But governments must put their own houses in order by setting clear rules and building the capacity to enforce them.

Public procurement offers one powerful lever, accounting for roughly 13% of GDP in OECD countries. 

Procurement contracts can require data localisation and algorithmic transparency and can establish effective mechanisms for challenging harmful algorithmic decisions. They can also mandate safety testing of high-risk systems before deployment, while rewarding companies that meet ethical standards and excluding those that do not.

But procurement alone is not enough; legislation must follow. One of the most consequential steps governments could take is to ensure that AI systems are never granted legal personhood, so that responsibility always rests with a human being or institution. 

They should also establish firm prohibitions on data extraction without consent, mass surveillance, and the use of AI for profiling and political manipulation.

Not every country can build its own foundational AI models, nor should they try. A more practical path is to invest in smaller, open-source models tailored to local languages, needs, and values. 

While such a strategy still requires investments, institutions, infrastructure, and appropriate incentives — the four “I’s” — it has the potential to deliver results at scale.

Technology is not above the law

Europe’s AI Act represents the most ambitious attempt so far to apply this approach. Critics dismiss it as bureaucratic and cumbersome, and there is growing pressure on the European Commission to delay its implementation. 

But the law simply reaffirms a basic principle: technology is not above the law. Pharmaceutical companies must meet safety standards before releasing new medicines, and construction firms must certify the structural safety of the bridges they build.

High-risk AI systems should be subject to the same scrutiny.

The pace of AI development underscores the urgency of this task. Countries that fail to build these foundations will not merely fall behind in today’s technological race; in a world where power increasingly determines outcomes and accountability becomes optional, they risk losing control over how new technologies are used.

The good news is that governments and consumers still have leverage. Access to markets gives countries real influence over how AI products are deployed, and civil society organisations have repeatedly shown that co-ordinated public pressure can change corporate behaviour.

Democratic societies cannot outsource the defence of their values to private companies. They must build the institutions, laws, and capacities that make such reliance unnecessary before the cost of inaction becomes too high.

  • Gabriela Ramos, co-chair of the Task Force on Inequalities and Social-Related Financial Disclosures, is a former assistant director-general for social and human sciences at UNESCO, where she oversaw the development of the Recommendation on the Ethics of AI, and a former OECD chief of staff and sherpa to the G20, G7, and APEC.

Copyright: Project Syndicate, 2026.
