The global struggle over how to regulate AI

In March 2024, a Brazilian senator traveled on a commercial flight to Washington, D.C. Marcos Pontes, a 61-year-old former astronaut, had become a central figure in the effort to regulate artificial intelligence in Brazil — where a draft bill had proposed serious restrictions on the developing technology. Confident, loquacious, and a former minister of science, technology and innovation, Pontes felt he was uniquely qualified among his colleagues to understand the complicated issues surrounding AI.

Written by Katie McQue, Laís Martins, Ananya Bhattacharya, and Carien du Plessis.
Pontes had worked with NASA, attended the Naval Postgraduate School in California, and was less skeptical than many other senators about the major U.S. companies that dominate the AI race. “We cannot restrict technology,” he’d said at one of the AI bill’s early hearings, expressing caution about legislating AI tools that are still developing. Joining him in D.C. was Laércio Oliveira, a fellow conservative member of the Senate committee drafting the AI bill. The two were part of a delegation organized by a Brazilian congressional initiative that engages with the private sector. Its purpose: to have a series of meetings about the drafted AI bill with representatives of the U.S. government and Silicon Valley companies.
The drafted bill was one of the most comprehensive to date outside the West. It proposed a new oversight authority on AI, copyright protections for content used to train AI, and protections of individual rights, with anti-discriminatory checks in biometric systems and the right to contest AI decisions with significant human impact. It banned autonomous weapons and tools that could facilitate the creation and distribution of child sexual abuse material, and put stricter oversight on social media algorithms that can amplify disinformation. Global advocates for AI regulation saw Brazil as a potential model for other countries. But Pontes believed the bill could stifle investment and innovation — and saw it, he later told Rest of World, as “based on fear.”
Pontes didn’t name the Americans he met, but social media posts showed the delegation visiting members of the U.S. government, think-tank staffers, and executives from three major AI companies: Amazon, Google, and Microsoft. Pontes said the bill was a focus of the discussions. “We asked them to analyze our legislation,” he said, “and give us some feedback, tell us what they think.”
Three months later, on the day when the bill was expected to be put to a vote, Pontes submitted 12 amendments, and Oliveira another 20 — helping to trigger a delay. Pontes then opened a series of hearings on the bill, saying it needed more public debate. Regulation advocates claimed that Big Tech representatives were allowed undue time and influence on the discussions that followed. Critiques of the bill came in from domestic industry groups, with one warning that it would lead the country into “technological isolation.” A weakened version of the bill ultimately passed the Senate this past December.
Brazil’s AI bill is one window into a global effort to define the role that artificial intelligence will play in democratic societies. Large Silicon Valley companies involved in AI software — including Google, Microsoft, Meta, Amazon Web Services, and OpenAI — have mounted pushback to proposals for comprehensive AI regulation in the EU, Canada, and California.
Hany Farid, former dean of the UC Berkeley School of Information and a prominent regulation advocate who often testifies at government hearings on the tech sector, told Rest of World that lobbying by big U.S. companies over AI in Western nations has been intense. “They are trying to kill every [piece of] legislation or write it in their favor,” he said. “It’s fierce.”
Meanwhile, outside the West, where AI regulations are often more nascent, these same companies have received a red-carpet welcome from many politicians eager for investment. As Aakrit Vaish, an adviser to the Indian government’s AI initiative, told Rest of World: “Regulation is actually not even a conversation.”
A first wave of regulation
“New technology often brings new challenges, and it’s on companies to make sure we build and deploy products responsibly,” Meta CEO Mark Zuckerberg told the U.S. Senate in 2023, making the case for self-regulation around AI. “We’re able to build safeguards into these systems.”
“Self-regulation is important,” OpenAI CEO Sam Altman said during a visit to New Delhi that same year, as hype over ChatGPT was building globally. Though he also cautioned that “the world should not be left entirely in the hands of the companies either, given what we think is the power of this technology.”
AI’s proselytizers say it will revolutionize industries, supercharge scientific research, and make many aspects of life and work more efficient. On the other side of the debate, politicians, tech experts, representatives of industries affected by AI, and civil society advocates argue forcefully for far stricter regulations. They want early implementation of rules around copyright, data protections, and fair labor practices, as well as around uses that affect public safety, such as generative deepfakes, the creation of chemical and biological weapons, and cyberattacks.
The most ambitious AI law passed to date, the EU’s 2024 AI Act, offers a template for more restrictive regulation. It bans the use of AI for social scoring purposes, imposes restrictions on the use of AI in criminal profiling, and requires labels on content generated by AI — a move aimed at enhancing transparency and fighting disinformation. It also creates a range of special requirements for developers of AI systems that are categorized as posing a high risk to health, safety, or fundamental rights.
The U.S. has no comprehensive AI bill under serious consideration in Congress. At the state level, California has been the most forward-leaning on AI regulation, with its governor recently signing 17 AI bills into law. Their provisions range from protections of the digital likenesses of performers to prohibitions on election-related deepfakes. Canada, meanwhile, is attempting to take a similar approach to the EU, with its governing party proposing to standardize the design, development, and use of AI through the Artificial Intelligence and Data Act (AIDA), which has yet to pass.
Regulatory efforts in the U.S. and the EU have triggered pushback from tech companies as lobbying activity over AI has spiked. In Canada, executives from Amazon and Microsoft have publicly condemned the legislative effort, calling it vague and burdensome. Meta has suggested it might withhold the launch of certain AI products in the country.
A landmark California bill that would have required safety measures on AI software to guard against rogue applications was vetoed by the governor last year after a campaign by venture capitalists and AI developers that included buying ads and writing newspaper op-eds. OpenAI’s chief strategy officer wrote a public letter warning the bill could “slow the pace of innovation” and lead entrepreneurs to “leave the state in search of greater opportunity elsewhere.”
Tech giants also lobbied aggressively against the EU bill. OpenAI was reportedly successful in arguing to reduce the law’s regulatory burden on the company, and Altman has threatened that OpenAI could leave Europe if he deems regulations too restrictive. Zuckerberg co-authored an op-ed in August describing the EU’s regulatory approach as “complex and incoherent,” warning it could derail both a “once-in-a-generation opportunity” for innovation and the chance to capitalize on the “economic-growth opportunities” of AI.
“We have been clear that we support effective, risk-based regulatory frameworks and guardrails for AI,” an Amazon spokesperson told Rest of World, noting the company has signed voluntary pledges with the White House and the EU for responsible AI development, and has said it welcomes “the overarching goal” of the EU’s AI Act. OpenAI and Google did not respond to multiple requests for comment for this article. A Microsoft spokesperson shared a company publication calling for “balanced efforts to develop laws and regulations” to encourage public trust and AI adoption, as well as “interoperability and consistency” of AI rules across countries. A Meta spokesperson sent a previous company statement on the EU’s AI Act: “It is our priority to ensure that AI is developed and deployed responsibly — with transparency, safety, and accountability at the forefront. We welcome harmonised EU rules. … We also shouldn’t lose sight of AI’s huge potential to foster European innovation and enable competition.”
In many nations outside the West, policy discussions are still developing. Chile is one of the few countries attempting to enact a comprehensive AI law. Its landmark draft bill imitates components of the EU’s risk-focused approach and vows to boost the development of AI while safeguarding democratic principles and human rights. South Korea’s AI Basic Act, passed in December, promotes AI’s role in economic growth while also echoing some of the EU’s ethics, safety, and transparency guardrails.
Other governments have set out policy frameworks that prioritize commercial interests over stringent regulation. Taiwan’s AI Basic Act, for example, states that future legal interpretations concerning AI regulation “should not hinder the development of new AI technologies or the provision of services embedded with AI technology,” according to an analysis by the IAPP, a nonprofit that has tracked AI laws and regulations worldwide. Japan has touted its light approach to AI regulation, prioritizing attracting business and investment. Singapore, another global economic powerhouse striving to become an AI hub, has yet to enact any AI policies, but the government has indicated a preference for targeted, sector-specific rules rather than a more sweeping approach.
“Innovation will happen. Regulation will follow.”
In some non-Western countries, the priority isn’t regulation at all, but rather courting large AI companies to make massive investments.
India, for example, is home to the world’s largest internet base outside of China and has an extensive history of regulating Big Tech companies. Prime Minister Narendra Modi’s administration has pursued Apple, Google, Amazon, and Meta over anti-competitive practices, and, in a 2021 Digital Media Ethics Code, laid out several controversial mandates for social media platforms, such as adding traceability to encrypted messages and honoring government takedown requests. With AI, though, the administration has signaled a different attitude, positioning India as a magnet for Silicon Valley funds.
“This is the era of AI, and the future of the world is linked with it,” Modi declared in the fall of 2024, embracing a Big Tech state of mind. When he joined Silicon Valley CEOs at a roundtable in September, he urged them to “co-develop, co-design, and co-produce in India for the world,” according to a government press release. The government has devoted $1.2 billion to an initiative called IndiaAI, designed to build out the country’s AI capabilities.
Industry sources involved in policy discussions with the government told Rest of World the current climate for Big Tech on AI in India is warm and regulation-free. Large U.S. firms are making their influence felt by channeling AI investment into domestic companies and government projects while deploying AI products around the country. OpenAI has promised to support the IndiaAI initiative by heavily investing in the developer community. Meta has vowed to partner with it to “empower the next generation of innovators … ultimately propelling India to be at the forefront of global AI advancement.” Amazon has earmarked millions of dollars to back Indian AI startups, billions more to grow its data center footprint, and forged a multiyear AI collaboration with the government-run Indian Institute of Technology (IIT) Bombay. Microsoft recently committed $3 billion to AI training, and cloud and AI infrastructure. Google is “robustly investing in AI in India,” its CEO has said, “and we look forward to doing more.”
Aakrit Vaish, the adviser to the IndiaAI initiative, told Rest of World regulation isn’t seriously being discussed at the moment in government and industry circles. “A lot of the conversation just in the building is about building AI for India, and everybody wants to be involved in that.”
India’s own AI industry relies on Silicon Valley companies, noted Sangeeta Gupta, who leads strategy at Nasscom, the country’s top IT lobby. Most homegrown AI startups “are building AI models on top of [U.S.-made] platforms rather than building something that is totally, totally ground-up,” Gupta told Rest of World. One of India’s biggest AI startups, Sarvam, for example, builds on Meta’s set of large language models, Llama, and has partnered with Microsoft’s Azure. Google, Amazon, and Microsoft are pouring millions into young Indian AI companies like Sarvam.
“Some of these Big Tech players do influence government policy. And it’s not just here in India. It’s almost across the globe,” Salman Waris, a lawyer focused on regulation and AI who advises the Indian government’s IT ministry, told Rest of World. “Just because they have access, and their sheer size … and the fact that when they try and make an investment, it’s a big advantage to the government.”
The government so far has been open to “a soft-touch approach to AI,” Waris added, but a clearer sense of its mindset should come with a draft of the Digital India Bill expected this year. There has been dissonance in the past, Waris said: the Modi government has publicly signaled a pro-business mentality, but when regulatory proposals are ultimately released, “they seem to be having a very different approach.”
India’s minister of electronics and information technology did not respond to a request for comment. He had said in December that while the government was open to regulating AI, it would take a “lot of consensus,” stressing that India should remain “at the forefront of ethical AI development.”
“Some of these Big Tech players do influence government policy. And it’s not just here in India. It’s almost across the globe.”
In South Africa, another country eager for tech investment, AI regulations are still in early discussions. Facing fundamental issues such as poverty and violent crime, South Africa’s leaders have touted AI as a potential driver of growth. In December, when South Africa assumed the presidency of the G20 forum of the world’s largest economies, President Cyril Ramaphosa named AI as a top priority. Africa has experienced a dramatic surge in internet connectivity and cellphone adoption over the last decade, fueling a digital revolution and drawing attention from global tech firms that see potential for market expansion. South Africa is the continent’s largest economy, and Meta established its first African office in Johannesburg in 2015. Google opened its first Africa-based cloud region in Johannesburg in 2024, while Amazon established its Africa headquarters in Cape Town the previous year. In May, Microsoft announced a 1.3 billion rand ($69 million) investment in South Africa focused on readying the country for an “AI transformation.”
South Africa’s Department of Communications and Digital Technologies released an AI policy framework in August, outlining proposals for ethical guidelines, privacy protections, infrastructure development, and AI talent cultivation. The framework was criticized by some experts as incomplete. Mlindi Mashologu, the deputy director-general in the department who is closely involved in the process, told Rest of World the framework has since received input from 25 organizations, including major players like Google, Microsoft, and China’s Huawei. A draft policy that incorporates these consultations will be released publicly at a future date, Mashologu said, adding that he expects proposals for stricter rules to emerge in public hearings. Regulating AI, he said, is a challenge: “We need to develop a policy while the technology is developing.”
“My fear across the African continent is that we have so many challenges that feel basic: access to power, access to health care, access to agriculture and food, but we also need to be having the conversations about AI, both because we don’t want those inequalities and lack of infrastructure to be amplified, but also because AI can help solve some of these problems,” Joy Basu, deputy assistant secretary for the U.S. State Department’s Bureau of African Affairs, told Rest of World in an interview at the Africa Tech Festival in Cape Town in November. “Even if we don’t know exactly how AI might be manifesting, we do know from history that we need to be thinking about these things. And there is a certain role of government … in which we allow for human rights to be respected and innovations of this new sector to take us where they will.”
In both India and South Africa, meanwhile, AI technology is already being rolled out — shaping the future in which more substantive regulation discussions might take place. Shweta Rajpal Kohli, a former public policy head with Uber and Salesforce in India and South Asia who now runs an interest group for domestic startups, told Rest of World the current focus for industry and government alike on AI is development and adoption. She summed up the dynamic with a longstanding Silicon Valley maxim: “Innovation will happen. Regulation will follow.”
India’s powerful state governments have been forward-leaning in adopting AI tools that feed on the massive amounts of data India stores about its citizens. India has the world’s largest biometric ID system, and authorities use granular resident data to distribute benefits and assist police surveillance. Such efforts are now intertwined with AI. The southern tech hub of Telangana, for example, has disclosed plans to use AI in health care, agriculture, manufacturing, and governance, in collaboration with Amazon, Microsoft, and Meta. “We want to unlock the full potential of AI while taking into account the anticipated risks,” Jayesh Ranjan, Telangana’s special chief secretary of commerce and IT departments, told Rest of World, adding that while “unregulated AI could lead to misuse, over-regulation could stifle innovation.”
“A lot of these endeavors with the big companies are happening at state levels … because at state level, the data is much richer,” Disha Verma, a tech-focused researcher, told Rest of World. “When you have such readily available data and no real law to set up walls around it, of course companies are going to come in.”
In South Africa, one early example of a government embrace of AI tools is playing out in Gauteng, the country’s most populous province. Premier Panyaza Lesufi won reelection in May while touting his crime-fighting through video surveillance. A domestic surveillance company called Vumacam has partnered with Lesufi’s government to run more than 6,000 cameras around the province, monitoring the footage using license plate recognition and unusual behavior detection — two AI technologies. The partnership has drawn criticism from some civil liberties advocates, who say such technology is unproven in stopping crime. And with no specific regulation on AI in the country, companies “kind of operate in the Wild West,” Emile Ormond, a South Africa-based policy analyst with a Ph.D. in AI ethics, told Rest of World.
Vumacam told Rest of World it complies fully with South Africa’s data protection and privacy laws and supports “surveillance-sector specific AI regulation.”
Big Tech weighs in
Brazil has emerged as a champion of tech regulation among emerging economies. It enacted an Internet Bill of Rights in 2014, which established protections on net neutrality, freedom of expression, and privacy. In 2023, a sweeping “Fake News Bill,” as it was popularly called, attempted to force more transparency from social media platforms and hold them accountable for misinformation. The bill passed the Senate — but later died in Brazil’s lower house of Congress after intense public pressure from Google, Meta, and the messaging app Telegram.
Brazil’s comprehensive AI bill was introduced in the Senate in May 2023. Its proponents hoped it would be as forceful as the EU’s legislation, while also providing “a model which is debated around the world,” Estela Aranha, a special adviser on AI to Brazil’s president who was closely involved in the process, told Rest of World.
Aranha, who is also a member of the U.N.’s High-Level Advisory Body on Artificial Intelligence, added that global coordination is essential when it comes to AI regulation. Countries need to be free to create their own legislation, she said, but there also needs to be a consensus on essential rules so they can be enforced effectively. “AI is a tool that surpasses borders, and you cannot have legislation that is completely disconnected from the international scenario,” Aranha said. “You need to have some convergence, at least, in order to have these global players in these very powerful oligopolies comply with the law.”
“AI is a tool that surpasses borders.”
But Pontes, the former astronaut who led pushback against the bill in the Senate, told Rest of World his initial concerns that the bill was too strict were reinforced by his March 2024 trip to the U.S.
After he returned, he put forward proposals that included weakening the system for contesting AI decisions; loosening copyright protections; and narrowing the types of AI systems that would be regulated. Oliveira, the senator who’d accompanied him to the U.S., made many of the same proposals, sometimes verbatim. (Oliveira’s office did not respond to multiple requests for comment.) Pontes defended the hearings he held on the bill as an effort to include all concerned sectors: “We cannot simply impose a regulation from the top down.” But civil society advocates saw them as biased. In a rare move, two American lobbyists were invited to address the Brazilian Senate, both hailing from Washington-based interest groups that count Amazon, Google, Meta, Microsoft, and other large tech companies as members.
Paula Guedes, a digital rights-focused lawyer, was one of the pro-regulation advocates who participated in the Senate debate on the bill, in her capacity as AI lead for the Rights in Network Coalition, a network of civil society groups and academics. She told Rest of World she believed industry groups had more ease in accessing senators and participating in hearings than civil society representatives: “Everything has already been set up for the private sector.”
In response to questions on Google’s role in discussions about the bill, a company spokesperson acknowledged the Brazil delegation’s March 2024 visit to its Washington, D.C., headquarters, adding, “We reinforce our commitment to ongoing dialogue with society, companies, and authorities to discuss central issues, such as the responsible adoption of Artificial Intelligence to promote development and the solution of relevant challenges for Brazilian society.” Microsoft and Meta did not comment. “Amazon is proud to have invested in Brazil since 2011,” a company spokesperson said. “As with all global companies, we regularly participate in discussions with policymakers, industry [and] associations, and think tanks to share knowledge and contribute to the development of solutions on topics relevant to our customers.”
The version of the bill that passed the Senate in December included compensation to copyright holders when their content is used to train AI systems; an AI oversight authority; and an autonomous weapons ban. Other regulatory measures remained but were watered down. Regulation advocates considered the removal of rules on algorithms that moderate and recommend social media content, along with a provision on disinformation, to be major losses. The bill next moves to the lower house, where the Fake News Bill was defeated. “It will demand action from the start,” Guedes said, predicting that efforts to shape the bill will continue.
As Brazil joins Chile and Canada in awaiting its AI bill’s fate, regulation efforts in many other countries are still taking shape. Dave Willner, who previously headed policy teams at OpenAI and Facebook, told Rest of World that an influential Big Tech role in shaping AI policy discussions is unavoidable due to these companies’ power and size. Companies fear that excessive or poorly designed rules will drain resources, he said, while governments can lack expertise on the AI tools they’re regulating, providing another motivation for companies to engage in policy. “It’s obviously a conflict of interest and problematic, and I don’t know how you would get around it for a newly emerging field,” Willner said. “Because it is true that the specifics matter, and how things actually work matters — and that has to be incorporated into the law.”
But Willner, until recently a non-resident fellow at Stanford University’s Program on Governance of Emerging Technologies, also believes that companies cannot be relied upon to enforce their own safety standards, which often come at the expense of innovation and revenue. “You need to raise the bar for everybody using the law,” he said, so that “nobody is disadvantaged.”
Many people in these companies, he added, including senior leaders, understand that regulation is needed: “On some level that’s earnestly believed by some folks in the space. Some are worried about where this stuff goes and what it might mean if it spirals out of control.” But then, “For very large public companies, the immediate logic of capitalism and quarterly returns kicks in.”