How will tech titans like Elon Musk push Donald Trump on AI?
In 2020, when Joe Biden won the White House, generative AI still looked like a pointless toy, not a world-changing new technology. The first major AI image generator, DALL-E, wouldn’t be released until January 2021, and it certainly wouldn’t be putting any artists out of business, as it still had trouble generating basic images. The release of ChatGPT, which took AI mainstream overnight, was still more than two years away. And the AI-based Google search results that are now, like it or not, unavoidable would have seemed unimaginable.
In the world of AI, four years is a lifetime. That’s one of the things that makes AI policy and regulation so difficult. The gears of policy tend to grind slowly. And every four to eight years, they grind in reverse, when a new administration comes to power with different priorities.
That works tolerably well for, say, our food and drug regulation, or other areas where change is slow and bipartisan consensus on policy more or less exists. But when regulating a technology that is basically too young to go to kindergarten, policymakers face a tough challenge. And that’s all the more the case when we experience a sharp change in who those policymakers are, as the US will after Donald Trump’s victory in Tuesday’s presidential election.
This week, I reached out to people to ask: What will AI policy look like under a Trump administration? Their guesses were all over the place, but the overall picture is this: Unlike on so many other issues, Washington has not yet fully polarized on the question of AI.
Trump’s supporters include members of the accelerationist tech right, led by the venture capitalist Marc Andreessen, who are fiercely opposed to regulation of an exciting new industry.
But right by Trump’s side is Elon Musk, who supported California’s SB 1047 to regulate AI, and has been worried for a long time that AI will bring about the end of the human race (a position that is easy to dismiss as classic Musk zaniness, but is actually quite mainstream).
Trump’s first administration was chaotic and featured the rise and fall of various chiefs of staff and top advisers. Very few of the people who were close to him at the start of his time in office were still there at the bitter end. Where AI policy goes in his second term may depend on who has his ear at crucial moments.
In 2023, the Biden administration issued an executive order on AI, which, while generally modest, did mark an early government effort to take AI risk seriously. The Trump campaign platform says the executive order “hinders AI innovation and imposes radical left-wing ideas on the development of this technology” and promises to repeal it.
“There will likely be a day one repeal of the Biden executive order on AI,” Samuel Hammond, a senior economist at the Foundation for American Innovation, told me, though he added, “what replaces it is uncertain.” The AI Safety Institute created under Biden, Hammond pointed out, has “broad, bipartisan support,” though it will be Congress’s responsibility to properly authorize and fund it, something lawmakers can and should do this winter.
There are reportedly drafts circulating in Trump’s orbit of a proposed replacement executive order that would create a “Manhattan Project” for military AI and build industry-led agencies for model evaluation and security.
Beyond that, though, it’s hard to guess what will happen, because the coalition that swept Trump into office is, in fact, sharply divided on AI.
“How Trump approaches AI policy will offer a window into the tensions on the right,” Hammond said. “You have folks like Marc Andreessen who want to slam down the gas pedal, and folks like Tucker Carlson who worry technology is already moving too fast. JD Vance is a pragmatist on these issues, seeing AI and crypto as an opportunity to break Big Tech’s monopoly. Elon Musk wants to accelerate technology in general while taking the existential risks from AI seriously. They are all united against ‘woke’ AI, but their positive agenda on how to handle AI’s real-world risks is less clear.”
Trump himself hasn’t commented much on AI, but when he has — as he did in a Logan Paul interview earlier this year — he seemed familiar with both the “accelerate for defense against China” perspective and with expert fears of doom. “We have to be at the forefront,” he said. “It’s going to happen. And if it’s going to happen, we have to take the lead over China.”
As for the possibility of AI that acts independently and seizes control, he said, “You know, there are those people that say it takes over the human race. It’s really powerful stuff, AI. So let’s see how it all works out.”
In a sense that is an incredibly absurd attitude to have about the literal possibility of the end of the human race — you don’t get to see how an existential threat “works out” — but in another sense, Trump is actually taking a fairly mainstream view here.
Many AI experts think that the possibility of AI taking over the human race is a realistic one, that it could happen in the next few decades, and that we don’t yet know enough about the nature of that risk to make effective policy around it. So, implicitly, a lot of people do hold the position “it might kill us all, who knows? I guess we’ll see what happens,” and Trump, as is so often the case, is unusual mostly for just coming out and saying it.
We can’t afford polarization. Can we avoid it?
There’s been a lot of back and forth over AI, with Republicans calling equity and bias concerns “woke” nonsense, but as Hammond observed, there is also a fair bit of bipartisan consensus. No one in Congress wants to see the US fall behind militarily, or to strangle a promising new technology in its cradle. And no one wants random tech companies developing extremely dangerous weapons with no oversight.
Meta’s chief AI scientist Yann LeCun, an outspoken Trump critic, is also an outspoken critic of AI safety worries. Musk supported California’s AI regulation bill, which was bipartisan and was vetoed by a Democratic governor, and of course he also enthusiastically backed Trump for the presidency. Right now, it’s hard to place concerns about extremely powerful AI on the political spectrum.
But that is actually a good thing, and it would be catastrophic if it changed. With a fast-developing technology, Congress needs to be able to make policy flexibly and empower an agency to carry it out. Partisanship makes that next to impossible.
More than any specific item on the agenda, the best sign about a Trump administration’s AI policy would be if it continues to be bipartisan and focused on the things that all Americans, Democratic or Republican, agree on, like that we don’t want to all die at the hands of superintelligent AI. And the worst sign would be if the complex policy questions that AI poses got rounded off to a general “regulation is bad” or “the military is good” view, which misses the specifics.
Hammond, for his part, was optimistic that the administration is taking AI appropriately seriously. “They’re thinking about the right object-level issues, such as the national security implications of AGI being a few years away,” he said. Whether that will get them to the right policies remains to be seen — but it would have been highly uncertain in a Harris administration, too.