AI Strategy for a New American President
What can we expect for the next few years of U.S. AI policy?
Welcome to AI Pathways, a new tech blog that aims to see the future of AI first and make it more publicly legible. In 2025, AI progress is faster than ever, and yet public awareness of where we are headed significantly lags behind the private insider consensus. The possibility space narrows, and it feels like the right time to articulate a unifying view of the likely pathways for this transformative technology, looking at both the latest trends in technical AI research and the latest developments in AI policy. However, I will endeavour not to take a purely predictive viewpoint—we are active participants in this future and it is important to see ourselves as capable of shaping it with our imagination.
Two months ago, the political board in the U.S. was flipped, and many assumptions in the world of policy that felt like constants are no more. We’re in a New Year, and in this, the first piece of AI Pathways, I aim to answer the question: what can we expect from the next government’s AI policy? And what might this imply for the future development and deployment of the world’s most advanced AI systems?
My main message is this: the strategic landscape has dramatically shifted. After spending the last two months chatting to AI researchers and tech policy folk around Washington, D.C., San Francisco, New York, London, and Brussels, I'm convinced that almost everyone outside of D.C. is strongly underrating the implications of the election. Many exciting new opportunities have opened up for both AI research and tech policy, while some Biden-era policy paradigms (e.g., pre-deployment evaluations and risk assessment as a dominant form of governance for frontier AI) are on their way out.
I’ll briefly list the main takeaways here, before going into specifics. The main policy motifs I see from the new administration are:
An increased focus on China as a technological adversary and an effort to maintain American advantage in AI. This is a core motivation behind many potential policy actions, including boosting domestic energy production, onshoring more semiconductor manufacturing, tightening export controls on chips, and building out AI applications for national security.
One of the most underappreciated effects of the U.S.–China dynamic on AI policy is greater interest in using AI for national security and military applications. There are two sides to this coin:
building defense-specific AI applications, which will likely lead the government to increasingly conceive of AI as a defense technology;
securing existing frontier AI systems and IP from foreign corporate espionage or other attacks. As long as the capabilities gap with China is maintained (an open question given DeepSeek’s impressive progress), there will be strong incentives for tighter security partnerships between the U.S. government and the AI labs.
A potential conflict in the new admin is between those favoring tighter government control over the security of frontier AI model weights & IP, motivated by viewing AI through the U.S.–China competition lens, and those arguing in favor of minimal security restrictions & boosting the open-source AI sector to make adoption throughout the economy easier. Senior figures in the new admin’s tech policy, including David Sacks and Sriram Krishnan, are strong advocates for open-source, and there are good arguments in favor of their views given the likely inevitable diffusion of frontier AI capabilities over time. I strongly believe there is a viable way to unify these two perspectives—the securitization view and the open-source view—and hope to articulate a path forward in a future piece.
A key concept here is that as capabilities grow, we should increasingly expect governments to view AI as a strongly dual-use technology. Up until now, AI has been an almost entirely civilian-first technology, but government interest in it seems likely to grow along with capabilities that provide more possible applications to defense.
This leads to some guesses for Trump’s AI-relevant policy moves:
Moves to boost AI datacenter growth by unblocking permitting and by funding and incentivizing the construction of new energy supply and transmission capacity. U.S. energy demand forecasts for the next 5 years have increased rapidly, largely due to AI datacenters but also because of greater-than-expected growth in manufacturing. The electricity grid is underprepared for this, and only speedy action can prevent economic growth from being bottlenecked by power.
Scaling up of initiatives within the federal government and the defense sector to use frontier AI in national security or military applications. We see steps towards this outside government in the burgeoning Palantir–Anduril defense tech consortium, OpenAI’s partnership with Anduril, and Palantir’s partnership with Anthropic & AWS. I expect defense-specific finetunes of leading LLMs to ultimately be sold as products to the government, which strongly motivates a need to secure these model weights from cyber attacks. We may see export controls on model weights to prevent some models from being deployed outside of U.S. datacenters.
Testing and evaluation work within the federal government is likely to move to more national-security-focused agencies and teams. For example, USAISI may account for a much smaller share of this work, and could be moved out of NIST into a more relevant place like the Department of Energy (DOE).
Trump’s administration seems likely to maintain and tighten export controls on semiconductors to slow down Chinese AI efforts. Their effectiveness will depend on how well the various agencies working on export controls, including BIS in Commerce, can actually be coordinated and run efficiently. DOGE may play a part in the latter effort.
Deeper partnerships between frontier AI labs and the U.S. Intelligence Community to improve the cyber and info-security of lab development & deployment. This may involve a so-called public–private partnership (PPP) between, e.g., the DOD or DOE and one or more leading AI companies, potentially including government security experts embedded in the labs. Due to Elon’s involvement, x.ai is an obvious lab to watch closely for potential partnerships with the government.
A greater unknown is how to deal with the patchwork of currently-proposed state-level AI bills. The more insane the patchwork appears (*cough* Texas *cough*), the greater the incentive for the federal government to pre-empt states by passing some federal level AI bill that overrides them. This might be a light-touch AI bill (to avoid slowing down developers, particularly open-source) aimed at reducing uncertainty for businesses deploying AI systems and boosting economic growth. But it is currently unclear how high this will be on the priority list for Congress, especially given the slim Republican majority.
In summary: if you’re working on AI policy or on technical research which you think will be useful for policy (particularly on anything in the bucket sometimes called technical AI governance), then you should seriously consider the vibe shift and its implications. Some people like to make In/Out lists for the New Year, so my guess is that in terms of usefulness for AI policy, we’re looking at In: work on AI security, ensuring robust & reliable AI for defense applications, and forecasting the likely economic impacts of AI agents; Out: AI bias evals, pre-deployment third-party safety evals, safety cases, adversarial robustness and jailbreaking, risk assessment frameworks, and mechanistic interpretability. I will expand on these takes in a separate piece.
Background
There are already several good pieces you can read to get a sense of Trump’s existing statements on AI, as well as hints from those working with the Trump campaign. The main explicit policy action we know of is the existing commitment to repeal Biden’s AI Executive Order. This seems likely to be replaced with a Trump AI Executive Order at some point.
The Project 2025 policy agenda is also a useful indicator of interest from a group connected to the transition team, and contains a variety of AI-relevant suggestions (findable by searching “AI” or similar). However, Project 2025 should be viewed more as a list of potential directions, considering the document was drawn up many months ago.
Elon Musk—A Wildcard
Of course, much also depends on Elon Musk—how influential will he be in the new administration, and what will his preferred AI approach be? Will he maintain a good relationship with Trump for the entire term?
This is currently very uncertain: we don’t yet know what work Elon will spend most of his time on (in the short term it is clearly DOGE), and his own preferences for AI policy are unclear, although he has taken a strong interest in its development for many years.
Many in Silicon Valley AI circles hope that Elon’s deep familiarity with the AI industry and influence in the Trump administration will help shape the next government’s policy. So far, this seems likely to play out, potentially to the benefit of x.ai.
More broadly there are, as we have seen with the recent immigration discourse, strong tensions between the populist right of the Republican party and the “tech right” (encompassing Elon, Silicon Valley Republicans & libertarians, e/accs, etc.). Whether Elon maintains influence depends to a large degree on how these political tensions shake out.
The U.S. AI Safety Institute
The fate of the U.S. AI Safety Institute (USAISI)—a small team of technical experts & policy staff within NIST in the Department of Commerce—is a question on the minds of many in AI policy this month. Trump has promised to repeal the Biden AI Executive Order, part of which concerns pre-deployment safety evaluations of frontier AI systems (USAISI’s main activity). The later Biden National Security Memorandum also directs USAISI to act as the central point within the government for frontier AI work & engagement with the labs.
USAISI attracted attention from influential anti-regulation Republicans in 2024 for its collaborations with UKAISI and other non-governmental AI safety organizations. This particularly includes the November International Network of AISIs event in San Francisco, with a side event on safety frameworks (e.g., regulatory mechanisms) partly run by UKAISI. When I speak to observers in AI policy, many express concern that USAISI simply hedged insufficiently for a Trump victory, and is now trying to re-work its agenda and focus to be closer to the expected desires of the new administration. A narrowing of focus onto AI for national security seems like the right path for them—they have recruited some valuable technical experts, whose abilities will be a useful resource for other government teams seeking to use frontier AI.
As a result, a full winding down of USAISI seems unlikely. Although they are small (their technical team is a handful of people, the rest are policy staff), they have accumulated a number of well-known experts at the working level with strong track records. However, both hiring new talent and retaining existing experts will likely be even more challenging than previously. USAISI are known in the community to have a very restrictive conflict of interest policy that nixed at least one potential ex-industry-lab hire (which generally does not bode well for government’s ability to hire top AI talent).
Going forward, USAISI seems likely to still serve a useful function as an advisory body for various parts of the federal government with a stake in frontier AI, as well as a convenient point of engagement with the AI labs for some parts of pre-deployment testing.
Another rumored possibility, and perhaps a way to alleviate hiring inefficiencies, is that USAISI may be moved outside of Commerce. In many ways USAISI’s place in DOC is a historical accident born of Gina Raimondo’s interest in AI—given that much of USAISI’s testing work and coordination has been on national security risks, the Department of Energy is a potential natural home. The DOE has a much larger budget, significant computing expertise within the national labs, the existing DOE-DOC testing collaboration, and is separately running tests with several leading AI labs. Either the DOE or DOD would fit nicely as a home for USAISI, given the increasing interest in the testing and use of frontier AI for national security applications & risks.
International Collaboration
What then, for international collaboration on AI? Under Biden, the international AI governance landscape saw a great proliferation of mostly symbolic fora, agreements, dialogues, summits, policy frameworks, and commitments. Examples include the Hiroshima Process (via the G7), the OECD AI Principles, the UN AI Advisory Body & report, the AI Safety Summits in the UK and Seoul, and the International Network of AISIs.
Some of the most significant for frontier AI were the AI Safety Summits (although I am biased, having had a small hand in organizing the first), which were able to secure commitments from the leading companies & many countries to collaborate on testing for frontier AI systems. Despite appearing vague and high-level, these kinds of major international commitments can serve as a form of social pressure, since they can be cited in any engagement between countries and companies as a motivator for why the frontier AI labs should do, for example, joint safety testing.
This works fine if your goal is purely to impose political cost on companies that do not engage with government interest in AI deployments. However, there is nothing binding here—these commitments neither force companies to take action nor offer additional options (e.g., additional types of deployment mode or form factor); they simply raise the political cost for a company to follow a path that a government may disagree with, a cost that a company may decide is worth paying in a more high-stakes situation.
From an America First perspective, the motivation to engage significantly in internationalizing frontier AI governance looks quite weak given that all the main companies involved are U.S.-based, and so I do not expect as much engagement on AI evaluation & testing as under Biden (whose administration was more ideologically motivated to pursue that project).
The International Network of AISIs
The International Network of AISIs is an informal coordination mechanism between the governments of several countries to collaborate on the testing and evaluation of frontier AI systems. I say informal, because it is fundamentally bound by Memoranda of Understanding (MoUs) and other forms of non-hard law agreement.
The Network was preceded by the MoU between the U.S. and U.K. AISIs, which collaborated on joint safety testing of Anthropic’s Claude 3.5 Sonnet and OpenAI’s o1 model. These two AISIs were the first and are, as far as I know, the largest. As a demonstration of testing amongst a larger group, this pair recently ran a joint safety testing project on Meta’s Llama-3 405B together with Singapore AISI.
The Network’s activities are likely to be significantly affected by the fate of USAISI, the Network’s Chair. This is because, structurally, AI labs based in the U.S. have a strong incentive to work primarily or only with the U.S. federal government on safety testing of their models—the U.S. government is the most natural partner here, especially when it comes to testing on national security risks like CBRN. The Network acts as a mechanism to transfer this incentive, via U.S. AISI, to other countries with the capability to do safety testing, setting a precedent towards internationalization of this testing.
However, this mechanism only operates through the scope of testing USAISI conducts; if USAISI loses its mandate from the Biden AI EO to act as the central point for safety testing of frontier AI within the U.S. government, we should expect more testing to be done in other parts of the federal government, outside the scope of the International Network. This is doubly true since the Biden White House acted as a forcing function for this centralization, often proactively nudging parts of the government interested in frontier AI to collaborate with U.S. AISI.
Therefore, it is likely we’ll see less international collaboration for safety testing on frontier AI models. This has pros and cons: in some sense the Network can be viewed as an attempt to reduce centralization of governance over frontier AI, but on the other hand it may also bring increased security risks or cause confusion by adding too many additional actors with different definitions of safety.
But will national AI Safety Institutes other than the U.S. or U.K. be relevant to the global trajectory of AI? This move to internationalize evaluation-based AI governance faces a fundamental problem: the United States is home to all of the world’s leading frontier AI labs, except for (arguably) DeepSeek in China. In regulation and governance, other jurisdictions rely purely on the desire of foreign tech companies to profit from deploying there (e.g., the so-called Brussels effect, significantly overhyped by the EU Commission).
Almost none of the other countries in the Network have so far built sizeable safety testing teams staffed with technical experts, or shown an intention to do so. Singapore AISI, for instance, is primarily focused on encouraging more safety research in neglected directions via academia and on building evaluation tools for the ecosystem. Other countries (Canada, Australia, France, Japan) have announced or begun setting up their own AISIs, yet these look more like gestures towards AI’s importance from their respective governments than efforts likely to lead to highly productive research organizations. And does Kenya really need an AI Safety Institute? To what extent is pushing an international consensus on things like best practices for misuse risk evaluations actually useful, given the rapidly shifting state of evaluations research?
Finally, it’s worth noting the inclusion of the EU AI Office in the International Network, which is in many ways the “odd one out”. Unlike the other AISIs, the Office is a regulatory body within the EU Commission, tasked with implementing the EU AI Act. This has given NIST a delicate line to navigate, since the Network thus ties NIST to a foreign regulator, even if it is for informal collaboration purposes.
Securitization
The securitization of AI, which I define as work to improve the cyber and information security of frontier AI training and deployment (particularly model weights and IP), is increasingly a hot topic in the tech policy world. The last year has seen numerous public calls to improve this security, including Leopold’s Situational Awareness essay and the RAND securing model weights report. The primary motivation for this goal is to maintain the U.S. lead in AI capabilities by ensuring that other actors, including nation-state adversaries, cannot hack or otherwise obtain frontier AI capabilities, under the assumption that these capabilities will become increasingly important for national security.
As many observers have noted, the effectiveness of this strategy is much reduced when both China and open-source are close behind the frontier AI capabilities of “closed” industry labs like DeepMind, and where that capabilities gap may credibly narrow. Nonetheless, my personal intuition is that securitization is still very valuable to pursue, because it provides much greater optionality for long-term U.S. AI strategy. Many China hawks with connections to the Trump administration are also advocates for this view.
Relatedly, we see a strong desire to apply frontier AI systems for national security and military applications, including moves like OpenAI’s partnership with Anduril, Palantir’s partnership with Anthropic & AWS, and OpenAI’s board appointment of an ex-NSA general. Indeed, fine-tunes of leading frontier LLMs for defense applications are a likely way that frontier AI labs end up increasing their security in the short-term. The U.S. government’s interest in securitization of frontier AI is likely strongly correlated with the utility of these defense applications—and given the rapid progress in AI development towards autonomous agents, it is plausible that some applications could soon become a significant national security capability.
This overall direction also coincides with Silicon Valley’s increasing recognition, in the making since the Ukraine invasion, of the necessity of tech’s involvement in maintaining American security and military leadership, and the stakes of the geopolitical tensions with China. In stark contrast to the anti-military vibe of several years ago, defense tech is finally cool in SF—I attended a recent Palantir party where guests cheered at the host’s description of the “Stanford to crypto to AI to defense tech” pipeline.
Early moves by the government to encourage this security may include creating stronger partnerships between the national security world and the frontier labs. For example, this could include embedding government cyber and info security experts into labs to beef up their measures.
The DOE has the potential to play a role here, given that it contains the National Labs, which possess several AI compute clusters (both classified and unclassified) as well as significant high-performance computing expertise. If high-security AI training or deployment clusters are needed to support national security applications, the DOE is a natural home for their construction, in partnership with industry labs.
Energy, Semiconductors, and Compute
For context, there are already several excellent policy proposals arguing the case for serious investment in energy for U.S. datacenters, and in the compute build-out itself. In this, the new administration is likely to support similar high level industrial strategy themes to the Biden administration: building more energy supply, onshoring more chip manufacturing, and building more compute capacity. President Trump’s tariff agenda may become relevant here, since he floated the idea of putting tariffs on chips from Taiwan on the Joe Rogan podcast.
Even if AI capabilities stagnate, we should still expect energy demand to increase drastically over the next 5–10 years, driven by demand from batteries, semiconductors, and other manufacturing being onshored. On top of this, increasingly capable AI agents could be deployed at vast scale throughout the economy, using many times more compute than today, which means the U.S. is in danger of severely underbuilding its power supply and energy transmission infrastructure. Much recent industrial policy from both Democrats and Republicans recognizes this and advocates for huge investment in energy.
The Trump administration seems likely to continue this trend. One indicator is the proposed appointment of Jacob Helberg as Under Secretary of State for Economic Growth, Energy, and the Environment. Helberg is a prominent Silicon Valley Republican with deep connections to the AI industry, who has recently advocated for facilitating energy investment via permitting reform, including oil, gas, and nuclear, as well as reshoring manufacturing of all elements of the AI supply chain.
The CHIPS Act, which subsidizes domestic chip manufacturing, has received criticism from some Republicans, primarily due to provisions around union labor and climate research. However, its core idea of boosting U.S. chip production still receives bipartisan support. Given the urgency and looming supply bottlenecks, further legislation to incentivize private-sector investment in energy, semiconductors, or AI datacenters could be on the table for the new administration.
A Note on DOGE
DOGE’s focus is likely not particularly relevant to AI policy, though it may have some impact depending on which teams and offices Elon and Vivek direct it to focus on. To briefly summarize, the latest rumor about DOGE is that it:
is likely to operate analogously to Palantir: a talented team of software engineers, many of whom will be “forward-deployed” and embedded inside government agencies. These embedded SWEs will be able to see how things work, solve problems, and build tools within the bureaucracy, but also have rapid lines of communication to the White House, in case executive action can help fulfil DOGE’s mission;
will primarily focus on efficiency, both by slimming down staffing at key government offices and by building efficient technology to let government employees be more productive.
Most of the parts of the government DOGE could focus on to achieve the greatest financial savings are not that relevant to AI, but one suggestion might be this: dedicate some DOGE staff to the Bureau of Industry and Security (BIS) in the Department of Commerce. The BIS handles (a part of) the implementation and enforcement of export controls on semiconductors, and its function could plausibly become significantly more effective with DOGE assistance.
If you have any thoughts or disagreements with this piece, do post in the comments section or on Twitter, and let me know what you think! You can also reach out at mail [at] herbiebradley.com for a chat.