The US Department of Defense is planning to let AI companies like OpenAI and xAI train models directly on classified data inside secure, accredited data centers.
A US defense official confirmed to MIT Technology Review that the Pentagon plans to allow AI companies to train LLMs on classified data inside accredited secure data centers. The DoD has already reached agreements with OpenAI and xAI to operate models in classified settings. This goes beyond existing fine-tuned government models like Anthropic's Claude Gov, marking the first indication that LLM training — not just inference — will occur on classified data. The Pentagon will first evaluate model performance on unclassified data before proceeding.
This signals a coming bifurcation in AI model lineages: classified-trained variants of frontier LLMs will exist that civilian developers will never access or benchmark against. The architecture of secure training pipelines — air-gapped data centers, clearance-gated access, DoD-owned data — creates a parallel model development track with no public eval visibility. If you're building on OpenAI or xAI APIs, your model's civilian counterpart may increasingly diverge from its defense-trained sibling in ways you can't measure.
If you're building on OpenAI APIs for any government-adjacent use case, audit your contract terms this week to understand whether your application could qualify for FedRAMP authorization or DoD Impact Level 4/5 (IL4/IL5) environments; that is where the next wave of high-value government contracts will land.
Go to sam.gov and search 'AI large language model' filtered to contracts awarded in the last 90 days. In under 5 minutes you'll see the dollar volumes and agency names driving this procurement wave — useful for scoping where classified AI deployment is already happening.
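If you'd rather script that lookup than click through the site, SAM.gov exposes a public Opportunities search API. A minimal sketch of building the query URL is below; note the assumptions: the endpoint path (`api.sam.gov/opportunities/v2/search`), the parameter names (`title`, `postedFrom`, `postedTo`, the MM/dd/yyyy date format), and the api.data.gov key requirement are taken from the public API docs as commonly documented and should be verified before relying on them. Award dollar figures ultimately live in FPDS/USAspending; this API surfaces the notices themselves.

```python
from datetime import date, timedelta
from urllib.parse import urlencode

def build_sam_query(keyword: str, days_back: int = 90,
                    api_key: str = "YOUR_API_KEY") -> str:
    """Build a SAM.gov Opportunities API search URL for recent notices.

    Endpoint and parameter names are assumptions drawn from the public
    Opportunities API (v2) documentation; verify against the live docs.
    """
    today = date.today()
    start = today - timedelta(days=days_back)
    params = {
        "api_key": api_key,                        # free key from api.data.gov
        "title": keyword,                          # keyword match on notice title
        "postedFrom": start.strftime("%m/%d/%Y"),  # API expects MM/dd/yyyy
        "postedTo": today.strftime("%m/%d/%Y"),
        "limit": 100,                              # max results per page
    }
    return "https://api.sam.gov/opportunities/v2/search?" + urlencode(params)

url = build_sam_query("AI large language model")
print(url)
```

Fetching that URL (with a real key) returns JSON you can filter by agency to see who is driving the procurement wave, the same signal the manual sam.gov search gives you.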