
US Tariffs Hand AI Race to China with Gift-Wrapped Compute
Congratulations to the brilliant minds who decided that making compute MORE expensive was the solution to winning the AI race! While the US slaps punitive tariffs on Chinese chips, China’s building entire compute factories faster than American companies can update their AWS infrastructure diagrams.
The Self-Inflicted Compute Crisis
Let’s review the strategic genius at work here:
- The US identifies AI as the most critical technology for future economic and military advantage
- Training state-of-the-art AI requires massive amounts of compute
- The US makes compute more expensive and harder to access for its own companies
- Everyone does the surprised-Pikachu face when the strategy predictably backfires
Our startups are now priced out of training large models while Chinese companies are scaling up with government backing. It’s like watching someone try to win a race by shooting themselves in the foot.
The Infrastructure Gap
The infrastructure gap is getting so wide you could fit NVIDIA’s entire quarterly revenue in it. Consider these stark realities:
- China added more data center capacity in 2024 than the US did in the previous three years combined
- The average cost to train a frontier AI model in the US is now 3-4x higher than in China
- Chinese cloud providers offer AI training instances at 60% of the cost of AWS equivalents
- China’s semiconductor investments have doubled year-over-year while US chip grants get stuck in bureaucratic limbo
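Taking the article's figures at face value, the "60% of AWS pricing" claim alone compounds quickly. A quick back-of-envelope sketch (the 60% ratio is the article's claim; the budget and hourly rate below are invented placeholders, not real pricing):

```python
# Hypothetical purchasing-power comparison for a fixed training budget.
# The 0.60 pricing ratio comes from the article's claim; the budget and
# base hourly rate are invented placeholders for illustration only.

def gpu_hours_for_budget(budget_usd: float, rate_per_gpu_hour: float) -> float:
    """How many GPU-hours a fixed budget buys at a given hourly rate."""
    return budget_usd / rate_per_gpu_hour

BUDGET = 10_000_000           # fixed training budget in USD (hypothetical)
US_RATE = 4.00                # placeholder US cloud rate, $/GPU-hour
CN_RATE = US_RATE * 0.60      # "60% of the cost of AWS equivalents"

us_hours = gpu_hours_for_budget(BUDGET, US_RATE)
cn_hours = gpu_hours_for_budget(BUDGET, CN_RATE)

print(f"US:    {us_hours:,.0f} GPU-hours")
print(f"China: {cn_hours:,.0f} GPU-hours ({cn_hours / us_hours:.2f}x the compute)")
```

Same budget, two-thirds more compute on the other side of the ledger, before the 3-4x frontier-training multiple even enters the picture.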
The Startup Exodus
I’ve spoken with dozens of AI startup founders who are quietly moving their training operations overseas. One founder (who requested anonymity) told me:
“We raised $20M last year, which seemed like a lot. Then we priced out training our model in the US versus overseas. The difference was so dramatic that we could either train a mediocre model here or a state-of-the-art model elsewhere with the same budget. What would you choose?”
The Policy Paradox
The stated goal of these tariffs was to “protect American technology leadership” and “ensure national security.” The actual effect has been to:
- Make American AI companies less competitive globally
- Force US startups to choose between inferior models and foreign training
- Accelerate China’s computational self-sufficiency
- Widen rather than narrow the infrastructure gap
It’s almost as if policymakers don’t understand how compute-intensive modern AI development actually is.
The Bureaucratic Response
When asked about alternative strategies, government officials suggested “more meetings” and “additional committees” as if bureaucracy ever outcompeted anything except maybe a garden snail.
One recent industry roundtable featured seven government agencies, twelve congressional staffers, and zero concrete actions. The official report proudly announced a “commitment to explore the formation of a working group to evaluate potential frameworks for assessing computational resource allocation strategies.”
Translation: “We’ll think about thinking about maybe doing something eventually.”
The Hyperscaler Advantage
The only American entities that can still compete on compute are hyperscalers like Google, Microsoft, and Amazon. This is creating a dangerous concentration of AI power in just a few companies, while the long tail of innovation withers.
Small AI labs and research universities—traditionally America’s innovation engine—are finding themselves priced out of frontier research entirely.
The Geopolitical Miscalculation
The fundamental miscalculation was assuming that restricting access to Western technology would slow China’s AI progress. Instead, it accelerated China’s investments in domestic alternatives and computational efficiency.
It’s the classic “necessity is the mother of invention” scenario playing out in real-time. By creating artificial scarcity, these policies inadvertently sparked a wave of innovation in precisely the areas America hoped to contain.
The Path Forward
If America wants to actually compete in the AI race, we need to:
- Make compute dramatically cheaper and more accessible for US companies and researchers
- Create national research compute clusters available to startups and universities
- Invest in semiconductor manufacturing at 10x the current pace
- Streamline the bureaucracy around technology grants and investments
- Recognize that abundance, not restriction, is the path to technological leadership
The Bottom Line
The AI race isn’t going to be won by the country with the most restrictive policies or the most committees—it will be won by whoever can build and deploy the most computational resources most efficiently.
Right now, that’s not the United States. And until policymakers recognize that compute is the lifeblood of AI progress, we’ll continue falling behind while congratulating ourselves on our strategic brilliance.
Sometimes I wonder if we’d make better progress by replacing policy committees with a single engineer and a spreadsheet showing the actual costs of training models.
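In that spirit, here is roughly what that one-engineer spreadsheet might look like as a script. Every figure below is an invented placeholder, not real pricing; the point is that the core math is a single multiplication:

```python
# A toy "training cost spreadsheet": cost = GPUs x hours x hourly rate.
# All scenario figures are invented placeholders for illustration only.

def training_cost(num_gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    """Total compute bill for a single training run."""
    return num_gpus * hours * rate_per_gpu_hour

# (GPU count, wall-clock hours, $/GPU-hour) -- all hypothetical
scenarios = {
    "small fine-tune":  (64,     200,  4.00),
    "mid-size run":     (1_024,  720,  4.00),
    "frontier attempt": (16_384, 2_160, 4.00),
}

for name, (gpus, hours, rate) in scenarios.items():
    print(f"{name:>16}: ${training_cost(gpus, hours, rate):>14,.0f}")
```

One function, three rows, and you already know more about the stakes than most committee reports manage to convey.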

About Mike Terminal
The automation-obsessed DevOps guru who believes any task done twice is a task that should be scripted. Mike has strong opinions about your Docker setup, your CI pipeline, and especially your 'minimal viable infrastructure.' He can smell an overengineered solution from miles away and predict the exact moment your microservice architecture will collapse under its own weight.