
An exclusive tour of Amazon’s Trainium lab, the chip that’s won over Anthropic, OpenAI, even Apple
Key Takeaways
- The Trainium chip, developed at Amazon's Austin lab, is central to AWS's $50 billion investment deal with OpenAI.
- A private tour offered a look at the lab's chip development work tied to the deal.
- The lab's work targets lower-cost AI inference and a challenge to Nvidia's dominance.
Tour scope and strategic deal
Shortly after Amazon CEO Andy Jassy announced AWS's $50 billion investment deal with OpenAI, Amazon invited me on a private tour of the Trainium chip development lab in Austin, at (mostly) its own expense.
The piece frames Trainium as central to AWS’s strategy to lower AI-inference costs and challenge Nvidia, noting that AWS has pledged 2 gigawatts of Trainium computing capacity to OpenAI and that Anthropic and Bedrock already rely on Trainium chips.

It also mentions a Financial Times report that Microsoft believes OpenAI's deal with Amazon may violate Microsoft's own agreement with OpenAI, and notes that Frontier, the agent builder OpenAI is placing on AWS, could become a key part of OpenAI's business.
The article notes that 1.4 million Trainium chips are deployed across all generations, and that Anthropic's Claude runs on over 1 million Trainium2 chips, underscoring Trainium's current role in major workloads.
Tech design and performance
Amazon says Trainium offers a cheaper alternative to Nvidia GPUs, with Trainium3 UltraServers and new Neuron switches enabling a mesh in which every Trainium3 chip can talk to every other chip, reducing latency and improving price performance per watt.
Trainium3 is a 3-nanometer design fabricated by TSMC, with Marvell supplying other key components; the team notes that PyTorch workloads can now run on Trainium with essentially a one-line change and a recompilation.

Within the broader Trainium family, Trainium2 currently handles the majority of inference traffic on Amazon's Bedrock service.
AWS has also announced a partnership with Cerebras Systems to integrate Cerebras' inference chip into Trainium servers, promising low-latency AI performance.
The article also highlights that Trainium originated with a focus on training but has shifted toward inference as a major workload driver.
Lab environment and bring-up culture
The lab sits in a chrome-windowed building in Austin’s The Domain district, and the actual silicon bring-up space is a noisy, workshop-like room where engineers describe the process as a 24/7 push for three to four weeks around each bring-up.
The team describes each bring-up milestone as triggering a flurry of activity; in one case, the prototype Trainium3 had been air-cooled and needed a last-minute modification, so engineers ground down metal in a conference room to fit the heatsink.
There is a welding station for tiny integrated components, and a private data center nearby used for testing and quality assurance, with strict security protocols to enter the building and access Amazon’s area.
Andy Jassy publicly praised the lab in December, and leadership maintains ongoing oversight as the team pursues mass production.
Industry context and partnerships
Project Rainier, one of the world’s largest AI compute clusters, went live in late 2025 with 500,000 chips and is used by Anthropic.
The article notes that Apple praised Trainium’s predecessors, Graviton and Inferentia, in 2024, signaling industry interest in Amazon’s in-house chips.

The OpenAI deal makes AWS the exclusive provider for Frontier, OpenAI's new AI agent builder, creating a distinctive dynamic in the AI cloud market and drawing attention to Microsoft's own dealings with OpenAI.
Overall, the piece portrays Trainium as part of a broader strategy to provide cheaper, scalable AI infrastructure and to challenge Nvidia’s dominance in AI inference, while reinforcing that Anthropic and OpenAI deployments anchor Trainium’s real-world use.