
Meta unveils four new in-house AI chips (the MTIA 300, 400, 450, and 500) developed with Broadcom on RISC-V architecture and manufactured by TSMC. The MTIA 300 was deployed weeks ago for content ranking; the remaining three inference chips are slated to ship between early and late 2027, roughly one every six months.
Why it matters
Meta's accelerated chip development cycle directly challenges the traditional two-to-three-year semiconductor timeline, addressing the mismatch between the pace of AI model evolution and hardware availability. The move gives Meta pricing leverage and supply chain diversification beyond Nvidia and AMD, a critical strategy as enterprise AI infrastructure costs continue to escalate, and particularly relevant given Meta's simultaneous large-scale GPU purchases from those same vendors.
What to do
Evaluate your organization's AI chip vendor concentration risk now, especially if you depend heavily on a single-source GPU supplier. Consider hybrid infrastructure strategies that blend commercial accelerators with custom silicon for high-volume, stable workloads such as recommendation engines, where performance per dollar matters more than cutting-edge capability.