News
Feb 25, 2026
NewDecoded

Image by matx
MatX, a semiconductor startup founded by former Google TPU architects Reiner Pope and Mike Gunter, has raised $500 million in a Series B funding round led by Jane Street and Situational Awareness LP, a significant milestone for the Mountain View company. The capital is earmarked for finalizing development of its flagship AI processor, the MatX One, and scaling its manufacturing.
The MatX One chip represents a total redesign of AI hardware, prioritizing the extreme throughput required by frontier research labs. Unlike traditional chips, this silicon uses a splittable systolic array to maintain high efficiency even when processing complex, irregular matrix shapes. By placing model weights in SRAM and Key-Value caches in HBM, the architecture achieves generation speeds exceeding 2,000 tokens per second for massive models.
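The 2,000-tokens-per-second figure is easiest to understand through a memory-bandwidth argument: during autoregressive decoding, every generated token must stream the full set of model weights past the compute units, so the bandwidth of whichever memory holds the weights caps throughput. The back-of-envelope sketch below uses purely illustrative numbers (none of them published MatX specifications) to show why moving weights from HBM into on-chip SRAM changes the ceiling by orders of magnitude:

```python
# Back-of-envelope model of why weight placement matters for decode speed.
# All numeric figures below are illustrative assumptions, NOT MatX specs.

def decode_tokens_per_sec(params: float, bytes_per_param: float,
                          weight_bandwidth: float) -> float:
    """Upper bound on tokens/sec when decoding is weight-bandwidth bound:
    each generated token streams every weight through compute once."""
    return weight_bandwidth / (params * bytes_per_param)

PARAMS = 70e9          # hypothetical 70B-parameter model
BYTES_PER_PARAM = 1    # 8-bit quantized weights (assumed)
HBM_BW = 3.35e12       # ~3.35 TB/s, HBM3-class off-chip bandwidth (assumed)
SRAM_BW = 200e12       # ~200 TB/s aggregate on-chip SRAM bandwidth (assumed)

print(f"weights in HBM:  {decode_tokens_per_sec(PARAMS, BYTES_PER_PARAM, HBM_BW):,.0f} tok/s")
print(f"weights in SRAM: {decode_tokens_per_sec(PARAMS, BYTES_PER_PARAM, SRAM_BW):,.0f} tok/s")
```

Under these assumed figures, HBM-resident weights cap out near 48 tokens per second while SRAM-resident weights clear 2,000, which is the intuition behind keeping weights in SRAM and relegating the bulkier but less bandwidth-critical Key-Value cache to HBM.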
Strategic focus remains a core tenet for the MatX team: the company deliberately ignores smaller workloads to optimize for large-scale performance, and has confirmed that its hardware will not support architectures such as convolutional neural networks or recommendation systems. This narrowing, the company says, allows the chip to deliver the highest FLOPS per square millimeter in the industry. With tape-out scheduled for the coming year, MatX is hiring rapidly to grow its roughly 100-person workforce. The funding round also drew notable participants including Spark Capital, Andrej Karpathy, and the founders of Stripe, underscoring industry belief in specialized silicon as the next stage for training and deploying the world's most capable artificial intelligence.
The emergence of MatX signals a definitive shift from general-purpose AI acceleration toward extreme specialization. While Nvidia remains the dominant force due to its established software ecosystem, MatX is betting that elite labs will trade flexibility for raw performance. By solving the memory wall through a hybrid memory approach, they are positioning themselves as a vital alternative for the 100-layer models that define current AGI research. This could lead to a fragmented market where general enterprise AI remains on GPUs while the most advanced models migrate to purpose-built, high-throughput silicon.