Meta is in negotiations with Google to spend billions of dollars on the company’s artificial intelligence chips for future data-center expansion, according to a report by The Information. The discussions reflect a growing push by major technology firms to secure alternative chip suppliers amid intense global demand for advanced AI hardware.
Under the proposed arrangement, Meta would begin renting Google’s tensor processing units (TPUs) through Google Cloud as early as next year, with a larger plan to purchase significant volumes of the chips for its own data centers starting in 2027. TPUs are Google’s custom-built processors designed to accelerate large-scale AI workloads and have been pitched as a more cost-effective alternative to Nvidia’s industry-leading chips.
Google has reportedly set a goal for its TPU business to eventually capture the equivalent of 10% of Nvidia’s annual revenue, positioning the chips as an option for companies seeking to expand computing capacity while maintaining tighter security controls. The pitch comes as Nvidia continues to face supply constraints, leaving major AI developers searching for alternative sources of processing power.
Google Cloud already offers its TPUs for rent, attracting firms that need immediate access to large-scale AI computing without waiting months for Nvidia hardware. Securing Meta as a long-term buyer would mark one of the most significant wins for Google’s chip division to date.
Meta, Google and Nvidia did not respond to requests for comment on the report, and Reuters said it could not independently verify the details.
Meta has been rapidly increasing its investment in AI infrastructure as it races against rivals such as OpenAI, Microsoft and Google to build the next generation of large-scale models. Earlier this year, the company announced plans to invest $600 billion in U.S. infrastructure and jobs over the next three years, with a major portion allocated to AI data centers.
Since 2022, Meta has been one of Nvidia’s most prominent customers, reportedly accumulating a vast fleet of graphics processing units to support the training and deployment of its growing suite of AI models. The company relies on high-performance systems to serve more than 3 billion users across Facebook, Instagram, WhatsApp and Threads.
Shifting part of its future hardware pipeline to Google TPUs would diversify Meta’s supply chain at a time when demand for AI computing has outpaced global production capacity. Analysts say such deals reflect an evolving competitive landscape in which major technology companies are rivals not only in software and services but, increasingly, in the semiconductors that power the AI boom.
Negotiations between Meta and Google are expected to continue as both firms explore long-term strategies to meet rising requirements for scalable, energy-efficient AI infrastructure.
