In an exclusive interview, Google’s new AI infrastructure chief says the tech giant has a “significant investment” planned. At current levels, Forbes projects it could be a very big number indeed.

On an earnings call last month, Google CEO Sundar Pichai said that the company would spend up to $185 billion on capital expenditures related to AI this year — an eye-popping figure that’s more than double the $90 billion it spent in 2025. But that could just be the start. Over the next several years, the tech giant’s data center spending will add up to a “significant investment,” Amin Vahdat, Google’s newly minted Chief Technologist for AI infrastructure, tells Forbes in his first interview in the freshly created role.
“Just in simple numbers, if there’s a 10 year quote, and we’re at $175 to $185 billion this year, one could imagine, assuming it’s not going to go down, that this could extend to some big number over 10 years,” he says.
We did the math. At $185 billion a year, eight years of spending would total roughly $1.5 trillion — slightly more than OpenAI has committed to spend over the same period. Extend that out to 10 years, as Vahdat noted, and Google’s total would approach $1.9 trillion.
Vahdat is clear that this is “not a promise” that Google would spend that much over the next 10 years. But the decade-long view he takes suggests the scope of Google’s bet. “The point here is that we are, at Google, investing at the highest levels,” he says.
There’s a big difference between Google’s data center ambitions and OpenAI’s: Google is a money-making machine. In the fourth quarter, Google parent Alphabet raked in $113 billion in revenue; for the full year, sales topped $400 billion for the first time in the company’s more than 25-year history. By comparison, OpenAI is spending at similar levels but brought in only about $13 billion in revenue last year — a tiny fraction of Google’s revenue, and less than half of Google’s cash reserves.
The seemingly insatiable demand for compute has been the central economic force of the AI era. It has shot Nvidia’s market cap up to an eye-watering $4.5 trillion. Project Stargate, an effort by OpenAI, SoftBank and Oracle to build $500 billion worth of AI infrastructure in the U.S., has been a marquee tech initiative to kick off President Trump’s second term, though progress on the effort has reportedly stalled. All told, big tech could pour an estimated $500 billion into AI data centers and chips this year alone, according to a report by Goldman Sachs.
The infrastructure build-out is so vast that it’s important to think about it over a long time horizon, Vahdat says. Building a single data center can take multiple years, and power needs to be procured far in advance. Some of that spend will go immediately to chips and data processing equipment in existing data centers, he says, while some will finance new sites. Last week, for example, Google inked deals with AES and Xcel, two utility providers, to supply energy to its data centers across the country.
A 15-year Google veteran, Vahdat joined the company after a career in academia as a researcher and professor, which included stops at Duke, the University of Washington and UC San Diego (and he had an early internship at Xerox PARC, the legendary Silicon Valley research lab). He joined Google in 2010 to work on computer networking, and rose through the ranks to take the reins of the company’s TPUs, or Tensor Processing Units, the tech giant’s custom AI chips. In December, he was promoted to oversee strategy for AI infrastructure — which includes chip development and optimization, data center buildout and energy investments — and reports directly to CEO Sundar Pichai.
Google’s TPUs were previously used only in-house for Google’s own infrastructure — to power consumer apps like Gmail and YouTube, and eventually to train self-driving cars and to develop and run AI models like Gemini. Now, they’re one of the industry’s go-tos: maybe not as popular as Nvidia’s top-of-the-line Blackwells, but still useful for pretraining and operating AI models at scale. Google first started selling access to them through a cloud service in 2018, letting other companies rent processing power. More recently, Google has inked high-profile deals, like a big contract with Anthropic, and has reportedly been in talks with Meta to use its chips. In December, Morgan Stanley estimated that TPUs could generate $13 billion for Google by 2027. “It is fair to say that the demand for cloud TPUs has been unprecedented,” Vahdat says, particularly in the last few years.
Core to the data center buildout is procuring the energy needed to power it — often a major target for critics. In August, Vahdat, Google Chief Scientist Jeff Dean and 10 other researchers and executives at the company co-published a paper aiming to contextualise AI’s power guzzling.
The paper says that the median prompt for Google’s Gemini AI model uses the same amount of energy it takes to power 9 seconds of television and consumes around five drops of water, which they write is “substantially lower than many public estimates.” (One report says large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town populated by up to 50,000 people.)
Because of backlash, other AI giants have pledged to pay more for electricity: Last month, AI rival Anthropic, maker of the well-regarded Claude chatbot, pledged to estimate and cover the costs of consumer energy price hikes that may come from its power usage. “I was really pleased to see that announcement from Anthropic,” Vahdat says, adding, “We’ll be saying more on our position on this shortly.”
The biggest challenge Google faces, Vahdat argues, is not simply scaling up but redesigning how infrastructure itself is built. Over the next five years, he expects data centers to shift away from bespoke construction toward more modular, repeatable designs — standardised blueprints capable of being replicated globally at unprecedented speed. That’s the kind of bet that could help cement Google’s place as a primary competitor in the AI race for years to come.
This story was originally published on forbes.com and all figures are in USD.