
UK AI data center infrastructure – if they build it, will they come? Lessons from the US


The AI data center investment announced as part of the UK’s AI Opportunities Action Plan is essential to its hopes of becoming an AI superpower. Data center vendors, including Vantage, Kyndryl, and NScale, have announced £14 billion to build AI data centers out in the boonies. This pales compared to the hundreds of billions of dollars US cloud vendors are pouring into new AI infrastructure, not to mention the $0.5 trillion OpenAI and SoftBank have committed to investing.

This will revitalize the Midlands, make the UK a leading AI country, and create many jobs. At least 13,250, the UK Government declares. And then I read the fine print. In Vantage’s case, that’s 10,000 construction jobs that go away in a few years and 1,500 permanent jobs way down the road.

This massive investment in physical AI infrastructure may be good for the UK: it supports more sovereign AI, creates local testbeds for new kit, and can be purpose-built for new AI innovations. It might also raise water and power prices and increase power outages. At least data centers don’t pollute the way a traditional factory or steel mill does.

However, I decided to consider this matter through three lenses that have come into view lately. What if AI infrastructure is indeed a highly perishable commodity? How do the three AI scaling laws apply to planning, building, and using this proposed infrastructure? And what lessons can the UK learn from Northern Virginia’s experience as ground zero for the data center experiment?

A perishable commodity?

I recently interviewed Jeffrey Emanuel, who weighed in on the financial market case and why he believes AI-chip heavyweight Nvidia is overvalued. The upshot is that the latest and greatest AI chips are highly perishable goods, since next year’s chips will be markedly more efficient and cost-effective than today’s.

One implication is that if cloud and data center vendors can keep the chips busy in the short run, investments in new data infrastructure built on the latest silicon could pay off before the value depreciates too much. However, chips that sit underutilized could leave an operator with higher costs than newer competitors that take advantage of better chips or architectures down the road.
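
To make that tension concrete, here is a minimal back-of-the-envelope payback sketch in Python. Every figure in it is a hypothetical placeholder rather than a number from Emanuel’s analysis; the point is simply that when newer chips erode rental prices over time, an underutilized fleet may never earn back its cost.

# Toy GPU payback model. All figures below are hypothetical placeholders,
# not vendor or market numbers.
CAPEX_PER_GPU = 30_000        # purchase plus install cost, USD (assumed)
RENTAL_PER_HOUR = 4.00        # rental rate at deployment, USD/GPU-hour (assumed)
PRICE_DECAY_PER_YEAR = 0.35   # annual rental-price erosion as newer chips land (assumed)
OPEX_PER_HOUR = 0.40          # power, cooling, and operations, USD/GPU-hour (assumed)
HOURS_PER_MONTH = 730

def years_to_payback(utilization: float) -> float | None:
    """Step month by month; return years until margin covers capex, or None."""
    earned, rate = 0.0, RENTAL_PER_HOUR
    for month in range(1, 121):  # give up after ten years
        earned += max(rate - OPEX_PER_HOUR, 0.0) * utilization * HOURS_PER_MONTH
        rate *= (1 - PRICE_DECAY_PER_YEAR) ** (1 / 12)  # compounding price decay
        if earned >= CAPEX_PER_GPU:
            return month / 12
    return None

for u in (0.9, 0.6, 0.3):
    t = years_to_payback(u)
    print(f"utilization {u:.0%}:", f"payback in ~{t:.1f} years" if t else "never pays back")

With these made-up inputs, a fleet at 90% utilization pays back in about a year and a half, at 60% it takes roughly three years, and at 30% it never does, which is the Emanuel argument in miniature.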

One shiny point in the UK AI Action Plan is a call to build out these facilities to increase AI Research Resource (AIRR) capacity twentyfold. The plan is careful to argue that this need not exacerbate budget shortfalls:

Given trends in hardware performance, this would not mean a 20x increase in investment if the government procures smartly. Such expansion is needed to keep up with the expected increases in computing power that we should assume will be required for AI workloads. This is unlikely to slow down; we must “run to stand still.” As part of this, the government should ensure that the public computing ecosystem hosts a range of hardware providers to avoid vendor lock-in and provide value for money.

This seems quite promising. Amid the various efforts to quantify the benefits of AI data centers, there are several other metrics for judging how well you are doing, or planning to do. From a political perspective, these include dollars or pounds invested and jobs directly created; the latter are ephemeral, as previously noted. The data center community tends to focus on megawatts or gigawatts of capacity, which is more meaningful because it reflects the electrical infrastructure and heat dissipation a given facility is specced for and maps to operational costs.

There is something profoundly telling about trying to distill and organize around a metric for how much AI value a given level of pounds, jobs, and megawatts buys; it could help prioritize efforts to improve all of these things. However, it should be observed that openly optimizing the ratio of pounds or jobs per unit of AIRR capacity is probably political suicide, so we’ll respect the polite fiction underpinning it here.
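
If one were inclined to compute such a metric anyway, it takes only a few lines. The sketch below is a toy illustration: the site names, the figures, and even the notion of “AIRR units delivered” are invented for the example, not drawn from the Action Plan.

# Toy "value per input" comparison for hypothetical data center projects.
# All names and figures are invented for illustration.
projects = [
    # (name, capex in £ millions, permanent jobs, capacity in MW, AIRR units delivered)
    ("Site A", 1_200, 300, 80, 40_000),
    ("Site B", 2_500, 500, 150, 60_000),
]

for name, capex_m, jobs, mw, airr in projects:
    print(
        f"{name}: £{capex_m * 1e6 / airr:,.0f} per AIRR unit, "
        f"{airr / mw:,.0f} AIRR units per MW, "
        f"{airr / jobs:,.0f} AIRR units per permanent job"
    )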

Scaling laws

Shortly after the DeepSeek R1 launch, Nvidia’s top brass alluded to multiple scaling laws that might run counter to some of the observations in the Emanuel essay. The short answer is that they don’t, at least not directly, since most of his essay focused on the various forces exploring ways to breach the moats that have given Nvidia a virtual lock on the AI chip market.

But these scaling laws are worth considering, especially since Nvidia recently posted a deeper dive into three scaling laws that drive smarter and more powerful AI and that argue for diverse approaches to building data centers.

  1. Pre-training scaling – This is the original law, discovered by OpenAI, that by increasing training dataset size, model parameter count, and computational resources, developers could expect predictable improvements in model intelligence and accuracy. Google DeepMind researchers subsequently wrote a research paper with more details (a sketch of the resulting formula follows this list). One missing bit is all the money spent chasing down promising ideas that don’t work and aren’t reported on much because it’s a bad look.
  2. Post-training scaling – Foundation models get better when you fine-tune, distill, or apply reinforcement learning to them after the fact. Nvidia estimates that people spend thirty times as many compute resources on the best models post-training as was required to train the originals.
  3. Test-time scaling – That’s their term, not mine; I prefer runtime scaling. This covers all the things you can do when running AI models in production (technically called inference), then across many agents, mixtures of experts, and other techniques to improve results. The Nvidia blog mentions chain-of-thought prompting, sampling with majority voting, and search, all of which enable ‘reasoning.’ The important thing it does not mention is that this also requires scaling and improving the efficiency of an entirely separate inference infrastructure from the one used for training.
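
For the curious, the pre-training law has a compact standard form. Following the DeepMind (Chinchilla) paper, pre-training loss is commonly modeled as a function of parameter count N and training tokens D; the constants quoted below are the paper’s fitted values, given approximately:

L(N, D) \;=\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad E \approx 1.69,\; A \approx 406.4,\; B \approx 410.7,\; \alpha \approx 0.34,\; \beta \approx 0.28

Minimizing this loss under a fixed compute budget (roughly C ≈ 6ND floating-point operations) yields the paper’s headline prescription: scale parameters and tokens in roughly equal proportion, at about 20 tokens per parameter.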

The big takeaway is that we will need many more chips and data centers to house them. However, enterprises and AI data center providers will see the best results using various chips and architectures optimized for a given use case. Two years ago, it was all about training. A year ago, post-training. DeepSeek reminded the industry that we must start including the inference aspect, or ‘test-time scaling,’ in the mix.

The last bit is particularly interesting because the infrastructure requirements for running AI in production will grow far more dramatically than the others, whether through better chips or better algorithms. This is a tremendous opportunity for UK innovators to figure out how to do this more efficiently, whether in nearby consumer data centers or on enterprise edge infrastructure.

Northern Virginia

I grew up in Fairfax County, Virginia, which, for simple reference, is essentially one county away from Washington, DC, and one county away from Loudoun County, which has become the epicenter of data center construction in the whole world. During my youth, the one-lane road became a two-lane highway, and the landscape transitioned from Civil War cemeteries and farms to the world’s largest concentration of data centers.

Let’s put that in perspective. According to Data Center Map, the US has 3,195 data centers, compared with 401 in Germany, 326 in China, and 374 in the UK (136 of them in London). Northern Virginia alone has 536. That’s a lot of data centers for an area that does not have much going for it economically besides many data centers. It’s geographically about as big as London.

Fascinatingly, Northern Virginia has one of the world’s lowest data center vacancy rates by market, at only 0.9%, despite a significant recent increase in inventory. In contrast, Europe is at 10%. It also has higher net absorption, which measures how many additional megawatts of power customers bought per year: 400 MW in Northern Virginia last year, compared with 125 MW in London.

Here are some interesting takeaways from Loudoun County’s Ashburn District Supervisor Mike Turner. He reports that this “Data Center Alley” has the world’s highest concentration of data centers, with 200 built and 117 in the development pipeline. However, with the rise of AI, the county’s energy use increased 240%, from 1 gigawatt to 3.4 gigawatts of power consumption, and an estimated 11.59 gigawatts will be needed by 2028.

Meanwhile, Dominion Energy, the local electricity provider, struggles to meet demand. It’s looking at firing up new power plants and installing expensive direct-current interconnections with other regions.

Against this backdrop, the formerly agrarian community now sees an estimated $895 million in data center and personal property tax revenue, which accounts for the vast majority of its $940 million operating budget. As a result, it has the region’s lowest real estate property tax rate, 25% lower than its neighbors’. For reference, a data center costs the county about $0.04 in services per $1 of tax revenue, compared with $0.25 for other businesses, and data centers put very few cars on the road or kids in schools. That said, energy costs may rise for residents, since data center owners pay relatively higher grid interconnection costs than consumers.

My take

When I first started writing this article, my thesis was that incentivizing more data center construction in the Midlands was not a good idea: politicians overemphasize the short-term job gains and the value of making it easy for vendors to build more AI data centers, then gloss over the details.

That glossing over is a mistake; building traction requires surfacing these things. But on further analysis, I don’t think the net long-term effects are all that bad, even for the local communities. They may pay a bit more for power, but far less in taxes, and they get better-funded city services and schools to boot. It’s not like these things create pollution, just more ugly buildings, and that sort of thing is easy to fix with a few cosmetic upgrades.

The upside is that this kind of data infrastructure creates more opportunities for Brits to try their various ideas for improving AI and the world. Besides, there might be more to enjoy on the long ride from Cambridge to Oxford if the backers include a few more artistic architects. (Nvidia’s new HQ is quite stunning.)


