Data Centers Should Fully Pay Their Own Way
Post 3 of 10: The Potential Structural Transformation of the U.S. Electric Utility Industry
Here is the thing most people miss when they debate data centers and electricity costs:
The grid that exists today is largely adequate for the people already connected to it. The substations, distribution lines, transmission corridors, and transformers were sized over decades to serve homes, commercial buildings, factories, hospitals, and schools. Those customers are already there. Yes, there is maintenance and normal replacement, but the infrastructure largely works for them.
Utilities are not announcing capital plans in the tens of billions of dollars each because the existing grid is broken. In aggregate, utility capital plans across the sector now exceed $1 trillion over five years, according to Morningstar. The driver is an entirely new class of customer: customers that in some cases consume as much electricity as a mid-sized city, and want to be served under the same rate structure designed for people who run dishwashers and air conditioners.
We are not debating whether to modernize a failing grid… we are debating who should pay for a new grid being built to serve new customers.
The answer is not complicated. The new customers should pay their own way.
Endless Waiting
Uber would never tell a driver trying to join the platform: “We appreciate your interest. Our network is currently at capacity in your region for a few hours each year. Please plan for a 7-year wait and we will notify you when capacity becomes available.” That is an absurd sentence. Uber’s entire financial logic depends on matching supply with demand quickly and at a price. The price signal is the coordination mechanism. If a driver or rider wants to get on the road tomorrow, Uber says yes, and prices reflect current conditions.
But utilities tell new customers about a multi-year wait every day. In major load hubs like Northern Virginia and Columbus, Ohio, the time from an interconnection request to an energized site now spans four to seven years according to industry data. Berkeley Lab’s annual interconnection queue analysis found that the median time from request to commercial operation has more than doubled since 2008, reaching nearly five years for projects completed in 2023. The wait varies by market: ERCOT in Texas processes large load interconnection in roughly a year or two. PJM, which serves the densest data center market in the world, is still working through years of queue backlog. Microsoft told FERC that timely access to power remains the single biggest obstacle to deploying advanced computing infrastructure in the United States.
So why does the utility behave this way? The answer is not cultural or operational. It is financial. And it connects directly to what this series has been building toward.
“Show Me an Incentive and I’ll Show You an Outcome” — Charlie Munger
The reason utilities resist both speed and cost discipline for new large loads comes down to a single fact established in Post 1: cost-of-service regulation allows utilities to earn returns on equity that substantially exceed the market cost of that equity. That excess spread, roughly 300 to 400 basis points at most major utilities today, is the engine that drives all the dysfunction on the grid.
Follow the logic:
Cost-of-service regulation allows excess returns on capital, currently around 9-11% authorized ROE against a true market cost closer to 6-7%
Excess returns make every dollar of rate-based capital valuable
So utilities are financially incentivized to maximize rate base, not to maximize outcomes
Maximizing rate base means building infrastructure for new large loads and socializing the cost to all customers, rather than passing costs directly to the causing customer outside of the rate base
Which is why utilities resist extending the Causation Principle (customer causing the cost pays the cost) beyond the direct connection point
Which is why interconnection queues back up: the utility has no financial incentive to move fast, and every incentive to build and monopolize the capital deployment
Which is why existing residential customers pay for infrastructure they did not cause and cannot control: the cost of an excessively growing rate base lands in their bills
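The first link in that chain can be put in numbers. Here is a minimal sketch, using the post's round figures (9.5 percent authorized ROE against a 6.5 percent market cost of equity) and an assumed 50 percent equity share of rate base; the function name and the $1 billion example are illustrative only:

```python
# Illustrative sketch (not a rate-case model): why every dollar of rate base
# is worth fighting for under cost-of-service regulation.
# Numbers are the post's round figures; the 50% equity share is an assumption.

def excess_equity_earnings(rate_base, authorized_roe, market_cost_of_equity,
                           equity_share=0.50):
    """Annual earnings above the market cost of equity on a given rate base."""
    equity = rate_base * equity_share
    return equity * (authorized_roe - market_cost_of_equity)

# A $1 billion substation-and-transmission package placed in rate base:
annual_excess = excess_equity_earnings(1_000_000_000, 0.095, 0.065)
print(f"Annual excess equity earnings: ${annual_excess:,.0f}")  # $15,000,000
```

At a 300-basis-point spread, a single billion-dollar package yields roughly $15 million a year of earnings above what the equity actually costs, every year of the asset's depreciable life.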
Now run the reform scenario:
Market-based ROE eliminates the excess earnings on assets
The utility earns market-normalized returns whether the substation sits on its balance sheet or the hyperscaler’s
It is genuinely indifferent to whose capital is doing the work so long as the assets are coordinated as part of the network
The financial reason to resist cost causation disappears, and utilities become open to hyperscalers paying their own way for every cost they create
What remains is a clean platform operator with an incentive to connect new customers fast, and get paid for doing it
This is why the reforms in Posts 1, 2, and 3 are not separate ideas. They are the same idea at three levels of abstraction.
Fix the ROE structure.
Pay for outcomes (speed to power).
Apply the causation principle.
Each one reinforces the others. And none of them work without the others.
It also reveals something important about the opposition utilities will mount to any of these reforms individually. The resistance is not principled. It is financial. And it dissolves once the financial structure changes to market-based ROE plus PBR.
What PBR Should Reward
In my last post, I described the right sequence for reforming utility incentives: set a market-based ROE, then pay explicitly for the outcomes customers actually want.
Existing customers on the grid today care about three things:
Reliability
Affordability
Sustainability [sometimes]
Those are the dimensions of the regulatory trilemma that most state PBR frameworks are built around.
New large customers care about all of those. They also care about a fourth thing that existing customers are less focused on: Speed to Power.
A well-designed PBR framework should pay utilities directly for interconnecting new supply and demand to the system. Not as a side incentive. As a core payment mechanism. If a utility connects X megawatts of new large load, it earns a bonus. If it connects that load under a flexible service arrangement, where the customer accepts curtailment rights in exchange for a faster queue position, it still earns the bonus. Flexible connections count. The customer chose speed over firmness in a voluntary financial transaction. The utility delivered speed. That is a PBR outcome worth paying for.
The reason flexible connections matter goes beyond queue management. When a new large load accepts flexible service, it becomes the first to absorb a constraint event. The last customer to come online is the first to flex off. The utility can set a strike price above which the large load will reduce consumption on demand. This is the same logic as a capacity market, applied at the distribution and substation level. A flexible connection also lets the large load quantify the value of behind-the-meter batteries or generation that firm up its supply, and weigh that against the value of speed to power. Or it can wait for firm power. Its choice. But that is a choice it does not have today.
What this unlocks is significant. More megawatt-hours flowing through the existing physical system, without new wires. The grid built for everyone else gets used more intensively, which spreads fixed costs over more units sold and puts downward pressure on rates for existing customers. The utility earns performance payments for making this happen. The new customer gets power faster than it otherwise would. Everyone’s interests are aligned.
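To make the mechanism concrete, here is a hedged sketch of such a payment and the curtailment strike. The $/MW rate, the 24-month target, the uplift cap, and both function names are invented placeholders, not any commission's actual tariff terms:

```python
# Sketch of a speed-to-power PBR payment and a flexible-service strike price.
# All rates and targets below are placeholder assumptions for illustration.

def speed_to_power_payment(mw_connected, months_to_energize,
                           rate_per_mw=10_000, target_months=24):
    """Bonus for energizing new large load; beating the target pays more.
    Flexible connections qualify on the same terms as firm ones."""
    if mw_connected <= 0 or months_to_energize <= 0:
        return 0.0
    speed_factor = target_months / months_to_energize
    return mw_connected * rate_per_mw * min(speed_factor, 2.0)  # capped uplift

def should_curtail(wholesale_price, strike_price):
    """A flexible load reduces consumption on demand once the price
    it agreed to (the strike) is reached."""
    return wholesale_price >= strike_price

# 300 MW energized in 18 months earns a 24/18 uplift on the base bonus:
print(f"${speed_to_power_payment(300, 18):,.0f}")      # $4,000,000
print(should_curtail(wholesale_price=120.0, strike_price=95.0))  # True
```

The design choice worth noting: speed is rewarded symmetrically whether the connection is firm or flexible, so the utility has no reason to steer customers away from curtailable service.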
And the power grid will continue to be dysfunctional until the incentives get fixed.
The Causation Principle
FERC has a long-standing doctrine called the cost causation principle. It holds that costs should be allocated to those who cause them to be incurred and who benefit from them. The Commission reaffirmed this principle unanimously in its December 2025 order directing PJM to revise its tariff for large co-located loads, noting that co-located loads benefit from grid services and should contribute to those costs accordingly.
The DOE, in October 2025, directed FERC to initiate a rulemaking to standardize how loads above 20 megawatts interconnect to the transmission system. FERC’s proposed framework leans toward 100 percent participant funding, meaning large load customers pay the full cost of the network upgrades their projects trigger.
That is the right instinct. The current application is too narrow.
Most people in the industry know that when a hyperscaler connects to the grid today, the dedicated substation and the radial line running directly to the facility are already paid for by the hyperscaler upfront through a mechanism called a Contribution in Aid of Construction, or CIAC. Because the utility did not provide the capital, those assets are excluded from rate base, and the utility earns no authorized return on them. The causation principle is already being applied at the direct connection layer.
The gap is everything upstream and indirect: the transmission upgrades triggered by the new load, the generation capacity additions planned around data center demand forecasts, and the distribution system reinforcement upstream of the dedicated connection point. Those costs go into rate base and get spread across all customers for 30 to 40 years. They should not be.
The principled case for extending CIAC logic to the full stack is straightforward. Call it the counterfactual reliability test.
The question is whether the grid was meeting applicable NERC reliability standards before a specific load interconnection request was filed. NERC reliability standards are the legal definition of grid adequacy. If the answer is yes, then any upgrade required to maintain those standards after the new load connects is, by definition, caused by the new load. It is not remediating a pre-existing deficiency. It would not exist without the data center. That is causation in its purest form, and it is auditable using existing regulatory standards rather than requiring any new analytical framework.
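The test can be stated as a simple decision rule. Here is a minimal sketch with the inputs collapsed to booleans (a real screen would rest on NERC TPL planning studies); the function name and labels are illustrative:

```python
# Minimal sketch of the counterfactual reliability test described above.
# Inputs are simplified booleans; an actual screen would run NERC
# transmission planning (TPL) studies, not accept flags.

def cost_assignment(met_nerc_standards_before_request: bool,
                    upgrade_needed_after_load_added: bool) -> str:
    """Assign an upstream upgrade either to the new load or to rate base."""
    if not met_nerc_standards_before_request:
        # Pre-existing deficiency: remediation is ordinary utility work.
        return "rate base (existing deficiency)"
    if upgrade_needed_after_load_added:
        # The system was adequate, then the new load made it inadequate:
        # causation in its purest form.
        return "new load pays (CIAC)"
    return "no upgrade required"

print(cost_assignment(True, True))   # new load pays (CIAC)
print(cost_assignment(False, True))  # rate base (existing deficiency)
```

The point of the rule is its auditability: both inputs are already measured under existing NERC compliance processes, so no new analytical framework is needed.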
This framing also dispenses with the strongest version of the “existing grid is adequate” objection. The argument is not that every circuit is in perfect condition. It is that the system was meeting its legal reliability baseline before the new load showed up. Normal maintenance and repairs should be rate based. Step changes should not.
FERC’s own Advanced Notice of Proposed Rulemaking (ANOPR) on large load interconnection acknowledges the tension directly. Proponents of socializing upstream costs argue that once built, transmission improves reliability, reduces congestion, and enables power flows for everyone. The CSIS analysis of the ANOPR frames this as reflecting the physical and economic reality of shared infrastructure. But this argument conflates incidental system benefit with causal responsibility. The fact that a substation built for a 300-megawatt data center campus also provides marginally improved voltage support to nearby residential customers does not make those residential customers the cause of the investment. They did not need the upgrade. They did not request it. The upgrade would not exist without the data center.
Utilities will argue that upstream upgrades provide broad network benefits that accrue to all customers, not just the causing load. Where a transmission upgrade genuinely improves reliability or reduces congestion for hundreds of thousands of existing customers, some cost sharing is defensible. But that test should be applied honestly and specifically, decomposing the actual beneficiaries, not used as a blanket justification for socializing every dollar of infrastructure that a hyperscaler triggered.
The causation principle is advancing on two separate legal tracks simultaneously. At the federal level, in response to the ANOPR’s invitation for comment on a crediting mechanism, state commissions, utilities, and technology firms proposed an intermediate model for transmission costs: large loads would fund upgrades upfront but receive partial refunds or credits if those facilities later delivered system-wide benefits. At the state level, Virginia, Minnesota, and Oregon have each enacted or proposed mechanisms requiring large loads to bear distribution and substation costs directly. Together, these two tracks cover the full stack. Both validate the same premise: the new load is the primary cause, and system-wide benefit is an adjustment credit applied on top of that assignment. That is not socialization. That is an incidental benefit adjustment on top of a causation-based cost allocation. It is the right structure.
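The arithmetic of that crediting structure is simple. A sketch, assuming the benefit share would come from a transmission planner's verified production-cost and reliability studies; the function name and figures are placeholders:

```python
# Sketch of the upfront-funding-with-credits model described above.
# The benefit share is a placeholder; in practice it would come from
# verified planning studies, not an assumed percentage.

def net_load_contribution(upgrade_cost, system_benefit_share):
    """Large load funds the upgrade upfront; verified system-wide benefits
    come back as a credit on top of the causation-based assignment."""
    assert 0.0 <= system_benefit_share <= 1.0
    credit = upgrade_cost * system_benefit_share
    return upgrade_cost - credit

# $200M of transmission upgrades, 15% of benefits verified as system-wide:
print(f"${net_load_contribution(200_000_000, 0.15):,.0f}")  # $170,000,000
```

Note the direction of the default: the load pays 100 percent unless a benefit is demonstrated, which is the opposite of today's practice of socializing 100 percent unless someone objects.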
The companies building AI data centers are worth trillions of dollars. Capital is not the binding constraint for them. Time is. The lost economic opportunity from a delayed data center, delayed compute capacity, delayed model training runs, is worth more to these companies than the cost of the infrastructure required to connect. Google’s willingness to fund 1.4 gigawatts of wind, 200 megawatts of solar, and 300 megawatts of long-duration iron-air battery storage in Minnesota under a bilateral tariff structure is recent evidence of this. These companies will pay for speed. The regulatory structure just has to ask them to.
And as established above, under market-based ROE the utility has no financial reason to object. The excess return that made rate base growth worth fighting for is gone. What remains is an operator that gets paid to connect new customers well and fast, indifferent to whether the capital on the other side of the meter is its own or the customer’s.
Note, I am assuming that in most existing rate structures, incremental distribution and substation costs triggered by specific large industrial loads are socialized across all customers rather than recovered directly from the causing load. This is consistent with how traditional cost-of-service rate design works, though individual tariffs vary.
What Is Happening to Rates
The Virginia data is the most vivid illustration of what happens when the causation principle is not applied beyond the direct connection layer.
Dominion Energy has approximately 450 data center customers. Data center demand contributed to an 833 percent increase in PJM’s capacity auction price for 2025 to 2026 compared to the prior year. The state’s own Joint Legislative Audit and Review Commission concluded that data centers could drive Virginia residential bills up by $444 per year by 2040. Virginia residential electricity prices rose 13 percent. Illinois, another state with heavy data center concentration, rose 16 percent. Ohio, 12 percent. Oregon ratepayers are already seeing higher bills attributed at least in part to data center load growth, according to state consumer advocates.
The feedback loop described in my first post is already running in these states. Rate increases erode social permission. The incoming Virginia governor campaigned explicitly on making tech companies “pay their own way.” New Jersey’s governor made utility affordability an executive order on day one. I am looking forward to seeing what actually falls within its scope.
States are starting to act. Virginia’s SCC approved a new rate class in November 2025 requiring data center customers above 25 megawatts to pay a minimum of 85 percent of contracted distribution and transmission demand. Minnesota passed legislation in June 2025 requiring data centers to cover their full cost of service, including stranded asset costs if a facility shuts down before the infrastructure is paid off. Oregon passed similar protections. Ohio directed its largest utility to file a new tariff specifically for data center cost allocation.
The direction is clear. The question is whether it gets extended comprehensively enough and fast enough to keep consumer bills under control.
In Practice: The Minnesota Deal with Google
The Google and Xcel Energy deal announced in February 2026 in Pine Island, Minnesota illustrates what more responsible deal structuring looks like in practice.
Rather than connecting to Xcel’s existing rate base and asking existing customers to fund the needed generation and infrastructure, Google is paying for all of it directly through a new tariff structure called the Clean Energy Accelerator Charge. Under the agreement, Google will cover all costs of new infrastructure needed to serve the data center, including 1.4 gigawatts of wind, 200 megawatts of solar, and 300 megawatts of long-duration iron-air battery storage from Form Energy, which at 30 gigawatt-hours will be the largest battery by capacity ever announced. Xcel’s existing residential customers, who already pay 27 percent below the national average, will not see higher rates because of this project. The deal is still pending approval from the Minnesota PUC.
However, two risk points worth watching:
First, the agreement covers generation and storage. It is less clear from the public filings whether Google is also paying directly for the full distribution and substation infrastructure required to physically connect the facility to the Xcel system. The allocation of distribution-level costs is not explicit in the public record. That gap matters, because the causation principle should extend all the way.
Second, if Google decides in ten years that this data center is not competitive and moves its compute loads to newer, more efficient facilities elsewhere, who holds the long-term contract for these assets? The Minnesota law protects ratepayers from stranded asset risk. But the contractual mechanics for enforcing that protection through a real exit have not been tested. And unless the Causation Principle is linked to all distribution and substation assets, stranded costs remain. That risk belongs to the data center customer and, if the infrastructure remains a utility asset, to utility shareholders. Hyperscalers are creditworthy enough for a Parent Company Guarantee.
The Rate Base Problem
When a utility builds infrastructure for a new large load and puts it in the rate base, three things happen simultaneously.
The utility earns a guaranteed return on that capital for the 30 to 40 year depreciable life of the assets, regardless of whether the load that justified the investment is still there in year 10.
If the load exits before the infrastructure is paid off, the stranded cost lands on existing ratepayers who had no say in the original decision.
And there is no post-investment audit requirement. Once a project clears prudency review and enters rate base, it earns a return for its full life whether or not the customer it was built to serve is still operating.
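The arithmetic of that asymmetry can be sketched. Assuming straight-line depreciation and a constant return on the undepreciated balance (both simplifications of actual ratemaking), here is what ratepayers would still owe on an asset after its anchor customer exits; the figures are illustrative:

```python
# Illustrative math for the asymmetry described above: what ratepayers owe
# on an asset after the load that justified it leaves. Straight-line
# depreciation and a flat return on the undepreciated balance are
# simplifying assumptions, not actual ratemaking mechanics.

def remaining_revenue_requirement(cost, life_years, exit_year, return_rate):
    """Return plus depreciation still owed after the causing load exits."""
    annual_depreciation = cost / life_years
    owed = 0.0
    for year in range(exit_year, life_years):
        undepreciated = cost - annual_depreciation * year  # start-of-year balance
        owed += undepreciated * return_rate + annual_depreciation
    return owed

# $500M of load-driven infrastructure, 35-year life, load exits in year 10:
stranded = remaining_revenue_requirement(500_000_000, 35, 10, 0.07)
print(f"Still owed by ratepayers: ${stranded:,.0f}")
```

Under these assumptions the remaining obligation is well over the asset's undepreciated balance, because ratepayers keep paying both return and depreciation for 25 more years with no customer left to benefit.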
The asymmetry is rarely stated plainly. The utility has every incentive to build for new large loads because doing so grows rate base and guaranteed earnings. The downside risk sits with ratepayers.
The right structure is bilateral. If a utility wants to build infrastructure for a new large load without requiring that load to directly fund the capital, the utility’s shareholders need to bear the stranded asset risk. Not ratepayers. Cost-of-service regulation does not check, ex post, whether the right things were built. The prudency standard asks whether the decision was reasonable at the time, not whether it turned out to be correct. The ratepayer is the backstop for every wrong guess, even when the assumptions were wild. Utilities will always need to make investment decisions with the best information available, but the scale of these rate cases is meaningfully different from normal day-to-day operations.
Given the scale of the proposed upgrades, and that they benefit a single type of customer, if a utility genuinely believes a large load customer is durable and wants to finance the infrastructure on its own balance sheet, it should be free to do so. But at its own risk, on its own equity. Not socialized.
What This Requires from Utilities
The model described here asks something of utilities that cost-of-service regulation never required: selectivity.
Under cost-of-service, every asset that clears the prudency review earns a return. The incentive is to build. Selectivity has no value in that model. Under performance-based regulation with a market-based ROE, the calculus shifts. A utility can serve a new large load quickly through a bilateral structure, earn a performance payment for the speed and quality of that service, and avoid the regulatory friction of a rate case on new capital. Speed and quality become valuable. Capital for its own sake becomes less so.
Under that model, the utility can finally answer a new customer the way Uber answers a new driver. Not “wait four to seven years.” Not “we will study your request and schedule a stakeholder process.” Show up tomorrow. Here is the price. Follow the signal.
The grid is already built for the people who live here. New customers who want access should pay for it if they materially change the grid. We should give data centers what they want: timely access to the grid. But there is no principled reason everyday customers should carry data center risk on their bills purely because of an antiquated utility incentive model.
In my next post, I will make the case that the model described in this series, if taken to its logical conclusion, produces something that looks less like a traditional utility and more like something the industry has never seen before.
Utility Transformation Series, Post 3 of 10. Next: While Everyone Is Talking About Data Centers, We Are Forgetting the Distribution Grid
Sources
FERC, Docket No. EL25-49-000, Order directing PJM to revise co-located load tariff (December 18, 2025)
FERC cost causation principle, PJM co-location order fact sheet — ferc.gov
DOE letter directing FERC to initiate large load interconnection rulemaking (October 23, 2025) — whitecase.com
FERC ANOPR large load interconnection, 100% participant funding model — CSIS analysis, csis.org
Berkeley Lab, “Queued Up: 2025 Edition,” interconnection queue wait time data — emp.lbl.gov
Load interconnection timelines in major hubs, 4-7 year range — LandGate analysis, landgate.com
ERCOT large load interconnection averaging approximately one year — Wood Mackenzie, woodmac.com
Google / Xcel / Form Energy Pine Island, Minnesota deal (February 24, 2026) — Latitude Media
Minnesota HF16, data center ratepayer protections (June 2025) — Citizens Utility Board, cubminnesota.org
Virginia SCC GS-5 rate class approval (November 2025) — scc.virginia.gov
Virginia JLARC data center analysis, $444/year residential bill impact projection — jlarc.virginia.gov
Data centers and rising electricity prices by state — CNBC (November 2025)
PJM capacity auction price increase (833%) — American Action Forum
Duke Energy $103 billion five-year capital plan, February 2026 earnings — Utility Dive
Southern Company $81 billion capital plan — T&D World
Grid Strategies, National Load Growth Report 2025 — gridstrategiesllc.com
Morningstar DBRS, utility capex exceeding $1.1 trillion 2025-2029
EEI / Morningstar utility capex projections, cited in Post 1 of this series
Assumption note: The claim that the existing grid is “largely adequate” for current customers is a simplification. Many distribution circuits have deferred maintenance and reliability challenges independent of new large load growth. The argument is specifically that incremental capital triggered by new large loads should be funded by those loads, not that no other investment is needed. The assertion that the Google / Xcel deal may not cover full distribution costs reflects genuine ambiguity in the public record; the tariff filing has not yet cleared the Minnesota PUC as of the time of writing.
Posts in this series:
The Utility Business Model Is Built for a Different Era. Regulators Are Starting to Notice
Performance-Based Regulation: The Incomplete Fix and What Should Come Next
While Everyone Is Talking About Data Centers, the Distribution Grid is the Big Opportunity
I’m Bullish On DERs. I’m Bearish On the Infrastructure Around Them

