
Three years after the disaster in East Palestine, Ohio, Congress has brought back the Railway Safety Act. Unfortunately, it remains focused on the wrong priorities.

The issue isn’t whether Washington can add another loud rail-safety mandate. It’s whether the bill steers investment toward the technologies and operational improvements that are actually, quietly, reducing risk.

On that test, too much of the act falls short. Three pieces of research — two new ones offering a broad insight about the economics of shipping, and an older one laying out the implications for safety — explain why.

In the first new study, Bentley Coffey, Pietro Peretto and I develop an economic growth model that treats transportation not as a side sector but as part of the innovation process itself. In most growth models, goods move to market as if by magic. In the real economy, they do not. Almost everything you consume was shipped at least once, if not multiple times. Manufacturers can improve products and processes, but if getting goods to customers is too expensive, the gains from innovation eventually hit a wall.

The flip side is encouraging. When innovation includes transportation, growth becomes self-reinforcing. Better transportation expands markets and raises the return to manufacturing innovation. Better manufacturing raises the value of improving transportation. 

Policies that raise transportation costs therefore do more than burden one industry. They slow the spread of innovation through the whole economy. And that includes innovations that increase safety, like autopilot did for commercial aviation in the 1980s.

A companion paper asks what regulation does to that process in the real world. Using decades of data across air, rail, truck and water freight, we find that regulatory accumulation functions like a compounding tax on moving goods. It lowers labor productivity in every freight mode.

When it comes to the railroads Congress is targeting with this bill, more regulation also significantly depresses fuel and capital productivity. In our simulations, a five percent increase in rail regulatory restrictions caused rail unit costs to rise by 2.3 percent and rail volumes to fall by 4.1 percent in the first year alone. And because productivity growth is slower, the damage does not disappear in year two. It persists and compounds.

Crucially, these higher transportation costs do not simply reshuffle freight from one mode to another. The pie gets smaller. Total freight activity falls. That means policymakers should be even more cautious than usual about adding regulation to rail and other freight modes. The costs do not stay inside the targeted sector. They ripple through supply chains and the broader economy.

My earlier study with Jerry Ellig helps explain why all of this matters for safety as well as growth.

Ellig and I found that the Staggers Act, which removed some economic regulations of US railroads, was associated with improved railroad safety. Meanwhile, subsequent expansions in safety regulation made only marginal contributions to safety once railroads were freer to allocate capital. Accidents fell from more than 11,000 in 1978 to 1,867 in 2013 even as revenue ton-miles doubled.

The most plausible reason is also the most intuitive one. Railroads with healthier finances and more operational flexibility could invest more in track, equipment, maintenance, and technology.

Taken together, these papers point to an uncomfortable conclusion for supporters of the Railway Safety Act: safety and productivity are often complements, not tradeoffs.

The same investments that make railroads more efficient — better defect detection, better track and equipment, better logistics, more reliable operations — also make them safer. And any policies that siphon resources into compliance-heavy mandates leave less capital for those safety-enhancing investments.

That should shape how Congress thinks about this bill. Some parts of the act move in the right direction. Its defect-detection provisions (especially the requirement for risk-based plans for hot-bearing and related detection systems) are closer to what modern research would recommend. So are measures that improve hazardous-material information and emergency response. Those provisions target identifiable failure modes and improve the underlying system. 

Other provisions look like mere theater: visible, politically attractive, and not connected to actual risk reduction. The bill’s blanket two-person crew mandate is the clearest example. No sound evidence justifies it, as the Federal Railroad Administration itself admitted in 2016 when it could not “provide reliable or conclusive statistical data to suggest whether one-person crew operations are generally safer or less safe than multiple-person crew operations.” And there’s a reason for that: When railroads make changes to operations, such as reducing crew size on specific routes, they evaluate the overall system’s safety. When they reduce crew size, it is because they made investments in other safety layers, such as positive train control, that permit the same or even better safety performance with a smaller crew.

The new studies sharpen that point. Even when the safety benefit of a staffing mandate is uncertain, the cost is not. In this industry, higher labor and compliance costs mean less money for wayside detectors, acoustic bearing monitors, predictive maintenance, track renewal, and other investments that directly target accidents and actually improve safety.

The same logic may apply to the bill’s more prescriptive inspection mandates, including designated inspection locations and extra daily locomotive inspections. Inspections matter, of course, as long as they are needed inspections and Congress is not just mandating a process. Without strong evidence of a safety payoff, such mandates may satisfy Washington’s taste for visible action while undermining the capital deepening and technological upgrading that have historically delivered both better performance and better safety.

Not all rail safety regulation is misguided, but the burden of proof should be much higher than what Congress usually assumes. If transportation is a system-wide input into growth, and if regulatory accumulation’s effects on growth compound over time, lawmakers should favor rules tightly tied to actual performance and that preserve room for investment and innovation. They should be skeptical of prescriptive mandates that raise the cost of moving freight without comparable evidence of benefit.

The Railway Safety Act is mostly the latter — regulations that would impose costs without improving safety.  If it passes, these new studies indicate that the economic and safety consequences will be much larger than the compliance costs imposed on railroads.

At the end of April, New York City Mayor Zohran Mamdani proposed delaying pension plan contributions to help close the Big Apple’s budget deficit.  

The problem is that while delaying pension payments could free up $1 billion in the short term, the budget gap is $5.4 billion. This flawed strategy highlights a much larger problem: the Big Apple’s biggest budget pains are self-inflicted. 

Kicking the can down the road on mandatory pension contributions still leaves a massive hole in the budget while hurting public employees (many of whom helped propel Mamdani into office) and placing greater burdens on New York’s shrinking tax base. 

If Mamdani does not make the spending fixes on his own terms, markets will force him to do it when the City can no longer find willing investors.  

Why Pensions Matter  

A pension liability represents a financial retirement benefit promised to a public employee. Unlike Social Security, these benefits are prefunded: when a public employee retires, the plan should have on hand the total amount needed to purchase a lifetime annuity on that employee’s behalf. 
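That prefunding target is essentially the present value of a life annuity. As a minimal sketch (all numbers purely illustrative, not drawn from any actual New York plan), the amount a plan should have on hand at retirement can be approximated with the standard annuity formula:

```python
def annuity_pv(annual_payment: float, rate: float, years: int) -> float:
    """Present value of a fixed annual benefit paid for `years` years,
    discounted at `rate` -- a rough proxy for what a pension plan
    should have on hand when an employee retires."""
    return annual_payment * (1 - (1 + rate) ** -years) / rate

# Illustrative only: a $50,000/year benefit over a 25-year retirement,
# discounted at 5 percent, requires roughly $705,000 at retirement.
print(round(annuity_pv(50_000, 0.05, 25)))
```

Real plans use mortality tables and cost-of-living adjustments rather than a fixed payout horizon, but the structure of the calculation is the same.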

Pensions are funded through contributions from public employees and taxpayers, as well as investment returns. Public employee contributions are tied to a fixed percentage of payroll, so when investment returns come up short, taxpayers are compelled to cover funding gaps. Benefits are calculated using a formula based on factors including a public employee’s final average salary.

In most states, including New York, public employees can also use overtime and unused sick days to increase their final average salary. This practice, known as pension spiking, often results in pension payments that exceed the salaries public employees received while working.  

Publicly promised benefits have legal protections that vary state to state. New York guarantees public pension benefits through the New York State Constitution, as well as other state statutes and legal precedents that include pensions as part of a contractual relationship between employers and employees. Benefits can only be revoked if a potential beneficiary is convicted of a felony.  

In other words, these promises are rock-solid. The strength of those promises, however, also means that spending on pensions gets priority over other expenditures, including other public services that are deemed “core government functions.” That means taxpayers see higher tax burdens while the government becomes more bloated and ineffective. 

The only way New York State can change pension benefits without a constitutional amendment is by changing the benefits offered to future hires, which gave rise to the tier system. One’s tier is determined by when one was hired. The more recent the hire, the more the employee must pay into the system and the later they can retire. 

This has not stopped unfunded liabilities from growing. Public pension liabilities matter because they are one of the largest sources of long-term debt that state and municipal governments face. Massive pension liabilities are a leading contributor to recent fiscal crises, including those experienced by Detroit, Puerto Rico, and municipal bankruptcies across California. 

Currently, New York City owes over $40 billion in pension benefits not covered by current assets. That is just under $4,600 per person, a larger liability than the Empire State’s statewide average of $2,681 per person and the national average of $1,475 per person.

That burden will fall on a shrinking number of taxpayers, who cannot seem to escape New York fast enough. The city lost thousands of residents across all income levels in 2025, and New York State is on track to have a decade of population decline. 

Now, public employees throughout the Empire State are pressuring state officials to roll back the Tier 6 reforms enacted in 2012, which would promise greater benefits from a city that is increasingly unprepared to pay for them.

Back to the 70s? A Familiar Fiscal Pattern

In late April, Mamdani declared a fiscal emergency due to structural budget deficits. While his administration inherited a fiscal mess, his own ambitious spending plans only dig New York deeper into the fiscal hole. 

Many are quick to compare Mamdani to Mayor John Lindsay, whose similar spendthrift approach led to the 1975 fiscal crisis under his successor, former city comptroller Abraham Beame.

While New York City is not currently in 1975, it may be in 1965. Much like Mayor Mamdani, Mayor Lindsay positioned himself as an outside urban reformer who grew government during a period of rising welfare costs, labor pressure, middle-class flight, crime, and weakening fiscal discipline. He also blamed his predecessor, Mayor Robert Wagner, for leaving him with massive budget deficits.  

Then, as now, markets were skeptical of New York City’s ability to pay its debts. Unlike in the run-up to the 1975 crisis, however, the Big Apple today does not appear to be as dependent on short-term bonds. The recent pension contribution deferment nonetheless reflects the same attitude: using short-term maneuvers to cover current spending. This time, the city is effectively borrowing against the retirement security of public employees (who were, again, among Mamdani’s top campaign supporters) while leaving the bill to future taxpayers.

Economist John Phelan notes that the crisis came to a head when the city could not find a willing underwriter (a securities broker or dealer that purchases bonds to resell to investors) for its bonds, after news emerged that the city did not have the tax receipts necessary to cover the proposed debt.

The Municipal Assistance Corporation (MAC) was created in 1975 after New York City lost access to credit markets. It served as an emergency financing vehicle, issuing bonds backed by state-controlled revenue streams to help the city meet obligations while forcing budget discipline. MAC also helped shift control away from ordinary city politics and toward state-supervised fiscal management. Although MAC itself was dissolved after its bonds were retired, its legacy remains.  

New York’s post-crisis guardrails now include the Financial Control Board, balanced-budget rules, limits on short-term borrowing, quarterly monitoring, and four-year financial plans. Those guardrails are weaker than direct crisis control, but they can still tighten if conditions deteriorate. 

Mayor Mamdani has already shown a willingness to pressure Albany for additional taxing authority. He pounced on the governor-approved pied-à-terre tax on secondary residences, which prompted the departure of Ken Griffin and other business owners from New York. The recent budget deal reached in Albany further highlights the city’s ability to bully the rest of the Empire State into going along with its desired policies.

While Mamdani’s New York still has willing investors, the past provides a stark warning. If these recurring promises grow faster than revenues and if higher taxes accelerate outmigration of businesses and high-income residents, today’s structural gaps could harden into a deeper fiscal crisis.

Has the Big Apple Gone Rotten? 

New York City still has much to recommend it to residents and investors, but the warning signs are increasingly difficult to ignore. New York’s future depends on whether its leaders can impose discipline before markets do it for them. If officials continue to squeeze a shrinking tax base, rely on pension gimmicks, and use short-term fixes to close long-term gaps, the city risks repeating the very mistakes that once pushed it to the brink of collapse.

Across the United States, people are fleeing “Blue States,” with their high taxes, spending, and regulatory burdens, for “Red States,” which offer greater economic freedom.

Rather than heed this lesson, several “Blue States” — and some which are only pale blue — are doubling down, offering even more of their losing formula. Minnesota is one of these states. 

Ope, Just Gonna Tax Ya There

Minnesotans are some of the most heavily taxed citizens in the United States.

The Land of 10,000 Lakes has the sixth-highest top rate of state personal income tax, and this rate kicks in at a relatively low level of income; only Oregon has a higher rate that kicks in at a lower level. And Minnesota doesn’t just tax “the rich” heavily; for 2025, the average-earning, single-filing Minnesotan handed over a greater share of their wages to the state government in income tax than their counterparts in 43 other states. Altogether, Minnesota’s per capita tax burden ranks eighth out of the 50 states.

These high taxes are necessary to fund a level of General Fund spending which is higher, in per capita terms, than in 45 other states. Minnesota exemplifies exactly the “Blue State” policies that Americans are fleeing. 

Minnesota’s Proposed Wealth Tax 

Yet, even with taxes high and rising, spending has outpaced them. Minnesota’s state government has spent more than it has collected in revenue in every year since 2024 and is forecast to continue doing so until at least 2029. 

To help plug this gap, Minnesota’s Democrats have introduced a bill to enact a state wealth tax. This proposal would establish a one-percent tax on all “taxable wealth” over $10 million, with “taxable wealth” comprising the sum of a taxpayer’s real or personal, tangible or intangible property located in Minnesota, minus the sum of all debts and financial obligations owed by the taxpayer.  

The state’s Revenue Department estimates that the tax would hit about 5,600 people annually and raise $288.3 million in Fiscal Year 2027, or 0.8 percent of total revenues, with that number increasing by about $2 million annually in subsequent years. 
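For scale, the estimate’s own numbers imply how wealthy the typical affected filer would be. This back-of-envelope sketch uses only the figures above; the derived averages are my arithmetic, not the Revenue Department’s:

```python
# Figures from the Revenue Department estimate cited above.
revenue = 288_300_000   # projected FY2027 collections
rate = 0.01             # one percent on wealth above the threshold
taxpayers = 5_600       # estimated number of filers hit
threshold = 10_000_000

avg_excess = revenue / rate / taxpayers        # avg wealth above $10M
avg_taxable_wealth = threshold + avg_excess    # implied avg taxable wealth

print(f"${avg_excess:,.0f} above the threshold")   # roughly $5.1 million
print(f"${avg_taxable_wealth:,.0f} in total")      # roughly $15.1 million
```

In other words, the projection assumes the average affected filer holds about $15 million in Minnesota taxable wealth and keeps holding it there, year after year, once the tax takes effect.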

Let me be clear: Minnesota’s wealth tax will not raise $288 million.  

For starters, that estimate doesn’t include the cost of administering the tax. The liability would be calculated in the same manner as for the federal estate tax, but, unlike the estate tax — a state version of which Minnesota already has — it will be assessed annually. The usual problems of valuing certain assets such as “fine art, wine, antique cars, jewelry, and other collectibles [where] there is often not a liquid market that can be referenced for valuation purposes” will be greatly multiplied, as will the difficulties in valuing intangible assets, like patents and copyrights. The authors of the bill have no idea how much this will cost.  

More importantly, the estimate assumes that nobody responds to avoid the tax. This is highly unlikely, and there are several options open to those wishing to avoid it.  

Those Minnesotans targeted might move. This is often dismissed as a “myth.” However, a 2020 paper published by the American Economic Association suggests otherwise.

The paper found “growing evidence” that taxes influence where people live — within and across countries — adding another efficiency cost policymakers must consider.

More specifically: “This body of work has shown that certain segments of the labor market, especially high-income workers and professions with little location-specific human capital, may be quite responsive to taxes in their location decisions.”   

Minnesota’s proposed wealth tax would provide a strong incentive to move. Someone with $11 million in “taxable wealth” yielding a return of five percent would see a 16.8 percent increase in their total tax bill, which, in Minnesota, already includes a “Net Investment Income Tax” on top of the high personal income tax. And the hike would be larger if the return fell: 27.9 percent at a return of three percent.
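The arithmetic behind those percentages can be reproduced in a few lines. This is a hedged sketch, not an official calculation: it assumes the one percent wealth tax applies to the $1 million above the $10 million threshold, and that investment income faces a combined marginal rate of about 10.85 percent (Minnesota’s 9.85 percent top income rate plus the one percent Net Investment Income Tax; those rates are my assumption for illustration):

```python
WEALTH = 11_000_000
THRESHOLD = 10_000_000
WEALTH_TAX_RATE = 0.01
INCOME_TAX_RATE = 0.0985 + 0.01   # assumed combined marginal rate

def tax_hike(return_rate: float) -> float:
    """Wealth-tax bill as a share of the existing income-tax bill,
    i.e. the proportional increase in total tax paid."""
    wealth_tax = WEALTH_TAX_RATE * max(WEALTH - THRESHOLD, 0)
    income_tax = INCOME_TAX_RATE * return_rate * WEALTH
    return wealth_tax / income_tax

print(f"{tax_hike(0.05):.1%}")   # about 16.8% at a 5 percent return
print(f"{tax_hike(0.03):.1%}")   # about 27.9% at a 3 percent return
```

Because the wealth tax is fixed while investment income scales with the return, the proportional hit grows as returns shrink, which is why the lower-return scenario is worse.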

Those Minnesotans targeted don’t even have to move to avoid paying this tax. They could keep reported wealth just below that $10 million threshold by liquidating their investments to finance consumption spending. This would reduce savings, which, in an open economy, might be offset by increased foreign capital inflows, resulting in a larger trade deficit and/or lower long-run economic growth.

Either way, the effect is the same: the base of the wealth tax shrinks. 

Recent events in Washington State, which has proposed a one percent tax on tradable net worth above $250 million, reveal the problem with state projections from wealth taxes. State economists had projected $3.2 billion in new revenues from the tax. But as the Tax Foundation’s Cristina Enache wrote, “$1.44 billion, almost 45 percent, would have been collected from Jeff Bezos.” Unfortunately for state lawmakers, the Amazon founder had other ideas. He moved to Florida, taking nearly half of that estimated revenue with him.

Bezos may be exceptionally wealthy, but the phenomenon applies everywhere. In California, for example, the base of the proposed wealth tax comprises just 200 people — out of a population of nearly 40 million. Many of these individuals will likely exercise the power of exit, as Bezos did. 

This is why most jurisdictions that had wealth taxes have ditched them. Thirteen OECD countries imposed wealth taxes in 1965, but only three still did as of 2025.

Compounding Policy Effects 

Let us stick with the comparison of estate taxes and wealth taxes a moment longer. A 2023 paper by economists Enrico Moretti and Daniel J. Wilson looked at “the effect of taxes on the locational choices of wealthy individuals by examining the geographical sensitivity of the Forbes 400 richest Americans to state estate taxes.” 

First, they found that “their residential choices are highly sensitive to these taxes, as 35 percent of local billionaires leave states with an estate tax. This tax-induced mobility causes a large reduction in the aggregate tax base.” 

Second, they found that “the revenue benefit of an estate tax exceeds the cost for the vast majority of states.” But Minnesota is not among this majority. Moretti and Wilson found that “the benefits of having [an estate tax] exceed the costs in all but three high-[Personal Income Tax] states: Hawaii, Minnesota, and Oregon.” 

There is a trade-off: You can have a high rate of estate or wealth taxation or a high top rate of income tax, but you can’t have both. Given this, another proposal from Minnesota’s Democrats, to impose a new, top, fifth income tax bracket of 11.45 percent on income above $600,000 (single) or $1,000,000 (joint) is fiscal masochism. 

During the last century, economists were provided with two incredible natural experiments: the division of Germany and the Korean Peninsula into countries with diametrically opposed economic models. The results were clear. America’s Blue States seem intent on running the experiment again.

The public shock upon the release of the Final Report of the Select Committee to Study Governmental Operations and Intelligence Activities (The Church Committee Report) in April 1976 is now a quaint memory. Its damning findings on the violation of American citizens’ constitutional and natural rights dismayed historian Henry Steele Commager, who bemoaned, “It is this indifference to constitutional restraints that is perhaps the most threatening of all the evidence that emerges from the findings of the Church Committee.”

The specifics of the report revealed that the CIA, FBI, NSA, and even the IRS engaged in intelligence collection against US citizens from the 1940s through the early 1970s. These agencies carried out now-infamous projects, including multiple assassination plans and attempts on the lives of Fidel Castro (Cuba), Rafael Trujillo (Dominican Republic), and Patrice Lumumba (Democratic Republic of the Congo). These are aside from COINTELPRO, which targeted perceived leftist domestic threats, and Mockingbird, which entailed active CIA recruitment of journalists to propagandize the American public.

Frank Church holds a CIA-designed “heart attack gun” that fired an untraceable poisoned dart of shellfish toxins, for undetected assassination. Image: “Church Committee: 40 Years Later” on C-SPAN3’s Reel America, 2016.

The report could have served to expose and roll back such activities. Instead, the surveillance state has grown exponentially and advanced inexorably. How has our citizenry responded? Public reaction has been comparatively muted and sheepish. In contrast, DC has behaved ravenously.

Indeed, contemporary Americans are now languid, perhaps even expecting federal agencies to surveil, snoop, and in some cases terminate those they deem to be threats to so-called US interests. One of the committee’s key findings was that there was a need for a permanent congressional intelligence committee to keep an eye on the executive branch and its abuses of natural rights. It would be up to them to watch the watchers. 

Front page of The New York Times on December 22, 1974.

This sentiment justified the passage of the Foreign Intelligence Surveillance Act (FISA) in 1978. Americans were told to rest assured, this legislation would prevent such abuses in the future. Nothing could have been further from the truth.

Seven years after the 9/11 attacks, Section 702 was enacted as part of the FISA Amendments Act. According to Rachel Miller, this move “broadened the scope of FISA, allowing the government to conduct foreign intelligence surveillance outside the United States without an individualized application for each target. The FAA garnered bipartisan support, notably from then-Senator Obama in 2008 and more recently former FBI director Christopher Wray.”

That bipartisan support may have been a harbinger of the findings of the Privacy and Civil Liberties Oversight Board (itself a bipartisan executive-branch agency). It would surprise no one to learn that it has concluded that “incidental” collection of US citizens’ data in pursuit of foreign targets shows “no signs of intentional abuse.”

Critics point out that in 2023 alone, over 57,000 of these so-called “backdoor searches” were conducted. The US District Court for the Eastern District of New York, for its part, found that such searches indeed violate the Fourth Amendment’s warrant requirement, which exists to protect US citizens’ rights.

Despite the fact that many of these abuses are well known to the American public, on April 29th, the House of Representatives voted 235 to 191, with 4 abstentions, to renew the section along with the warrantless surveillance it permits. Unable to pass it before its expiration on April 30, the Senate inexplicably passed a 45-day extension, which was swiftly approved by the House by an even wider margin than the initial vote.* Some members of the opposing congressional minority have called for a requirement that intelligence agents obtain probable-cause warrants before querying the Section 702 data. Naturally, the intelligence community has pushed back, and the majority of Congress has fallen in line.

This complacency on the part of both the public and the political class and the brazen Constitutional gymnastics from various administrations leads Senator Mike Lee to lament, “the arguments go something like this: ‘Yes, there have been problems in the past. Yes, there have been abuses of FISA 702. But you need not worry because we now have procedures in place, administrative procedures that will fix the problem once and for all.’” 

By ignoring past abuses and ongoing privacy concerns, surveillance by association is set to stand for the foreseeable future. The sum of the findings from the Church Committee, however, demonstrated a need to shackle the intelligence agencies that violated Americans’ rights, all in the name of national security in the three decades after the Second World War. The current Congress, and all those since the implementation of the FISA legislation have failed to uphold the spirit and the letter of the Fourth Amendment’s acknowledgement of the people’s right to be secure in their “persons, houses, papers, and effects” (which in this author’s view includes all digital effects) against unreasonable search and seizure. This isn’t a mere suggestion. It’s the law of the land. 

Proponents of Section 702 are quick to deploy cost-benefit analysis. Seemingly without exception, the conclusion is reached that the alleged safety benefits always outweigh the costs of infringing on constitutionally-recognized human rights. However, a utilitarian worldview withers when scrutinized by the Constitution (not to mention the Declaration of Independence) itself, and its underlying assumptions about the sanctity of property and persons. 

The way out of this congressional quagmire is to uphold personal and property rights as our nation’s highest values. Until that happens, we can count on the warrantless surveillance (voyeurism?) to continue. Even if it is “incidental.”

* In an intriguing twist, Senate Majority Leader John Thune said the bill was dead on arrival because it contained a provision that would bar the Federal Reserve from establishing a CBDC (Central Bank Digital Currency). Apparently, Thune can tolerate warrantless surveillance, but protecting Americans from digital, programmable currency is beyond the pale.

“Spirit in the Sky” may be a song about departure, but Spirit Airlines’ demise was no natural passing. It is a warning about a government that first broke the market’s legs, then offered it a wheelchair. Washington blocked the private merger that could have kept Spirit’s planes, workers, routes, and customers inside a functioning carrier. Then, after the damage was done, Washington even considered whether taxpayers should help clean up the mess. 

Spirit was not a luxury airline. It was often mocked for its fees, cramped seats, and bare-bones service. That was precisely the point. Spirit served price-sensitive travelers who cared less about comfort than access. For many students and working families, Spirit helped make flying possible. It filled a niche that larger airlines had little incentive to serve with the same price discipline. 

On Saturday, May 2, 2026, Spirit CEO Dave Davis said, “For more than 30 years, Spirit Airlines has played a pioneering role in making travel more accessible and bringing people together while driving affordability across the industry.” The airline at its peak operated hundreds of daily flights and employed about 17,000 people. Although rising oil prices following the onset of the Iran conflict may have delivered the final blow, the fight to keep Spirit flying had been underway long before then.

In 2024, JetBlue and Spirit called off their $3.8 billion merger after a federal judge blocked the deal on antitrust grounds. Reuters reported that the merger would have created the fifth-largest airline in the United States, but the Biden administration argued that it would harm consumers by reducing competition and raising ticket prices. JetBlue paid Spirit a $69 million termination fee. But the failed merger left Spirit in a difficult position, with analysts already discussing bankruptcy risk. 

Senator Elizabeth Warren celebrated the government’s position at the time. On March 5, 2024, she wrote that she had warned for months that a JetBlue-Spirit merger would have led to “fewer flights and higher fares,” adding that DOJ and DOT were right to fight airline consolidation. She called it “a Biden win for flyers.” 

Roughly two years later, Spirit’s exit has left fewer low-cost flights, fewer ultra-low-cost seats, and a thinner market for consumers who once relied on its model. 

Warren’s argument, like the DOJ’s, rested on a fragile assumption. The government compared the merger to an idealized world in which Spirit remained an independent, viable low-cost competitor. The realistic comparison was different: merger, bankruptcy, liquidation, or bailout. A weak airline does not become competitive because regulators insist it remain independent. A grounded airline does not discipline fares. A bankrupt airline does not serve consumers. 

Both Noah Gould and Tarnell Brown reached the same basic conclusion: Spirit’s consolidation into JetBlue threatened the largest airlines more than it threatened consumers. The real threat was not that JetBlue-Spirit would dominate the market, but that it could create a stronger fifth competitor against Delta, American, Southwest, and United. Even after acquiring Spirit, JetBlue would not have become dominant. The court found that the merged carrier would have become the nation’s fifth-largest airline with 10.2 percent of the domestic market. That is not dominance. It is scale enough to challenge dominance. If the purpose of antitrust law is to protect competition rather than competitors, why should the government prohibit a smaller airline from scaling up to challenge entrenched incumbents? 

This also explains why Delta or American did not buy Spirit. Their absence does not prove Spirit was worthless. It suggests Spirit made strategic sense for JetBlue in a way it did not for the giants. Delta, American, United, and Southwest already have national networks, major hubs, international routes, corporate travel customers, loyalty programs, and enormous scale. JetBlue needed Spirit to become a more serious national competitor. The Big Four did not need Spirit to become national competitors. They already were. 

In fact, the absence of a Big Four bid strengthens the case for the JetBlue merger. Why would a dominant airline buy Spirit’s debt, leases, labor obligations, and business-model problems if it could wait for distress and compete for its passengers, pilots, aircraft, gates, or routes later? The larger airlines did not need to acquire Spirit to benefit from its disappearance. They only needed to wait for Spirit to exit the market. 

Now that the crash landing has occurred, the consumer-protection case looks upside down. The merger was blocked because regulators claimed it might mean fewer flights and higher fares. Yet Spirit’s collapse produced canceled flights, stranded passengers, fewer ultra-low-cost seats, and less pressure on the remaining airlines. AP reported that United, Delta, JetBlue, and Southwest offered $200 one-way flights for passengers holding Spirit ticket confirmations. Other airlines also said they would help stranded Spirit employees and give their job applications preferential consideration.

This failure is a direct result of government meddling in the airline industry. Had the JetBlue deal been allowed, Spirit’s customers might not have been left facing a sudden wind-down, disappearing customer service, and emergency rebooking. The Spirit brand might have disappeared, but its aircraft, workers, routes, and customers could have been integrated into a functioning carrier. Instead, the government chose antitrust purity, and passengers were left with the consequences. 

The Trump administration’s reported bailout interest only completes the irony. The National News Desk reported that Trump said his administration had given Spirit Airlines a “final proposal” as the carrier considered ceasing operations, amid debate over a possible taxpayer-backed rescue. But that would have been the wrong response. JetBlue would have risked private capital. A bailout would risk taxpayer capital. Spirit did not need Washington to become its owner. It needed Washington to stop blocking its buyer. 

Using taxpayer money to rescue Spirit after blocking private capital would be especially absurd because the US government cannot even manage its own balance sheet. The Congressional Budget Office projects a $1.9 trillion federal deficit in fiscal year 2026, rising to $3.1 trillion by 2036. It also projects debt held by the public rising from 101 percent of GDP in 2026 to 120 percent in 2036. A government running chronic deficits should not pretend to be a disciplined capital allocator for failed airlines, especially after the same government denied a private merger. Washington helped create the problem it later claimed only public intervention could solve. 

Washington’s meddling in airline markets is not an isolated episode. It is the latest example of a broader interventionist turn across American industry. In semiconductors, Washington has become a shareholder, with the US government taking a 10 percent equity stake in Intel by converting public grants into stock. In steel, the same logic appears through governance rather than equity. The US government secured veto power over key US Steel decisions as part of Nippon Steel’s takeover, including a non-economic golden share and presidential authority to name a board member. 

In chips, government becomes shareholder; in steel, government becomes veto holder; in airlines, government blocks a private merger and then considers taxpayer-funded public rescue. That is not neutral regulation. It is government inserting itself directly into corporate decision-making.  

Spirit should not be saved by taxpayers. But it should have been allowed to seek survival through private capital. The tragedy is not that the government refused to rescue Spirit at the end. The tragedy is that the government helped block the market’s rescue before the end came. From JetBlue and Spirit to Intel and US Steel, the lesson is clear: when government enters the market as planner, owner, veto-holder, or rescuer, it does not make firms stronger. It makes capitalism weaker. Spirit Airlines did not need Washington to buy it. It needed Washington to let capitalism work.

The notion that artificial intelligence at full bloom might eliminate the need for money reflects a deep confusion about what money is and does. Money is not merely a barter-avoiding convenience layered onto an otherwise frictionless world. It is a solution to fundamental problems of exchange, profound difficulties in coordination, and comparison of alternatives under scarcity. Even in a hypothetical future defined by extraordinary productivity gains and broadly collapsing prices, those underlying problems do not disappear. Instead they change form, and for as long as scarcity, tradeoffs, and uncertainty persist in any domain, so too will the need for money.

To begin with the most basic point: scarcity is not abolished by abundance. It is displaced. AI may dramatically reduce the cost of producing many goods and services, particularly those that are digital or easily replicable. But large swaths of economic life remain governed by constraints vastly beyond the power of computation. Land is fixed. Location is inherently scarce. Prime real estate in places like New York City or Tokyo will not become abundant simply because construction costs fall precipitously. The same holds for proximity to infrastructure, culture, or social networks. These are rival, excludable goods, and in such conditions, exchange requires a mechanism for allocating access. Money remains the most efficient one yet developed.

Time is another irreducible constraint. Human attention, especially in its highest-value forms, cannot and will not scale infinitely. The time of a skilled surgeon, an experienced trial lawyer, or a sought-after performer remains finite and rivalrous. Even if AI augments their capabilities, that does not eliminate the fact that their attention must be allocated among competing uses. The same applies to live experiences: concerts, events, one-on-one advisory relationships, and so on, where presence itself is scarce. In such contexts, prices are not a relic — they are a reflection of incontrovertible limitations. 

In fact, abundance often amplifies the importance of scarcity. As mass-produced goods become ever cheaper, a premium will shift toward what cannot be easily replicated. Consider status goods, fixed positional assets, and signals of taste. Luxury brands, rare collectibles, and authenticated works derive value precisely from their limited supply and provenance. If AI floods the world with high-quality substitutes, the value of the original or source item may increase, not decrease. Money, in this sense, becomes a way of expressing relative preference over increasingly differentiated manifestations of scarcity.

Physical systems themselves impose limits. Energy, for example, may become cheaper on average, but it inexorably faces capacity constraints, especially during peak demand periods. The same is true of certain materials, bandwidth, and computational resources in periods of congestion. Even highly advanced systems must allocate finite capacity across competing uses, and money prices remain an extraordinarily efficient, indeed elegant, way of doing so. Without them, the hallmarks of rationing — queues, quotas, and administrative fiat — appear, none of which eliminate, but rather contend with and obscure, scarcity.

The unavoidable force of uncertainty is perhaps the most decisive argument against the obsolescence of money. Risk does not vanish amid colossal gains in productivity and output; if anything, complex, tightly coupled systems generate new forms of it. An explosion of goods and services will tax resources, time, and human capital, which will in turn generate new forms of insurance, hedging, and credit to transfer and price risk. Those functions require not just a medium of exchange, but a unit of account to operate effectively. The idea that AI could eliminate uncertainty is as implausible as the idea that it could eliminate time. 

Institutional realities reinforce this point. Anywhere one finds government or governance, excludability inevitably follows. Property rights, regulatory approvals, access to bespoke networks, and enforcement mechanisms all create domains in which access is controlled. Money is readily suited to become (or, in fact, continue to be) the means by which access is negotiated, transferred, or prioritized. In a world inundated by output, trust and verification become more valuable. Certification, auditing, and reputation systems all rely on mechanisms of exchange that presuppose some form of monetary unit.

Money also plays a central role in coordinating urgency and priority. When resources are scarce in time rather than in quantity (faster service, guaranteed delivery, dedicated capacity), money allows individuals to signal how much they value immediacy relative to others. Absent money, such decisions do not disappear; they are made through other, often less transparent means.

An AI-driven deflationary boom would likely compress the prices of many goods and services. Perhaps dramatically so. But that would not, and could not, eliminate the need for money. It would shift the domain in which prices and calculations operate toward the non-replicable, capacity-limited, and institutionally governed. 

Money does not disappear in the face of abundance; it instead follows scarcity wherever it emerges.

Thus (reportedly) spake Steve Jobs in the late 1990s. It was a take on the role of corporations that would fall decidedly out of step with the consensus within a few short years, although Jobs seemed to stick with it. “Some people have said that I shouldn’t get involved politically because probably half our customers are Republicans,” he asserted in 2004. “There are more Democrats than Mac users, so I’m going to just stay away from all that political stuff.”

Fast forward two decades, as Apple brings a new captain to the helm of perhaps the world’s most recognizable brand. As Tim Cook departs, new CEO John Ternus, former senior VP of hardware engineering, has a chance to take the company back to the Jobs principle of corporate political neutrality. Getting there, however, will require undoing a good bit of what has happened in recent years against the spirit of that principle.

Where does Apple stand when it comes to political signaling? The company doesn’t seem to be at Target or Bud Light levels of aisle-chasing. But there is also no denying that the brand isn’t perceived as neutral. While the company has had amazing nonpolitical moments (more on that later), it has also been part of a phenomenon we’ve witnessed with many, many companies in recent years: corporate partnerships that started one way and ended another. Many began as ostensible risk mitigation and gradually morphed into sources of risk themselves.

Two glaring examples stand out. One, particularly in light of the DOJ’s recent indictment, is the company’s previous support for the legal nonprofit known as the Southern Poverty Law Center (SPLC). Apple made a $1 million donation to the SPLC in 2017, and then-CEO Tim Cook went further by announcing donation matches to the organization. This isn’t hindsight bias, either. The SPLC has been losing credibility as a neutral source (and gaining a reputation as a corrupt left-wing activist outfit) for years, with the indictment being the latest in a long line of public controversies. Right now, we’re in a moment where many major companies, including Salesforce and Texas Instruments, are backing away from any relationship with the SPLC. To put it nicely, it is decidedly unclear whose pockets SPLC donation money is actually ending up in — and Apple would do well to treat this as a moment to reestablish political neutrality in its charitable contributions.

The second, and more concerning, example is Apple’s current platinum-tier corporate partnership with the Human Rights Campaign (HRC). At the risk of getting too mixed up in acronyms, HRC is one of the leading purveyors of gender ideology in corporate America. Getting a perfect score on the organization’s Corporate Equality Index indicates that a company likely covers highly controversial medical interventions, including hormone regimens and gender transition surgery. Apple gets that perfect score — and the company’s not merely scoring high on activist rating systems but funding the activists outright. It doesn’t signal neutrality, particularly when 65 percent of Fortune 500 companies have cut ties with the HRC, many of them concerned over the group’s increasing politicization and association with the wildly controversial dictates of gender ideology, particularly with regard to children. What started as an activist group urging nondiscrimination protections (something no serious investor or company would oppose) has gradually morphed into a radical activism organization demanding something different. Apple would do well to realize this devolution and reconsider the partnership.

For what it’s worth, there are also signs that the company is at least listening to serious presentations of the concerns. In response to shareholder engagement from Bowyer Research, on behalf of the Christian nonprofit American Family Association, the company announced implementation of stronger anti-CSAM protocols and age limit enforcement in its App Store — a win delivered for the sake of thousands of innocent iPad and iPhone-using children. The company reportedly removed ESG modifiers from its executive compensation, another step toward neutrality we’re also seeing at other major brands like Goldman Sachs. The balance of voices at Apple’s annual shareholder meetings, once entirely slanted to the ESG and DEI-aligned left, is now shifting to reflect a real investor base that may be much closer to Jobs’ 50/50 estimation than many corporate activists ever realized.

This is an opportunity, with a fresh face at the head of the Apple brand, to reclaim that perception as a company that cares about phones and processors, not partisan signaling. There’s trust to be rebuilt, a fact that we’ve explained to the ~100 companies we’ve engaged with this season on behalf of investors. For Apple, we’ve asked for answers about membership in net zero activist coalitions, controversial charitable partnerships, and other incidents like delisting religious apps in its Chinese App Store to appease the CCP. As privacy and free speech concerns swirl around the EU’s Digital Services Act, it remains to be seen what part Apple will play there. But trust, and a reputation for political neutrality, can be rebuilt — and it’s crucial that CEO Ternus sees that opportunity.

One of the most heartwarming Apple moments in recent years came in late 2024 when the company advertised a hearing-impaired father having his life changed by Apple’s AirPods Pro 2 technology. This is Apple at its best. The company does not need to chase applause from political activists. Its technology has brought untold good to the world, and its mission of innovation and thinking differently to solve challenges is a noble one. The free enterprise system rewards companies that meet the world’s genuine needs, from assisting the disabled to creating one of the most widely adopted tech ecosystems on Earth. As it happens, Apple is a firm that does both of those things. 

Apple doesn’t need DEI initiatives to make its mission good for humanity. It already is. Getting rid of the political clouds that obscure that mission is a bright path forward, a move of genuine cultural and business leadership, and a vindication of Jobs’ belief that “in strong companies, the best ideas win.”

US public debt has reached 100 percent of GDP (gross domestic product) for the first time since the aftermath of World War II. Just because we have been here before and managed does not mean we will again. This time is different in important ways that are underappreciated by both policymakers and the public. 

In 1946, the United States emerged from a global war with high debt, but also with a young population, strong growth prospects, and a political commitment to fiscal restraint. Today, America faces the opposite: an aging population, structurally rising entitlement spending, and persistent deficits with no credible plan to rein them in. 

After World War II, crossing the 100 percent threshold marked a turning point. Debt peaked at 106 percent of GDP and then declined rapidly as growth surged and spending fell. This time, crossing the threshold reflects the opposite dynamic: not the end of a temporary emergency, but the continuation of a multi-decade spending and debt binge, driven by unsustainable entitlement promises. 

Do Debt Thresholds Matter? 

Economists have long debated whether there is a specific tipping point at which public debt begins to harm economic growth. While estimates vary, a broad body of research suggests that the risks become more pronounced as debt rises beyond roughly 80 percent of GDP for advanced economies. Sustained debt levels above this range are associated with slower economic growth, reduced investment, and diminished fiscal flexibility. 

The United States has now moved well beyond the range where research suggests debt begins to weigh on growth. High debt levels gradually erode economic performance by crowding out private investment, increasing borrowing costs, and limiting the government’s ability to respond to future crises.

The United States is also different from other advanced economies due to the unique role that the US dollar plays in global financial markets. As the issuer of the world’s dominant reserve currency and a primary supplier of safe assets, the US benefits from what economists call an “exorbitant privilege.” This enables the US government to sustain higher debt levels than other countries. 

Even this privilege is not without limits, however. Estimates suggest that the dollar’s status may expand the US government’s debt capacity by roughly 20 percent of GDP, putting the US threshold where debt begins to weigh on growth closer to 100 percent of GDP than 80. 

And “exorbitant privilege” is not a permanent entitlement, either. It depends on investor confidence, the depth and liquidity of US financial markets, and the absence of credible alternatives to US dollar dominance. Should that confidence weaken, whether from political dysfunction, fiscal irresponsibility, or the rise of competing safe assets, the US advantage could erode.

Counting on privilege as a substitute for discipline is a risky strategy. And allowing higher debt to depress economic potential reduces long-term income growth and Americans’ opportunities. 

Unsustainable Debt Growth 

US debt is not just high, it is on a steep and unsustainable trajectory. Under current policies, federal debt is projected to continue rising indefinitely, reaching levels that would have been unthinkable the last time the US enjoyed a budget surplus in fiscal year 2000.  

The primary drivers are well known: the growth in Social Security, Medicare, and Medicaid, combined with rising interest costs, accounts for the overwhelming share of future debt increases. 

Penn Wharton Budget Model projections show debt approaching 190 percent of GDP by 2050, at which point markets may no longer be willing to absorb additional Treasury borrowing at any price. According to congressional testimony by Penn Wharton Budget Model director Dr. Kent Smetters: “Without major changes to current US fiscal policy, […] the US government will have to default explicitly by not making interest payments, or default implicitly, through debt monetization (inflation), or some combination.” 

Debt that exceeds a country’s economy and is on an upward trajectory signals to investors, businesses, and households that fiscal policy is adrift. Borrowing has become the default, rather than the exception. 

A vicious cycle can ensue. As debt rises, interest costs consume a growing share of federal revenues, leaving less room for productive investments and increasing pressure for further borrowing. Higher interest rates can accelerate this dynamic, creating a feedback loop that is difficult to reverse, where higher debt drives up interest rates, which drive up the need for further borrowing. 
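The compounding dynamic can be made concrete with a toy calculation. The sketch below uses purely hypothetical numbers (a 4 percent interest rate, 2 percent growth, a primary deficit of 3 percent of GDP) — not projections from any of the studies discussed here — to show how a debt ratio starting at 100 percent of GDP snowballs when the interest rate exceeds the growth rate:

```python
# Illustrative debt-spiral arithmetic with hypothetical parameters.
# Each year the debt ratio is inflated by interest, deflated by GDP
# growth, and topped up by new primary borrowing.

def debt_to_gdp_path(d0, r, g, primary_deficit, years):
    """Evolve debt/GDP: d' = d * (1 + r) / (1 + g) + primary_deficit."""
    d = d0
    path = [d]
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + primary_deficit
        path.append(d)
    return path

# Start at 100% of GDP: 4% interest, 2% growth, 3%-of-GDP primary deficit.
path = debt_to_gdp_path(1.00, 0.04, 0.02, 0.03, 25)
print(f"Year 0: {path[0]:.0%}   Year 25: {path[-1]:.0%}")
```

With these (hypothetical) parameters the ratio rises every single year and more than doubles within 25 years — the feedback loop never self-corrects unless growth outpaces the interest rate or the primary deficit closes.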

Already, the federal government spends more on servicing the debt than on protecting the nation against foreign threats. The US debt is the single greatest threat to our national security.  

The Cost of Complacency 

The greatest risk posed by crossing 100 percent of GDP is not immediate crisis. It is complacency. 

The absence of a clear tipping point makes it easy for the government to rationalize continued borrowing. If 80 percent did not trigger a crisis, why worry about 100? If 100 is manageable, why not 120? When legislators are unwilling to course-correct unless the country hits a fiscal cliff, a fiscal crisis becomes a question not of if, but when. 

History shows that fiscal crises rarely arrive with advance warning. They tend to emerge suddenly, when investor sentiment shifts and borrowing costs spike. Countries that believed they had ample fiscal space often discover, too late, that their margin for error has vanished. 

Crossing 100 percent of GDP should serve as a wake-up call, not because it marks a precise tipping point, but because it signals that the United States is on an unsustainable fiscal path. Even absent an immediate fiscal crisis, legislators should slow the growth in the debt, because high and rising debt carries real economic costs, and those costs grow over time. 

The United States still has time to stabilize its fiscal path. But delay will only raise the cost, and increase the risk that inevitable adjustments come through crisis rather than choice. 

A recent cyberattack on the University of Mississippi Medical Center shut down clinic operations for nine days, disrupting appointments and access to care across Mississippi. According to the center’s own official system update, scheduling, communications, and clinical workflows were all impacted.

Nine days without normal access to care is not just a cybersecurity problem. It is a market structure problem.

The University of Mississippi Medical Center is not simply another hospital. It is Mississippi’s only academic medical center and serves as the state’s primary hub for specialty care, physician training, and complex services. By its own description, it provides levels of care “unavailable anywhere else in the state.” That concentration means when UMMC goes down, much of Mississippi’s advanced care capacity goes down with it.

In a competitive system, that should not happen.

When a major provider in most industries goes offline, others step in. Capacity shifts. Customers reroute. The system bends but does not break. In Mississippi, it broke.

A System Built to Concentrate

That fragility is not an accident. It is the result of policy.

Mississippi has long enforced certificate-of-need laws that require government approval before new hospitals, surgical centers, or major medical services can open or expand. These laws are often justified as cost-control measures. In practice, they limit entry and protect incumbents.

Mississippi’s version is among the more restrictive. Applications can cost tens of thousands of dollars, and existing providers are allowed to challenge potential competitors. The effect is predictable. Fewer entrants. Slower expansion. Less redundancy.

Policy analysis by the Mississippi Center for Public Policy found that, without CON restrictions, Mississippi could have supported 30 percent more rural hospitals and 13 percent more ambulatory surgical centers, thereby increasing access in underserved areas. A comparable state without such restrictions would have roughly 165 hospitals, compared with Mississippi’s 116, a difference of more than 30 percent in total capacity in 2017.

That missing capacity matters most when something goes wrong.

Fragility Has Consequences

The cyberattack did not create Mississippi’s access problem. It exposed it.

When a single institution serves as the backbone of a state’s healthcare system, any disruption becomes systemic. Patients do not simply go elsewhere. In many cases, there is nowhere else to go.

That means delayed diagnoses, postponed treatments, and worsening conditions. It means longer wait times in an already strained system. And in extreme cases, it can mean preventable harm.

Across the country, wait times for physician appointments are already rising, particularly for primary and specialty care. Systems with limited competition are less able to absorb shocks, making those delays even more severe when disruptions occur.

This is what lack of competition looks like in practice. Not just higher prices, but reduced resilience.

The Financing Problem

Market structure is only part of the story. The way healthcare is financed amplifies the problem.

Most healthcare dollars do not flow through patients. They flow through insurers, employers, and government programs. That disconnect weakens the most important signal in any market: price.

When patients are not paying out of pocket, providers compete less on value and more on navigating reimbursement systems. Administrative costs rise. Innovation slows. Capacity becomes rigid rather than responsive.

This is the core issue identified in the Empower Patients framework. Healthcare in the United States is dominated by third-party control rather than patient decision-making.

The result is a system that is both expensive and fragile.

What Competition Looks Like

When competition is allowed, the results differ.

Transparent providers such as the Surgery Center of Oklahoma publish prices upfront and often deliver care at significantly lower cost than traditional hospital systems. Direct Primary Care practices offer faster access, longer visits, and predictable pricing by operating outside insurance billing.

These models do more than reduce costs. They add capacity. They create alternatives. They make the system more resilient.

If one provider goes offline, others are available.

Mississippi has fewer alternatives because policy has limited their growth. Even when regulators approved a new hospital in Biloxi, the process revealed how difficult it is to add capacity. The state issued a certificate of need in 2012 for a replacement facility, but incumbent hospitals sued to block the project, delaying it for years, arguing it was not a true replacement. That prolonged fight stemmed from the original plan to build a new hospital to replace Gulf Coast Medical Center after it was destroyed by Hurricane Katrina. In short, even obvious community needs can be slowed by legal challenges from existing providers. 

The pattern continues: recent consolidation has further strengthened dominant systems on the Gulf Coast, and while some policymakers pursue only incremental changes to certificate-of-need laws, others call for a broader overhaul of the state’s restrictions.

A map depicting states where an incumbent competitor may object to a new facility. Image credit: The Mississippi Center for Public Policy.

A Warning for Policymakers

The Mississippi cyberattack should be viewed as a warning, not an anomaly. It revealed how vulnerable a healthcare system becomes when competition is restricted and capacity is concentrated. What looks efficient on paper can be fragile in practice.

Mississippi is not an outlier. Thirty-five states and DC operate under certificate-of-need laws that limit the number of new providers and restrict expansion. States have been working to improve their CON laws, reflecting a growing recognition that the current structure is too rigid. But incremental reform will not solve a structural problem.

A Better Path Forward

A more resilient healthcare system that empowers patients requires more than cybersecurity upgrades. It requires policy change.

First, remove the entry barriers that keep new providers out of the market. In Mississippi, repealing CON restrictions could support 30 percent more rural hospitals and 13 percent more ambulatory surgical centers, meaning more options for patients and more capacity when disruptions occur.

Second, shift financing toward patient control. When individuals manage their own healthcare dollars, they have an incentive to seek value, compare options, and demand better service.

Third, reduce regulatory burdens that divert resources from care to compliance.

These changes would not only lower costs. They would make the system stronger.

The Real Lesson

Mississippi’s healthcare system did not fail because of a cyberattack alone. It failed because it lacked the flexibility and redundancy to respond. One hospital system should never be a single point of failure for an entire state.

The way to prevent that is not more centralization. It is more competition, more capacity, and more patient control. That is the lesson Mississippi offers — and it is one policymakers across the country should take seriously.

The so-called “AI race” is propelling stock markets to new highs even as geopolitical turbulence rattles investors. Artificial intelligence may prove to be the rare technological revolution capable of generating real growth despite the headwinds of tariffs and misguided industrial policy. Yet the data centers powering this next generation of innovation have become a flashpoint for public anxiety. Maine has outright banned new large data center construction, and average Americans are increasingly convinced that these facilities are to blame for rising electricity bills. 

The statewide data, however, tell a different story. Newly published research finds no meaningful link between the number of data centers in a state and its electricity prices, and points instead to a far less glamorous culprit: bad state energy policy. 

A March 2026 study from the Institute for Energy Research (IER) examined whether data centers are responsible for rising electricity prices across the United States. The answer, based on state-level data, is no. Across all 50 states, there is no statistically significant relationship between the number of data centers and electricity prices. The top ten data center states averaged 14.46 cents per kilowatt-hour in 2025, virtually identical to the 14.39 cents average across all other states.  

Perhaps the study’s most counterintuitive finding is its strongest: states where electricity sales grew faster actually paid less for electricity. High-growth states averaged a 20 percent price increase from 2015 to 2025, while low-growth states averaged nearly double that at 39.4 percent. Unlike most goods, electricity is priced by spreading high fixed costs like transmission lines, generation capacity, and long-term contracts across every kilowatt-hour consumed, meaning the more power that flows through the grid, the cheaper each unit becomes in the long run. Data centers, by driving demand up, actually spread fixed grid costs across more kilowatt-hours, which results in a per-unit rate decrease for everyone. 
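The fixed-cost arithmetic behind that finding is easy to illustrate. This minimal sketch uses hypothetical numbers (a $2 billion annual fixed grid cost and a 5 cents/kWh marginal cost — figures chosen for illustration, not drawn from the IER study) to show why the average rate falls as sales volume grows:

```python
# Fixed-cost spreading: a grid with large fixed costs charges a lower
# average rate as more kilowatt-hours flow through it.

def average_rate(fixed_cost, marginal_cost, kwh_sold):
    """Retail rate in $/kWh if fixed costs are recovered evenly across all kWh."""
    return fixed_cost / kwh_sold + marginal_cost

# Hypothetical grid: $2B in annual fixed costs, 5 cents/kWh marginal cost.
FIXED, MARGINAL = 2_000_000_000, 0.05

low_demand = average_rate(FIXED, MARGINAL, 20_000_000_000)   # 20 TWh sold
high_demand = average_rate(FIXED, MARGINAL, 25_000_000_000)  # 25 TWh after new demand

print(f"Low demand:  {low_demand * 100:.1f} cents/kWh")
print(f"High demand: {high_demand * 100:.1f} cents/kWh")
```

Under these assumptions, growing sales from 20 to 25 TWh cuts the average rate from 15 to 13 cents per kWh, even though total costs rose: the fixed portion is simply spread across more units.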

So why are so many Americans convinced otherwise?  

Because in the short run, at the local level, the story is more complicated. A Bloomberg analysis of wholesale electricity prices across 25,000 grid nodes found that prices have risen as much as 267 percent since 2020 in areas near major data center clusters. More than 70 percent of nodes recording price increases were located within 50 miles of significant data center activity.  

In these regions, data centers create a surge in demand on local grids. When transmission capacity is constrained and new generation has not yet come online, prices spike. Those higher wholesale costs can then filter into retail bills, at least in the short run, and local consumers bear the brunt of this regional electricity demand.  

The discrepancy between the two studies reflects both a measurement difference and a timing problem. The IER study measures retail prices averaged across entire states; Bloomberg measured wholesale prices at specific grid nodes near data center hubs. And while the long-run economics favor more demand, infrastructure takes years to build, and consumers near data center hubs can be left paying for that gap in the meantime. 

State averages mask local effects. Northern Virginia’s price pressure gets diluted when blended with rural Appalachia, for example. Both findings can be simultaneously true: data centers are not driving broad statewide price divergences, but they can create localized grid strain where infrastructure and regulatory frameworks have failed to keep pace with demand. 
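How much a statewide average can dilute a local spike is, again, just arithmetic. The sketch below uses invented consumption shares (a hub carrying 5 percent of state load) purely to show the masking effect:

```python
# Illustrative only: hypothetical load shares showing how a
# consumption-weighted statewide average can hide a sharp local increase.

def statewide_avg(prices: dict[str, float], kwh: dict[str, float]) -> float:
    """Consumption-weighted average retail price across regions ($/kWh)."""
    total_kwh = sum(kwh.values())
    return sum(prices[r] * kwh[r] for r in prices) / total_kwh

kwh = {"data_center_hub": 5e9, "rest_of_state": 95e9}  # hub is 5% of load

baseline = statewide_avg({"data_center_hub": 0.12, "rest_of_state": 0.12}, kwh)
# Hub price jumps 50 percent; the rest of the state is flat.
spiked = statewide_avg({"data_center_hub": 0.18, "rest_of_state": 0.12}, kwh)

print(f"baseline: {baseline:.4f} $/kWh")  # 0.1200
print(f"spiked:   {spiked:.4f} $/kWh")    # 0.1230
```

A 50 percent local price jump moves the statewide average by only 2.5 percent in this toy case, which is why state-level regressions and node-level wholesale data can tell such different stories without contradicting each other.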

That distinction matters enormously for policy. Concentrated price spikes are not evidence that data centers are inherently incompatible with affordable electricity; they are evidence that grid infrastructure and cost allocation rules haven’t kept up. Oregon’s POWER Act, which requires large electricity users to bear the costs of infrastructure built specifically for them, is a model worth watching. By creating a separate rate class for data centers and requiring long-term contracts so they pay for the grid upgrades they demand, the law moves closer to a core market principle — prices that reflect true costs. However, it still relies on regulators rather than competitive markets to set those prices. These are targeted, incremental steps toward aligning prices with actual costs, far preferable to blunt restrictions that distort markets and stifle investment. 

The deeper problem, as a growing body of research makes clear, is state energy policy itself. A Charles River Associates report found that rate increases are heavily driven by local regulatory conditions, particularly policy environments in California and the Northeast. A Lawrence Berkeley National Laboratory study identified renewables portfolio standards, particularly in states with costly incremental renewable supplies, as a consistent driver of rate increases. The common thread is that electricity prices are fundamentally a product of institutional design, not data center headcounts. 

America is not going to win the AI race by making it harder to build the infrastructure AI requires. The moment calls for the opposite instinct: streamlining permitting for new generation and transmission, updating interconnection queues, ensuring large electricity users pay for their full cost to the grid, and getting out of the way of the demand growth that tends to lower per-unit costs over time. The appropriate response to rising electricity costs near data center hubs is to reduce barriers to grid expansion and allow energy supply to scale alongside demand, preserving both affordability and competitiveness.