America’s fiscal and monetary problems look like two separate crises. They aren’t. Runaway government spending and an unruly Federal Reserve are two sides of the same coin. When Congress spends beyond its means, it creates pressure on the central bank to print money and paper over the debt. When the Fed operates without clear rules, it becomes the silent enabler of fiscal recklessness. Fix one without fixing the other and you haven’t solved anything. That is where we find ourselves today.
As I argued in my first book, the Fed has a rule problem: It doesn’t have one. For decades, monetary policymakers have operated under broad discretionary authority, adjusting interest rates and the money supply based on their judgment about what the economy needs. The results have been disappointing.
The case against discretionary monetary policy runs along two tracks: one about competence and one about legitimacy.
Start with competence. Central bankers face serious information problems. The economy is vast and complex, and the signals it sends are noisy. Policymakers receive data that is incomplete, revised, and often contradictory. By the time the Fed diagnoses a problem and adjusts policy, the underlying conditions may have already changed. Discretion sounds like flexibility. In practice, it often means groping in the dark.
But information problems are only half the story. Incentive problems compound them. Bureaucracies develop institutional interests of their own. The Fed, like any government agency, responds to political pressures, professional norms, and the priorities of its leadership. Monetary economists — the experts who advise the Fed and evaluate its performance — constitute their own interest group. They have professional stakes in a powerful, discretionary central bank. And then there’s perhaps the biggest incentive problem of all: the looming threat of fiscal dominance. It’s time to stop thinking about monetary policy in a vacuum.
There is a deeper question here, as was recognized almost 50 years ago by economists Thomas Sargent and Neil Wallace: are fiscal policymakers or monetary policymakers in the driver’s seat? When Congress and the Treasury spend freely and accumulate debt, they create pressure on the central bank to monetize that debt. If the fiscal authority “moves first” and the Fed “follows,” then monetary policy becomes an instrument of fiscal control, not an independent check on inflation. That is precisely what happened after 2020. The government spent at wartime levels even as the emergency receded, and the Fed soon accommodated. Inflation naturally followed.
So the problem is not simply that the Fed made mistakes. It is that the institutional structure invites those mistakes. A discretionary Fed embedded in a debt-heavy fiscal environment will tend to prioritize the short-term over the long-term, accommodation over restraint, and political convenience over monetary discipline.
The solution is a Fed regime change. We need actual legislation to change the central bank’s mandate. Administrations change. Personnel change. But laws can become, as James Buchanan put it, “relatively absolute absolutes.” If Congress replaces the Fed’s current mandate, which includes employment and interest rate targets alongside price stability, with a single, clear mandate for price stability, the Fed can credibly commit to refrain from underwriting future deficit spending. Congress can’t count on the Fed bailing it out if the Fed’s price level target limits the printing press.
The goal is not to make the Fed powerless but to make its power legible and therefore predictable. A rule-bound Fed, focused solely on price stability, empowers planning by businesses and households. It rewards saving. It discourages the kind of speculative boom-and-bust cycles that discretionary policy tends to produce. And it will force fiscal policymakers to get their mismanaged affairs in order.
Other proposed solutions won’t work. First, we should reject presidential control over monetary policy. Giving the executive branch direct authority over interest rates would politicize money even further. Second, simply appointing more “conservative” central bankers offers no durable fix. Hawkish Fed chairs come and go; without a reformed mandate, the institutional logic reasserts itself.
Inflation has cooled from its recent peaks and deficits are not as high now as during the COVID period, yet the underlying institutional dysfunction remains. The Fed is still improvising, still subject to fiscal pressure, still operating without the kind of clear rules that would make its behavior predictable and its decisions defensible. Monetary policy by bureaucratic fiat is not good enough. To prevent money mischief and fiscal folly, only the discipline of rules will do. The solution is a single mandate: price stability alone.
America has spent more than $20 trillion on fighting poverty since the introduction of President Johnson’s Great Society program in 1964. Sixty years later, how are we doing?
That depends, as it turns out, on how you measure it.
Last month, Senator Kennedy (R-LA) introduced a bill that would require the Census Bureau to report a new poverty metric as an alternative to the Official Poverty Measure (OPM) by including both cash and non-cash welfare benefits in its calculations.
As Kennedy points out, this is a much-needed fix. The OPM’s methodological weaknesses are well documented. Most notably, it ignores the hundreds of billions of dollars the government spends each year to assist low-income families through tax credits like the Earned Income Tax Credit and in-kind transfers such as Medicaid, food stamps, and housing subsidies. It also overstates inflation and relies on outdated assumptions about food spending. In short, the OPM paints an egregiously inaccurate picture of material poverty in America.
When one includes taxes and transfers, as economists Richard Burkhauser and Kevin Corinth did in a recent National Bureau of Economic Research working paper, the “full-income” poverty measure sat at just 3.7 percent in 2023 — 1.6 percent after also counting employer-provided health insurance — a far more optimistic picture than the OPM’s 11.1 percent for the same year.
That sounds like a triumph. But Burkhauser and Corinth take it one step further and use their “full-income” measure to track changes in the poverty rate dating back to 1939.
Contrary to popular belief, they find that the greatest era of poverty reduction happened before Johnson declared war on it.
From 1939 to 1963, absolute full-income poverty plummeted by 29 percentage points, from 48.5 percent to 19.5 percent. Then, despite the government pouring trillions of taxpayer dollars into combating poverty, poverty fell by only 15.7 percentage points from 1963 to 2023. Barely half the progress in more than twice the time.
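The arithmetic behind that comparison is easy to check. A minimal sketch, using only the figures quoted above from Burkhauser and Corinth:

```python
# Full-income poverty declines, in percentage points, per the figures above.
pre = {"start": 1939, "end": 1963, "drop_pts": 48.5 - 19.5}  # 29.0 points
post = {"start": 1963, "end": 2023, "drop_pts": 15.7}

for era in (pre, post):
    years = era["end"] - era["start"]
    era["pts_per_decade"] = 10 * era["drop_pts"] / years
    print(f"{era['start']}-{era['end']}: {era['drop_pts']:.1f} pts over "
          f"{years} yrs = {era['pts_per_decade']:.1f} pts per decade")
# Roughly 12.1 points per decade before 1964 versus 2.6 after:
# barely half the total progress, in more than twice the time.
```

Expressed as a per-decade pace, the slowdown is even starker than the raw totals suggest.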
But the stagnating decline is only half the story. The more consequential difference is what drove it.
Before 1964, the main engine of poverty reduction was increases in market income — a measure that includes wages, salaries, and other forms of employment income. From 1939 to 1959, market income poverty fell by 26.1 percentage points, accounting for nearly all of the 27.3-percentage-point decline in full-income poverty among working-age adults over the same period. In short, before the rapid expansion of the welfare state, most people were earning their way out of poverty.
After 1964, that engine stalled. Market income poverty fell by just 3.9 percentage points from 1967 to 2023, while post-tax, post-transfer poverty fell by 10 percentage points. In other words, though poverty has continued to decline over the past six decades, most of that progress was due to the ever-expanding generosity of government transfers.
While low-income Americans were benefiting from the biggest poverty reduction in the country’s history, the percentage of working-age adults relying on government transfers for more than half their income decreased from 2.9 percent in 1939 to 2.7 percent in 1959.
By 2023, this number had nearly tripled to 7.6 percent, even reaching as high as 15 percent in some years.
As Mercatus scholar Jack Salmon put it: “The War on Poverty changed the how of poverty reduction, but it didn’t accelerate the how much.”
If anything, by changing the former, it may have blunted the latter. A 76 percent increase in real median income, paired with rising employment and higher productivity, all driven by rapid postwar economic expansion, pulled more people out of poverty in 24 years than trillions of dollars in government-imposed wealth redistribution have done in 60.
Some may argue that this trend is to be expected. After all, reducing poverty from 48 percent to 20 percent is arithmetically easier than reducing it further because there are simply fewer people left below the poverty line, and those who remain tend to face the most entrenched barriers to self-sufficiency.
Fair enough. But as Burkhauser and Corinth point out, full-income poverty largely stagnated starting in the 1970s — right as welfare spending was ramping up dramatically. In short, taxpayers have been paying for a multitrillion-dollar boondoggle that has yielded steadily diminishing returns.
So, what was the main driver behind the pre-1964 miracle? Simple: Economic growth.
The pre-1964 record, along with centuries of evidence, suggests that nothing has worked better than economic growth in helping individuals, especially those at the bottom of the income ladder, to achieve a higher quality of life. Across the world, economic growth driven by liberalization helped pull almost one billion people out of extreme poverty from 1990 to 2010.
Here at home, the pattern still holds. The Fraser Institute’s research shows that North American states with higher and increasing levels of economic freedom tend to have higher income growth and employment, more income mobility, especially among low-income households, higher economic growth, less homelessness, and lower levels of food insecurity.
The fruits of economic growth are visible in ways that poverty statistics fail to capture, especially for America’s poor. As Joseph Heath points out, 95 percent of American households below the poverty line have electricity, indoor plumbing, a refrigerator, a stove, and a color television. More than 80 percent have an air conditioner and a cell phone, and two-thirds own a washing machine and dryer. Economic growth, not government programs, made these once-luxury goods, previously out of reach even for many wealthy households, accessible to nearly everyone. It continues to bear fruit today: wages for typical American workers are at all-time highs.
The most powerful anti-poverty program had no enrollment forms, caseworkers, or spending bills. It was a growing economy that helped millions of people earn their way to a better life. As such, subsequent efforts should focus on removing government-created barriers to economic growth, occupational opportunities, and job market entry rather than adding another layer of expensive, inefficient wealth transfers.
Senator Kennedy is right to say we need a more accurate measure of poverty. When analyzing the best ways to combat poverty, policymakers should reflect on whether the welfare state was ever the right tool for the job.
The extended partial government shutdown has led to long lines of frustrated passengers at airports nationwide as unpaid Transportation Security Administration (TSA) agents walk out. Officials even warn that small airports may shut down due to the absences. If we for a moment disregard the Washington Monument syndrome likely also at play, the lesson to be learned here is not the importance of funding government services — but the exact opposite.
The TSA has a long history of failing to such a degree that it could never survive had it not been run by and within the government. Costing taxpayers and travelers $10 billion annually, not counting the inconvenience and time lost, the agency fails even on its own terms: in covert security tests, screeners failed to detect prohibited items more than 90 percent of the time in 2015. The same in 2017. If these data seem dated, it is because they are. Instead of fixing the problems, the agency classified the results of its internal testing. In the absence of data, the only reasonable interpretation is that the agency remains a catastrophic failure to this day.
The recent airport chaos underscores how security theater has become an unbearable bottleneck. It also shows how dysfunctional government services impose costs beyond the resources they waste and the inconvenience they cause. The difference between government services and market solutions offered by businesses is stark. A private business that fails to deliver loses customers, and therefore both revenue and market share. Its failure is its own problem, which is a strong incentive to fix it.
As a government agency, the TSA’s failure is not its problem but is instead shifted onto travelers (their “customers,” as it were), who are, in some cases, left waiting six hours in line to get through the security checkpoint. In fact, this failure can easily be construed as a benefit for the TSA, which now — because the government requires all passengers to pass through its bottleneck — has leverage to demand more funding. As a result, the destruction wrought by dysfunctional government becomes an argument for more of it, and taxpayers are left with the bill.
The arguably zero value added by the TSA’s security theater thus becomes a self-reinforcing bloating of the bureaucracy, making the agency an ever-expanding jobs program that burdens taxpayers while harassing travelers.
Imagine if security had instead been the responsibility of airlines. Rather than cause constant delays and inconvenience, it would be in the airlines’ interest to streamline the process and make it as unobtrusive as possible. A failure to staff security functions would not be travelers’ (customers’) problem but the airlines’, who benefit only when we fly — and remain liable to keep travelers safe. The TSA has no such responsibility.
But government provision is worse than its perverse operating incentives alone can explain. We often fail to realize that what exists in the present is a result of developments in the past and that the future too will be different. In other words, the market economy, like society overall, is a process in constant flux, not a static state. Privately provided security would, just like any other service offered in the market, be subject to constant innovation — creative destruction, as economist Joseph Schumpeter called it.
Creative destruction is the power of disruptive entrepreneurship to cause leaps of improvement. As entrepreneurs introduce innovations that bring great benefit, consumers abandon the solutions they previously chose to use. For example, when Henry Ford introduced the Model T, people flocked to the affordable automobile — the greater value — and stopped relying on horses and carriages. The automobile became the new, higher standard for transportation. Automobile manufacturing and gas stations replaced horse breeders and stables.
We would thus see continuously improved security measures provided at lower cost — taking less time and being more convenient for travelers. The value to airlines is that it benefits their customers. It’s a competitive advantage and a value-add.
The very opposite is true for government services such as the TSA. They gain nothing from providing the service they are tasked with effectively and efficiently. Indeed, if the TSA found ways to reduce costs, the agency’s budget would likely be cut in response. It would effectively be penalized for improving.
And therein lies the crux: government agencies have little or no incentive to serve the users of their service. But private businesses stand and fall by providing customers with value. It is no surprise, therefore, that airport security is a hassle and inconvenience — and that it is expensive. The TSA is a bureaucracy and a jobs program that does not keep us safe.
Recognizing this fact helps us understand the chaos at airports. More funding would do more harm than good.
The strains emerging in the roughly $3 trillion private credit market are no longer isolated anecdotes; they are coalescing into a coherent signal of tightening financial conditions at precisely the wrong moment for the broader economy.
As discussed previously, a growing list of developments has unsettled investors. Now, on top of those at Blue Owl Capital, Apollo Global Management, and Morgan Stanley’s North Haven Private Income Fund, JPMorgan Chase has begun marking down private credit loans. Concerns have gone international. These are not systemic failures, but they do mark the transition of private credit from a benign, yield-enhancing allocation into a market experiencing its first meaningful credit cycle. The sector, which expanded rapidly after the Global Financial Crisis as banks retreated from riskier lending, now faces the test of higher rates, weaker borrower fundamentals, and more discerning capital.
It is critical to mention (or reiterate) that this is not shaping up as a 2008-style solvency crisis. The private credit market is small, leverage is generally lower, and there is little evidence of the kind of widespread fraud or securitization opacity that defined the subprime mortgage crisis. But that comparison risks missing a more relevant dynamic: private credit is a tightening mechanism. Its problems do not need to trigger bank failures to matter. Instead, they transmit stress through funding channels, into refinancing constraints, and ultimately into valuation pressure. Banks’ exposure — variously estimated from under $100 billion to potentially near $1 trillion globally when commitments are included — creates a feedback loop whereby losses or even perceived risks in private credit lead to tighter lending standards broadly. That tightening does not remain contained; it ripples outward to affect middle-market firms, consumer borrowing, and, ultimately, aggregate demand.
The mechanics of that tightening are already visible. Higher yields increase borrowing costs directly, but they also operate indirectly by raising discount rates, lowering asset valuations, and making refinancing more difficult. Private credit funds, often reliant on bank revolvers and leverage to enhance returns, become more fragile as funding costs rise. Borrowers — especially highly leveraged, floating-rate borrowers such as software firms — face a double bind of rising debt service burdens and deteriorating business prospects, particularly in sectors now facing disruption from generative AI.
Estimates that 15 percent to 25 percent of private credit portfolios are exposed to such firms underscore the vulnerability, with some projections suggesting default rates could approach eight percent in stressed scenarios. Even absent widespread defaults, the marginal borrower is already being shut out, and that is the entire point: credit availability is shrinking.
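The floating-rate double bind can be made concrete with a minimal sketch of a leveraged borrower. All the numbers here (EBITDA, debt load, lender spread) are illustrative assumptions, not figures from the market data above:

```python
# Hypothetical floating-rate borrower; all figures are illustrative.
ebitda = 100.0   # annual earnings, $mm
debt = 500.0     # floating-rate loan balance, $mm
spread = 0.055   # assumed lender spread over the reference rate

def interest_coverage(base_rate: float) -> float:
    """EBITDA divided by annual interest expense."""
    return ebitda / (debt * (base_rate + spread))

for base in (0.005, 0.030, 0.055):
    print(f"base rate {base:.1%}: coverage {interest_coverage(base):.2f}x")
# Coverage falls from about 3.3x to about 1.8x as the base rate rises,
# before any deterioration in the business itself.
```

The point of the sketch: rate increases flow straight through to debt service on floating-rate structures, squeezing coverage even when earnings hold steady.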
[Chart: Bank of America Private Credit Proxy (white), VettaFi Private Credit Index (blue), and Indxx Private Credit Index (orange), 2018–present. Source: Bloomberg Finance, LP]
This tightening is unfolding against an increasingly unfavorable macro backdrop. Energy prices are rising, renewing inflationary pressure into an environment where disinflation had only recently begun to take hold. At the same time, yields across the curve have been moving higher, reflecting both inflation concerns and increased term premia. Rate futures markets, which had priced a steady path of easing, are now assigning a small but meaningful probability that policy rates could end the year higher rather than lower. That shift matters disproportionately for private credit, where floating rate structures and short-duration funding expose both lenders and borrowers to immediate changes in financing conditions.
The result is a reinforcing cycle. Higher energy prices push inflation expectations upward, keeping central banks cautious. That sustains higher yields, which tighten financial conditions directly and through channels like private credit. As private credit funds pull back, mark down assets, or restrict redemptions, confidence weakens and liquidity becomes more selective. This, in turn, constrains investment and hiring, not only at other companies but in some cases at the very firms that have come to depend on private lending. It is a quieter, more diffuse form of stress than in 2008, but consequential nevertheless.
Two factors are making the current moment particularly delicate. The first is that pressures are converging rather than offsetting. In prior cycles, falling energy prices or relaxing yields might have cushioned a credit tightening episode. Today, the opposite is occurring: energy, rates, and credit conditions are all moving in a direction that arrests growth. Private credit is not the epicenter of a crisis, but it is an increasingly important transmission channel through which macro tightening is being amplified.
The second is how much remains unknown. There is no centralized reporting, and visibility into indirect exposures is limited. In fact, there is no consistent definition of what the concept of private credit as an asset class ultimately encompasses. Also unclear is where the risks ultimately reside: would losses stay within private credit vehicles, migrate onto bank balance sheets, or into retail portfolios, pensions, and insurance structures that may not fully disclose their exposure? While the situation does not threaten a “Lehman moment” in scale or leverage, the lack of transparency means policymakers and analysts cannot confidently assess whether stresses will remain contained or propagate through tightening credit conditions, making the key risk not what is visible, but what remains hidden.
The emerging strains in private credit should be understood less as a harbinger of systemic collapse and more as an early indicator of economic deceleration. The asset class is doing what credit markets ultimately do in late-cycle conditions: becoming more selective, much more expensive, and far less forgiving. While far from inevitable, that process, especially when synchronized with rising input costs and a shifting rate outlook, is unlikely to be benign.
Two years after the European Union (EU)’s Digital Markets Act (DMA) took effect, the results have been mixed to negative. Promises about certainty, lower enforcement costs, and a more innovative and competitive digital ecosystem haven’t materialized.
Rather than learn from Europe’s mistakes, Californian policymakers and federal proponents of Sen. Amy Klobuchar (D-MN)’s American Innovation and Choice Online Act (AICOA) would import similar ideas to ostensibly help small businesses and hold tech giants accountable. The EU’s experience shows that DMA-style proposals aren’t just unlikely to achieve these goals. They’re also likely to harm consumers, competition, and innovation.
The DMA was intended to support “fairness” and “market contestability” for small businesses that rely on large digital “gatekeeper” platforms, like Amazon, Google, and Meta, to reach customers. The “gatekeepers” are mainly American tech giants. The DMA bans them from engaging in certain business practices, even if those practices benefit consumers or competition.
For instance, the DMA prevents Google from integrating its Maps, Flights, and Hotel Ads tools into search results, since doing so would be “self-preferencing” over third-party booking sites. Evidence shows that this ban has degraded the user experience by increasing the number of clicks required to see prices and make bookings, leading to fewer hotel bookings. Similarly, Apple may no longer exclude third-party apps and app stores from its App Store and iOS, even though opening the platform has degraded security features, IP protections, and trustworthiness in Apple’s products by increasing the proliferation of pirated, less secure, and pornographic apps.
These mandates help some businesses, but harm others, including developers of apps aimed at children who rely on parental trust in the highly curated app store, and hotels that benefited from traffic directed through Google’s tools. Rather than upholding competitive markets, they let governments “pick winners” and undermine digital platforms’ ability to differentiate themselves or experiment to better meet consumer and business needs. This goes against American antitrust law’s focus on consumer welfare over punishing firms for size and success, or shielding businesses from competition, an ethos that has let the US produce leading tech firms that have eclipsed would-be European peers.
Like the DMA, AICOA bans large digital platforms from self-preferencing and from using third-party seller and service provider data to refine their own offerings or better serve consumers—even though such practices are routine in non-digital industries, like grocery stores. The bill also claims to provide legal certainty for businesses, yet its language is vague and grants regulators broad discretion. For example, it uses amorphous phrases like “materially harm,” which courts must interpret without precedent, and allows the FTC to define what constitutes an anti-competitive practice through guidelines.
In Europe, the DMA’s ambiguity about the conditions and costs a platform can impose on third-party services—intended to maintain security and ensure fair value—has led regulators to impose heavy, retrospective fines on Apple without providing clear instructions for compliance, all while soliciting feedback from competing app stores and developers on what Apple should do. This uncertainty has delayed the rollout of new features, including AI tools, for European Apple and Google users.
AI development depends on deploying new technology at scale to gather data, refine foundation models, and solicit user feedback. Rules like the DMA, which create legal uncertainty and impose arbitrary limits, can discourage AI infrastructure and software investments, stifle innovation, and undermine U.S. tech leadership, as well as the ability of small businesses that rely on AI-integrated platforms to compete.
Unlike AICOA and the DMA, recent California Law Revision Commission (CLRC) recommendations, which could be adopted by that state’s legislature, apply even to non-digital businesses and would dramatically lower evidentiary thresholds for market power. The reforms penalize broad swathes of conduct by firms deemed to hold “significant market power,” including self-preferencing, without any need to show likely or actual consumer harm or to weigh pro- and anticompetitive effects. By banning “predatory pricing” without requiring a showing that alleged offenders would likely recoup their losses by raising prices later, the reforms discourage businesses from legitimately competing on price. The CLRC’s proposals radically pivot antitrust law from protecting consumers to protecting competitor businesses and stakeholders such as “trading partners.”
Such restrictions arbitrarily favor some businesses over others, leaving the competitive process at the mercy of government diktats instead of consumer demand.
Existing US federal and state antitrust laws already punish tech giants and platforms for anti-competitive behavior on a case-by-case basis, an approach that also lets judges limit any inadvertent restrictions on competition or harm to consumers that could result from legal remedies, as recent rulings against Google and Apple show. Existing laws can and should be strengthened only if there is a strong rationale supported by economic evidence. Importing flawed foreign competition policies would only empower government officials and some competitors at the expense of consumers, innovation, and America’s global competitiveness.
A good few years before the AI craze, my Oxford lecturer gave a presentation on the shifting nature of work. An economic historian by trade, Judy Stephenson traced the arc of compensation from labor market considerations in early modern London, and wove a full-circle tale of being paid piecemeal (output) in the nineteenth century, to hours (input) for most of the twentieth century, and then back again in the twenty-first-century gig economy.
Delivery and ride-sharing services were the intelligentsia’s major labor-market concerns during the 2010s. You were paid not for your time but for the output you quite often physically delivered, with resulting debates over unions, safety, and wages.
Stephenson accounted for the changes on very Coasean terms: In the assembly-line work of a century ago, it wasn’t worth the transaction costs of figuring out exactly whose contribution was worth how much, so you just roughly averaged out everyone’s hours with some extra perks for responsibility or long service. And compared to the at-home weavers of the century before, it was much less clear who was responsible for the exact value-add. Put differently, the loss of efficiency associated with time-based pay (free-riding, monitoring, slacking off, or shirking work) might have been less than the costly efforts necessary to constantly re-establish rates for specific tasks.
Economic textbooks, heavy on the modeling, might imply that performance pay is more efficient since it aligns incentives and minimizes free-riding. Enter computers and digital markets matching supply and demand, plus standalone gig workers entirely responsible for their own output. Those institutional changes shifted the bargaining power and the Coasean transaction costs involved — making the real world so much more like the sketched model of an economics textbook.
Easily Replicated Abundance Meets the Economics of Infinite Content
There’s an obvious self-selection in the current labor-related worries coming our way: It is precisely those of us who have invested most in the credentialist commentariat, who have sacrificed our lives and oriented our identities around the very cognitive and generative skills that LLMs now so effortlessly replicate, who are sounding the alarm.
It’s no longer that hard to have ChatGPT write like me (just train it on my past writing), have Claude code like a programmer with a decade of experience, or have a combined AI effort produce a beautiful, two-minute, period-piece ad spot for $100.
In The Great Harvest, a recent and ironically mostly AI-generated book, Adam Livingston captures the white-collar workplace revolution underway: It’s “not that your career will vanish overnight but that it was always just a fragile assemblage of solvable problems, [… your job] was actually a collection of separate functions waiting to be identified, isolated, and optimized away.”
The music industry and the economic value of songs were early indicators here, with supply and production far surpassing any feasible consumption on the other side. Even though the economic threat originally stemmed from pirated rather than easily replicated material, the marginal value unavoidably fell to around zero. While Taylor Swift rakes in royalties from streams and other artificially scarce legal arrangements, she generates more economic value from concerts and merch. Her physical being becomes the ultimate rivalrous, nonreplicable luxury good.
With the marginal cost of producing videos, images, music, or words going to zero, we should have expected infinite content and next-to-no meaning — see YouTube, TikTok, or Twitter.
With the rest of the arts and the white-collar knowledge industry up next, it’s a bit of an economist’s puzzle why prices (i.e., wages) haven’t dramatically fallen yet to reflect the new relative scarcity and the now much more abundant supply — stories of social anchoring or nominal contract rigidities, no doubt. So far, we’re much more likely to see quantities adjust, meaning fewer workers or worse labor-market conditions for programmers, journalists, accountants, and other white-collar jobs.
Where’s the Value? Humans as Tastemakers
A lot of digital ink has been spilled on trying to identify where we go from here. In a world of informational abundance and adequately generated text at everyone’s fingertips, where’s the economic value?
“Brainpower is now a commodity that is going cheap,” Andrew Yang reflected this month. Perhaps the best thing we can say about his UBI-infused presidential bid in 2020 is that it was premature.
“We have a love-hate relationship with working for a living,” Tim Harford observed for the Financial Times; the pain and hardship of working is heavily bound up with meaning and identity. Fred Krueger and Ben Sigman, in another recent book, observe that the “labor theory of value collapses when machines do all the labor,” and that “scarcity pricing becomes meaningless when AI makes many things abundant.” As intelligence becomes infinite, they conclude, the finite becomes infinitely valuable.
These reflections might as easily have been titled “The Return of the Labor Theory of Value,” not because the LTV was a particularly revolutionary economic theory, but because of what it indicates about our infinitely replicable information and knowledge system going forward. If everything from music to code, words, and video can be created at the press of a button, the only scarce thing left besides the physical world is our human attention: the things we choose to do, choose to look at, choose to labor on.
Fiction writers, faced with the nearly infinite onslaught of storylines and millions of predominantly self-published titles each year, have realized this: Their words or imagined characters may not be scarce, but the very fact that they labored intensively over them is what other humans recognize as worthwhile and impressive. (We might ultimately decide to pay a premium for human connection, attention, or presence.)
Book sales, while pretty stagnant in nominal and real terms, might be monetary votes of appreciation more than actual desire or follow-through to consume the work.
These industries once had an overhang of gatekeepers and tastemakers deciding what counted as good music, good art, good writing, or good journalism. Over the past few decades it may have felt liberating to have the gatekeepers shoved aside via technological means, but only now that they’re gone are we starting to miss them. The artificial scarcity they imposed conferred excess economic value on songs and books and movies that can now be generated and duplicated by the millions.
One way out, then, is to recreate the gatekeeping — not in production, where that ship has sailed, but in attention and awareness. We might look to respectable minds, as we once did to respectable labels or studios or outlets, not to report what is, in the journalists’ style, but to tell us what matters. We would trust their vision, using their long and somewhat obsolete experience as a filtering mechanism against the information overload we’re otherwise doomed to.
Muscle lost its economic dominance long ago; we all know that story. Now that machines are coming for the brains, we have a similar story of scarcity, abundance, and obsolete skills to contend with. What remains scarce — attention, trust, physicality, judgment, embodied presence — will command the rent.
Paul Ehrlich, famed biologist, died last week at age 93. Ehrlich rose to fame in the 1960s as the author of a book that resonated powerfully with the public, The Population Bomb, and became a recurring guest on late-night talk shows and a frequent subject of discussion in all the major newspapers. The even more famous Johnny Carson, interviewing him in 1980 — more than a decade after the book’s publication, a sign of its lasting impact — said he generated “more mail than any guest we ever had on the show.”
All in all, he appeared 25 times on one of history’s most famous talk shows.
The Population Bomb arrived at the right time: economic growth was fast across the world, and so was population growth. Given finite resources, Ehrlich argued, the growing population (3.5 billion people in 1968) would outstrip food production and deplete the stock of key resources (think metals, fossil fuels, farmable land). Eventually starvation would occur, mass famines would follow, and social collapse would take place. Whatever technological progress could be achieved would only delay the inevitable — and only trivially.
To stave off the chain reaction, Ehrlich suggested, economic growth would need to slow down. Overpopulation should be curtailed by discouraging large families, possibly with coercive population control measures. However, Ehrlich did not stop there. He proposed that the Federal Communications Commission should discourage media that portrayed large families positively. He argued for immigration restrictions because allowing the poor of the world to come to America would accelerate their consumption and hasten the collapse. He argued that international aid should be tied to conditions requiring other nations to slow down population growth. All his policy proposals ended up being calls for greater coercion and greater control.
Ultimately, he was proven wrong. We now have more than twice as many humans on this planet as when Ehrlich wrote his doomsday prophecy. We live longer, healthier, wealthier, safer lives on a planet that has, on many dimensions (though not all), grown cleaner. None of the extreme predictions came to pass. Technological innovations were not trivial — they were exceptional. The Green Revolution and improvements in transportation and energy efficiency all staved off the predicted catastrophe.
Ehrlich’s intellectual nemesis — population economist Julian Simon — had long argued that humans were capable of producing economic growth and reducing environmental impacts, and of creating and innovating our way out of these problems. Humans, in Simon’s view, were The Ultimate Resource. In all the obituaries for Ehrlich, Simon is mentioned for his contrarian optimism (often labeled Cornucopianism) and for having bet on these outcomes against Ehrlich.
But, amid all the commemorations, claims of vindication, and assertions that Ehrlich was merely “premature,” something has been forgotten: Paul Ehrlich lost even within the environmental movement he had helped fuel. His views have been largely, if subtly and not always explicitly, abandoned in favor of those of Julian Simon.
To see why, think about the explicit premise that Ehrlich held: humans are mouths to feed, polluters, and ultimately trespassers in the ecosystem. In other words, for the biologist that he was, they were a form of parasite. If a population grows too large, correction must come through extinction since the parasite kills the host. Human ingenuity plays little role; at best, it is trivial. After all, a parasite is a parasite. If the parasite innovates, it is to be a better parasite. Humans are not creators or even equal creatures, but burdens upon the ecosystem.
From that premise, it follows naturally that some degree of population control (including coercion) could be justified. Indeed, this view warrants a normative stance on which some humans are dispensable, or may be subjected to treatment that most would find (and, where Ehrlich’s proposals were applied, did find) morally repellent.
In contrast, Simon’s view was that humans are not merely consumers. We are creators. Given the right institutions, we can solve environmental problems through innovation. The real question is not population, but the institutional framework within which people operate. In fact, Simon frequently pointed out that Ehrlich’s prediction could come true precisely because of the policies he proposed. Innovation rarely happens under compulsion; it requires open environments that encourage it. A libertarian, Simon argued that the most extreme environmental disasters occurred in coercive regimes such as the USSR, Communist China, and Castro-led Cuba. That coercion is similar in nature (though not in intent) to what Ehrlich desired. Simon also argued that in uncoerced, free-market economies, improvements and innovations emerge to solve problems as they arise.
In Simon’s view, institutions mattered above all else. The term is broad, to be sure. Classical liberals, conservatives, and libertarians — closer to Simon — tend to emphasize secure property rights, open markets, and free trade as drivers of innovation. Social democrats, centrists, and progressives, by contrast, often use “institutions” to mean a capable state that regulates to solve problems. In their view, markets alone are not sufficient. Government intervention, such as pricing pollution, is justified as a way to change behavior and spur innovation by aligning private incentives with social costs. In this sense, “institutions” carries very different meanings across perspectives.
But this is also where it becomes clear that Paul Ehrlich lost the argument. Consider the case of a carbon tax. Its justification rests on the idea that pricing pollution changes behavior and encourages innovation — not that humans are parasites, but that they respond to incentives. The premise is cooperation, not coercion born of scarcity panic.
All of these perspectives share a crucial assumption: humans are capable of solving problems. Environmental outcomes depend on incentives and institutions, not on reducing the number of “mouths.” In that sense, even Ehrlich’s opponents across the ideological spectrum converge on a common conclusion: humans are not parasites, but the ultimate resource.
This was not always the case. Environmental movements from the 1940s through the 1970s were far more receptive to Paul Ehrlich’s view. Many on the left and the right accepted his core premise, and for a time it was dominant. Today, it is not merely contested; it has largely been abandoned, even by those who neither cite nor sympathize with Julian Simon.
This is the real defeat of Ehrlich — even where one could think he had the most support, he lost ground. His core premises have been largely abandoned by all except the most extreme. In a way, Ehrlich died well after his ideas did.
And those ideas were truly horrible for human welfare. I do not rejoice in Ehrlich’s death. I will, however, dance on the tomb of his ideas, and you should too. And when dancing, I will wear my “Julian Simon Fan Club” pin.
The United States has exceptionally abundant energy resources and a rich portfolio of energy-producing technologies. Nevertheless, its power sector underdelivers due to institutionally distorted policy design rather than technical constraints. Federal and state rules increasingly pick favored technologies instead of setting clear goals for emissions, reliability, and cost, and then allowing markets to discover the least-cost mix of energy production. As energy demand rapidly increases from AI data centers, electrification, and population growth, this policy misalignment further erodes US economic adaptability while amplifying the likelihood of blackouts and higher prices.
Over time, American energy policy has become a patchwork of technology-specific rules that have distorted energy markets and led to the shortcomings we see today. Early statutes treated nuclear, fossil fuel, and hydropower very differently, assigning each technology a regulatory risk profile that did not match their comparable environmental or reliability characteristics. Programs such as Renewable Portfolio Standards (RPS), Renewable Fuel Standard (RFS), clean energy tax credits, and net metering further entrenched technology categories, often rewarding membership in a favored technology class rather than rewarding measurable reductions in pollution or improvements in reliability. These policy interventions reflect public choice dynamics where organized interests secure concentrated gains while costs are dispersed across ratepayers and taxpayers.
The result is a system that leaves substantial investment untapped and excludes viable, highly efficient technologies. More than 2,000 gigawatts of proposed projects remain stuck in interconnection queues, far more than current installed capacity. Even a partial build-out of this pipeline would significantly expand energy supply and reduce the risks of shortages and higher prices, particularly if complemented by more advanced nuclear, geothermal, and other low-carbon technologies. Research suggests that with technology-neutral incentives and better market design, advanced nuclear capacity alone could grow from roughly 100 gigawatts to several hundred gigawatts by mid-century, while geothermal and other production methods could also scale dramatically.
This paper calls for a course correction in energy policy grounded in economic fundamentals. Environmental harms should be addressed with technology-neutral tools such as emissions pricing and/or Tradable Performance Standards (TPS). Reliability should be procured explicitly through markets for firm capacity, flexibility, and other system attributes. Industrial and regional development goals, where pursued, should be transparent, time-limited, and evaluated on their own terms rather than hidden inside technology and climate agendas. Within this framework, nuclear and other low-carbon technologies can compete on equal footing with renewables and fossil plants, and regulated utilities can be rewarded for outcomes rather than for the size of their capital base.
If policymakers adopt this approach, the US energy system can shift from a hodgepodge of politically favored technologies toward a market-driven portfolio that is cleaner, more reliable, and increasingly affordable. Clear, neutral rules would unlock stalled investment, accelerate deployment of advanced nuclear and other innovative resources, and position the United States to meet surging electricity demand without sacrificing economic growth. The gains are not hypothetical; they are embedded in existing projects waiting for a governance framework that lets them compete.
Key Points
The United States possesses abundant energy resources and proven technologies. Policy design, however, constrains energy supply, reliability, and decarbonization.
Current US energy production rules frequently pick favored technologies through mandates and subsidies.
Such interventions increase costs, misdirect capital, and slow innovation relative to technology-neutral, outcome-based approaches.
Policy design deficiencies have resulted in more than 2,000 gigawatts of proposed projects being stalled in interconnection queues. Implementing streamlined, technology-neutral permitting and market rules could unlock a substantial portion of this capacity and make future projects less costly and more efficient.
Advanced nuclear, geothermal, and other low-carbon resources can significantly reduce the cost of deep decarbonization if licensing is streamlined and they are allowed to compete fairly for reliability and emissions-focused payments.
A coherent reform package would price environmental externalities in neutral terms, procure reliability through explicit markets for system services, and overlay performance-based regulation on local monopolies while limiting opaque industrial policy.
If these changes are implemented, Americans can expect lower long-run electricity costs, fewer outages, faster growth in energy-intensive sectors, and meaningful reductions in pollution without sacrificing economic performance.
1. Introduction
The modern US power sector combines resource abundance and technological potential with persistent institutional constraints. The US possesses ample oil and natural gas, a significant existing nuclear fleet, substantial renewable resources, and mature transmission networks; yet concerns over rising system and electricity costs, reliability, and decarbonization persist. The puzzle is not one of technological deficiency but of policy design. Energy and environmental rules now constitute one of the largest single clusters of federal regulation, resulting in higher prices for Americans and slower economic growth.[1] The United States increasingly governs electricity through instruments that privilege specific technologies and industrial constituencies rather than through neutral, outcome-based rules. As a result, the sector’s core economic question — how to allocate capital to the most reliable, affordable, and efficient projects — is frequently subordinated to policies that favor technology selection and global climate objectives. This approach ultimately undermines all three goals. Given energy’s fundamental role in economic production and long-run prosperity, energy policy decisions play a critical role in shaping the United States’ growth trajectory.[2]
In 2024, the United States generated the second most energy of any country in the world, 4,391 terawatt-hours (TWh) compared to China’s 10,087 TWh.[3] Despite record levels of domestic energy production, growth in US energy demand is increasingly outpacing supply.[4] The Energy Information Administration projects record electricity demand in 2026, driven largely by power-intensive data centers supporting artificial intelligence and cloud computing, as well as electric vehicles and building electrification.[5] America’s existing energy grid is already strained, and the Department of Energy warns that blackouts could increase 100-fold by 2030 if substantial energy capacity is not added to the grid in the next five years.[6] Rising demand thus heightens the cost of policy error under a regime where what “counts” as clean or reliable power is increasingly defined by statute and tax code rather than by physics, prices, or performance.
A defining feature of recent US energy policy is the conflation of distinct objectives: advancing specific energy production technologies (notably wind and solar), internalizing environmental externalities (notably greenhouse gases and other pollutants, often at a global level), and advancing industrial or regional goals such as domestic manufacturing or rural income support.[7] Instead of treating these as analytically separable “ends” with correspondingly distinct instruments, legislation and regulation frequently bundle them into technology-specific mandates and subsidies. The resulting interventions include Renewable Portfolio Standards (RPS) that privilege particular resource categories, volumetric biofuel mandates, tax credits restricted to chosen technologies, domestic-content bonuses, and equipment-focused incentives for electric vehicles and storage. These policies embed judgments about “winners” ex ante, rather than defining emissions and reliability constraints and allowing decentralized agents to identify least-cost means of compliance.[8]
A transition from technology-picking policies to technology-neutral rules, coupled with reforms to permitting and interconnection processes, would yield significant near-term increases in the United States’ effective energy generation capacity. More than 1,400 gigawatts (GW) of generation capacity and 890 GW of storage are currently stalled in interconnection queues, spanning solar, wind, and natural gas technologies. Together, this is equivalent to approximately 960 Hoover Dams. This total far exceeds existing installed capacity, suggesting that even a partial realization of these projects would materially increase electricity output if interconnection and approval processes were accelerated.[9] Federal modeling suggests that advanced nuclear capacity could expand from approximately 100 gigawatts today to roughly three times that level by mid-century under conservative assumptions, provided that licensing processes are streamlined and long-term project financing is secured.[10] Comparable opportunities exist within renewable energy. The Department of Energy’s GeoVision analysis indicates that geothermal power could expand by an order of magnitude by 2050, contingent on advances in drilling technologies and more streamlined permitting processes.[11] For US households and firms, these changes would translate into lower electricity prices as lower-cost resources enter the market; improved reliability through the addition of firm capacity and transmission; accelerated growth in electricity-intensive sectors such as AI data centers and advanced manufacturing; and improved local air quality as higher-emitting generators are displaced.
Insights from economic theory and empirical analysis indicate both the fragility of the current US energy approach and the mechanisms through which its energy potential could be more fully realized. First, the Hayekian knowledge problem implies that regulators lack the information and adaptive capacity needed to select future cost-effective technologies ex ante. Price signals and the profit and loss mechanism in competitive markets are much better suited to aggregate dispersed information about the costs, risks, and innovation trajectories of energy technologies.[12] Second, the public choice literature, including Stigler’s (1971) theory of economic regulation, Peltzman’s (1976) extension, and Buchanan and Tullock’s (1962) work on constitutional political economy, shows that when regulation creates concentrated benefits and diffuse costs, it predictably becomes an object of rent-seeking and capture.[13] Technology-specific energy and climate policies inherently generate such economic rents. Eligibility for targeted tax credits, carve-outs within regulatory mandates, or guaranteed procurement classifications confers substantial benefits on a narrow set of firms or regions, creating strong incentives to preserve these arrangements even as more efficient alternatives emerge.
Third, work in environmental and energy economics indicates that policies that favor particular technologies are systematically less cost-effective than neutral, performance-based approaches. Fischer and Newell’s (2008) comparative analysis of climate policies finds that technology-specific subsidies and portfolio standards generally yield higher marginal abatement costs (the cost of reducing one more unit of emissions) and weaker innovation incentives than emissions pricing or tradable performance standards.[14] Additional research demonstrates how technology-focused climate policies are vulnerable to leakage (where local environmental regulation fails to reduce overall global pollution because the regulated activity shifts location or form) when applied to global pollutants like CO₂.[15] These results are not merely theoretical; they directly map onto the structure of US regulations such as Renewable Portfolio Standards (RPS), the Renewable Fuel Standard (RFS), and multiple overlapping clean energy tax expenditures.[16] Mandates and subsidies tied to particular technologies interact with rate-base incentives and planning requirements, encouraging compliance via eligible categories rather than minimizing total system cost subject to reliability and environmental constraints.
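To make the cost-effectiveness logic concrete, here is a toy two-technology abatement model (a minimal sketch with illustrative cost parameters of my own choosing, not numbers from the cited studies). A uniform emissions price induces each technology to abate until its marginal abatement cost (MAC) equals the price, which equalizes MACs across technologies and meets the abatement target at least cost; a technology-specific quota that forces an even split achieves the same abatement at a higher total cost.

```python
# Toy two-technology abatement model (illustrative assumptions only).
# Technology i abates a_i tons at quadratic cost C_i(a) = 0.5 * c_i * a^2,
# so its marginal abatement cost is MAC_i(a) = c_i * a.

def total_cost(c1, c2, a1, a2):
    """Combined abatement cost for the two technologies."""
    return 0.5 * c1 * a1**2 + 0.5 * c2 * a2**2

c1, c2 = 1.0, 3.0   # assumed cost parameters: technology 2 is the costlier abater
A = 12.0            # required total abatement (tons)

# Uniform emissions price p: each technology abates until MAC = p, i.e. a_i = p / c_i.
# Picking p so total abatement equals A equalizes MACs, the least-cost allocation.
p = A / (1 / c1 + 1 / c2)            # price that delivers exactly A tons
a1_price, a2_price = p / c1, p / c2  # cheap tech abates more, costly tech less

# RPS-style quota in miniature: mandate an even split regardless of relative costs.
a1_quota = a2_quota = A / 2

cost_price = total_cost(c1, c2, a1_price, a2_price)
cost_quota = total_cost(c1, c2, a1_quota, a2_quota)
print(cost_price, cost_quota)  # same abatement, higher total cost under the quota
```

With these assumed parameters the price-based allocation costs 54.0 while the even quota costs 72.0 for identical total abatement, illustrating why Fischer and Newell find higher marginal abatement costs under technology-specific instruments.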
This paper frames US energy policy problems as fundamentally economic: government involvement that selects technologies and bundles objectives produces predictable information and incentive failures. The core analytical argument is that the ends must be separated to achieve each one more effectively. Separating objectives does not eliminate tradeoffs, but it makes them explicit: pursuing reliability, emissions reduction, and economic growth may still involve tensions, yet those tensions are revealed through prices, procurement costs, and performance metrics rather than obscured within technology mandates. Environmental externalities can be addressed through technology-neutral mechanisms, such as an emissions price or tradable performance standard, that apply uniformly to all resources based on measured impacts. Reliability and resource adequacy can be governed by explicit, market-compatible procurement of system attributes (firm capacity, flexibility, and locational deliverability), again on a technology-neutral basis. Distributional and industrial objectives, where pursued, can be made transparent and subject to independent evaluation rather than embedded within nominally environmental or reliability programs. To prevent policymakers from implicitly re-ranking objectives, incentives and rules must be anchored to observable outcomes, administered through durable institutions, and constrained by clear statutory mandates that limit discretionary technology favoritism. Under such a regime, technologies like nuclear power that can provide low-emission firm capacity would neither be privileged nor penalized by categorical rules; instead, permitting and compensation would be tied to verifiable contributions to system objectives.
2. A Short History of US Energy Policy and the Current Policy Framework
2.1 The Rise of Targeted Intervention
Twentieth-century US electricity markets were built through successive legal regimes that assigned distinct risk–return profiles to different technologies. Over time, these frameworks shaped investment incentives and infrastructure development, channeling capital toward some generation sources while constraining others. These policies produced the energy mix we have today: roughly 60 percent of US utility-scale generation comes from fossil fuels, 18 percent from nuclear power, and 21 percent from renewable sources.[17]
These aggregate shares rest on institutional choices that predate contemporary climate debates, combined with the priorities of each successive administration.
Figure 1. US Energy Information Administration, Electric Power Annual
By the early twentieth century, federal and state authorities shifted from a light touch to active management of energy markets. In 1930, the Texas Railroad Commission imposed production reductions to lift crude prices and restrain competitive drilling. Under the New Deal, this framework was expanded through the National Industrial Recovery Act (NIRA) of 1933, which suspended normal competition in favor of industry-administered “fair trade” codes, initially adopted by the oil industry.[18] After the Supreme Court invalidated the NIRA in 1935 in the case A.L.A. Schechter Poultry Corp. v. United States, major oil producers supported the Connally Hot Oil Act, which provided federal backing for state programs that restricted output and propped up prices. This approach was favored over proposals for public-utility-style rate regulation of oil companies.[19]
The Tennessee Valley Authority Act of 1933 facilitated the expansion of large, federally backed hydropower projects and publicly financed generation and transmission, effectively socializing the investment risks associated with those assets.[20] In doing so, the federal government assumed a direct role in shaping the scale and financing of electricity infrastructure. Civil nuclear power was established under a distinct statutory regime, the Atomic Energy Act of 1946, which centralized ownership and control of fissionable materials in the federal government.[21] This framework reflected both national security concerns and deep uncertainty about technological and catastrophic risks. The Atomic Energy Act of 1954 subsequently allowed private participation under intensive federal licensing and information controls.[22] The Price–Anderson Nuclear Industries Indemnity Act of 1957 then created a capped, pooled liability system for nuclear operators, addressing catastrophic-risk insurability while codifying nuclear energy’s exceptional legal treatment relative to other generation.[23] Taken together, these measures embedded technology-specific treatment into US energy governance decades before present climate policy, shaping which investments confronted regulatory scarcity and which operated under more predictable conditions.
Postwar federal policy thus combined public ownership with tight regulatory control for selected emerging technologies. Large hydro and TVA-era projects benefited from direct public financing and statutory mandates, lowering financing and demand risk.[24] Nuclear facilities, by contrast, emerged within a framework that required detailed federal approval at each stage: centralized control of materials, security clearances, construction permits, operating licenses, and participation in the Price–Anderson liability pool. Historical work along with official US Nuclear Regulatory Commission histories shows how, from the 1960s through the 1970s, expanding safety requirements, increasingly formalized hearings, and growing public contestation lengthened licensing timelines and increased procedural uncertainty.[25] This regime raised the cost of capital for nuclear projects, making their viability unusually sensitive to regulatory and political risk rather than solely to underlying engineering or market fundamentals.
A further complication in the production of nuclear energy was that civilian nuclear policy grew out of a weapons-first bureaucracy. The Atomic Energy Act of 1946 centralized ownership of “fissionable materials” in the federal government, embedding secrecy and defense priorities that hindered early commercial deployment.[26] Eisenhower’s 1953 “Atoms for Peace” speech opened a diplomatic and legal pathway for power reactors, but institutional funding and expertise continued to tilt toward military priorities, namely naval propulsion and fissile-material production, leaving civilian projects to navigate security restrictions and case-by-case licensing.[27] After India’s 1974 “Smiling Buddha” nuclear device test, US policy tightened proliferation controls through President Carter’s 1977 decision to defer commercial reprocessing and the Nuclear Non-Proliferation Act of 1978, further constraining the civilian fuel cycle even as these moves targeted weapons risks.[28] In short, a governance architecture built for weapons shaped civilian nuclear energy, diverting resources and raising regulatory frictions precisely as other generation sources faced more routine permitting.[29]
Fossil-fuel development followed a different trajectory. Oil and gas producers benefited from comparatively stable and technology-favorable fiscal and legal arrangements. These included percentage depletion allowances and the expensing of intangible drilling costs, which reduced effective tax rates on exploration and extraction.[30] Federal and state leasing regimes, most notably the Mineral Leasing Act of 1920 for onshore resources and the Outer Continental Shelf Lands Act of 1953 for offshore tracts, established standardized mechanisms for oil and gas access. These frameworks relied on competitive auctions, fixed primary lease terms, and clearly defined royalty schedules, facilitating exploration and production across large prospective areas.[31] Although major environmental statutes and spill-driven reforms increased compliance costs, they did not typically subject individual oil and gas projects to the same case-by-case licensing uncertainty or centralized material control seen in the nuclear sector.[32] As a result, capital allocation in the oil and gas sector was shaped by these favorable fiscal and legal arrangements, while still operating through market mechanisms such as competitive leasing, price signals, and private risk-bearing.
This asymmetry between the US government’s regulatory approach to different energy sources constitutes an early and important manifestation of technology-differentiated policy. Two carbon-intensive technologies and one low-carbon technology were placed under sharply different regulatory risk profiles that bore little relationship to their respective contributions to long-run climate damages or local health risks. This initial divergence in institutional treatment set the stage for subsequent policy layers — RPS, biofuel mandates, technology-specific tax credits — that amplified rather than corrected distortions in capital allocation across nuclear, fossil, and later renewable resources.[33]
2.2 Crisis, Partial Liberalization, and the Foundations of Tech Picking
The 1973 Arab oil embargo and the 1979 Iranian Revolution triggered supply shocks that, amplified by price controls and rising demand, culminated in the oil crises of the 1970s and spurred a wave of federal interventions reshaping electricity and fuel markets.[34] Congress enacted the Energy Policy and Conservation Act of 1975, introducing strategic petroleum reserves and efficiency measures, created the Department of Energy through the Department of Energy Organization Act of 1977, and adopted the Public Utility Regulatory Policies Act of 1978 (PURPA).[35]
PURPA required utilities to purchase power from “qualifying facilities” (QFs), cogeneration plants and small power producers satisfying specified fuel, size, and ownership criteria at administratively determined “avoided cost” rates.[36]
This mandate was an early attempt to inject competition and diversify the generation base within vertically integrated monopoly systems.[37] PURPA’s eligibility rules and regulated “avoided cost” calculations allowed regulators to specify ex ante which types of projects merited guaranteed offtake and at what price. This framework played a central role in fostering the early development of markets for non-utility generation and renewable energy in the United States.
Paralleling developments in oil markets, the nuclear energy sector confronted multiple crises during the 1970s that significantly influenced subsequent policy. Major nuclear accidents transformed risk perceptions and regulatory responses, most notably following the 1979 Three Mile Island incident and the 1986 Chernobyl disaster.[38] Although numerous historical and technical assessments of the accidents — including Samuel Walker’s Three Mile Island, Nuclear Regulatory Commission reviews, UNSCEAR analyses, and OECD/NEA reports on Chernobyl — found limited or context-specific health impacts of the accidents, the fallout resulted in substantial tightening of licensing procedures and safety reviews of any future nuclear energy projects.[39] These developments lengthened regulatory lead times and introduced substantial uncertainty for nuclear projects.
While contemporaneous research examined the impacts and risks of nuclear meltdowns, an expanding environmental health literature quantified the far greater external costs associated with coal and oil combustion, with comparative studies indicating orders-of-magnitude higher mortality per kilowatt-hour relative to nuclear power.[40] The regulatory response nonetheless focused on nuclear-specific restrictions more than on mitigating the negative externalities generated by any power source. This asymmetry illustrates how translating safety and environmental concerns through non-neutral regulation can invert the social-cost ranking of technologies and misdirect capital from lower-externality options.
After decades of asymmetric risk perception and non-neutral regulation, the next policy wave shifted from safety-driven constraints to market design and fiscal tools that continued to privilege categories over outcomes. The 1990s and early 2000s introduced partial restructuring of electricity markets and a new layer of technology-contingent fiscal incentives. The Federal Energy Regulatory Commission mandated open-access transmission and fostered the creation of independent system operators (ISOs) and regional transmission organizations (RTOs).[41] These reforms introduced wholesale competition while leaving much of the distribution and retail sale of energy under regulation.[42] Within this framework, Congress enacted targeted tax credits including the Production Tax Credit (PTC) for wind and closed-loop biomass energy sources, and later the Investment Tax Credit (ITC) for solar and related technologies.[43]
Several states explicitly mirrored or complemented federal clean-energy tax incentives by adopting their own technology-specific credits and abatements, though the form and generosity varied. For example, New York layered state production- and investment-style incentives onto the federal PTC and ITC through NY-SUN and related tax credits. California combined state investment incentives and rebates with the federal ITC via the California Solar Initiative, and Iowa adopted an early state-level wind energy production tax credit that closely paralleled the federal PTC.[44] Unsurprisingly, these credits significantly increased wind and solar deployment.[45] Yet the incentives were categorical: they rewarded wind and solar as such, not the delivery of verifiable emissions reductions or reliability services relative to alternatives. These tax credits essentially enshrined technology picking into federal fiscal policy.
State Renewable Portfolio Standards (RPS), widely adopted in the late 1990s and 2000s, reinforced technology-differentiated regulation by converting eligibility rules into binding procurement mandates for utilities.[46] Early and influential examples include Texas’s 1999 RPS centered on tradable renewable energy credits, California’s 2002 RPS with technology-specific eligibility and escalating targets, and New York’s 2004 RPS requiring load-serving entities to procure qualifying renewable generation, each embedding technology selection directly into state electricity markets.[47] Many RPS programs require retail suppliers to procure a specified share of electricity from resources classified as “renewable,” enforced through renewable energy certificates. RPS policies increased renewable generation across many kinds of projects, which exhibited wide variation in cost, emissions, and design quality.[48]
Figure 2. State Renewable Portfolio Standards. Source: ClearPath, “What Is a State Renewable Portfolio Standard?”
In most cases, eligibility is defined by technology type, not by marginal abatement cost, which measures the cost of reducing one additional unit of emissions, or by system attributes such as capacity value, the ability of a power source to generate electricity during peak demand. Existing nuclear plants, despite being zero-carbon and dispatchable (they can adjust their output to meet electricity demand), are mostly excluded from these standards, while new wind and solar can stack RPS demand with federal tax benefits.[49] This institutional design exemplifies regulatory selection: the statutory definition of “renewable,” rather than a neutral comparison of emissions and reliability performance, determines which investments are rewarded, further embedding winner-picking into the core of clean energy policy.
2.3 Biofuels, Net Metering, and the Deepening of Goal Mixing
The Renewable Fuel Standard (RFS), enacted in 2005 and expanded in 2007, extended technology-specific mandates into transportation fuels. In a stated effort to reduce greenhouse gas emissions, the statute required biofuels, chiefly corn ethanol, to be blended into gasoline and diesel. Subsequent empirical evaluation, however, raised questions about whether these mandates delivered the environmental benefits originally claimed. A 2008 study found that once indirect land-use change is incorporated, conventional corn ethanol can increase, rather than decrease, net greenhouse-gas emissions relative to gasoline.[50] The National Research Council’s comprehensive review of the RFS reaches similar conclusions about climate benefits while emphasizing distributional gains for specific agricultural producers.[51] Research shows how the mandate transfers surplus toward corn and biofuel producers and contributes to higher food and feed prices.[52] Together, this literature illustrates how a policy framed as advancing environmental and energy-security objectives can function in practice as a durable, technology-specific transfer to a concentrated constituency. More neutral instruments, such as carbon pricing or performance standards, could achieve larger emissions reductions at lower overall welfare cost.
Net metering deepened this pattern. It is a billing system that allows consumers with renewable energy sources, such as rooftop solar panels, to receive credit for the extra electricity they send back to the grid. The practice spread from state to state from the late 1980s through the 1990s, first in Minnesota and California, then across most jurisdictions by the mid-2000s.[53] Most programs credited a household’s excess rooftop generation at the full retail rate on a monthly “net” basis, using standardized interconnection rules and bi-directional meters.[54] As adoption accelerated, state commissions began revising designs. California’s “Net Energy Metering (NEM) 2.0” and “NEM 3.0” lowered export credits toward time-varying values and added one-time interconnection fees and successor tariffs.[55] Hawaii closed retail NEM in 2015 and moved customers to CGS/Smart Export structures.[56] Massachusetts added the Solar Massachusetts Renewable Target (SMART) program, which provides declining incentives for solar projects layered on top of crediting.[57] Some states piloted “value-of-solar” tariffs that pay a posted rate reflecting avoided energy, losses, capacity, and environmental adders, with Austin Energy’s being the canonical early example.[58] The result by the late 2010s was a patchwork of incentives: some states offered 1:1 retail credit, others linked credit to time-of-use (TOU) or avoided-cost values, and in island grids like Hawaii, regulators replaced retail-rate net metering with sub-retail export credits and companion tariffs that reward pairing rooftop solar with batteries, allowing excess daytime solar to be stored and released during evening peaks instead of overloading the midday grid.[59]
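The billing difference between retail-rate netting and a sub-retail export credit can be made concrete with a short sketch. The rates and volumes below are hypothetical, chosen only to illustrate how the same household fares under the two crediting designs:

```python
# Illustrative net-metering arithmetic (all rates and volumes hypothetical).
# Under retail-rate netting, exported kWh offset imported kWh 1:1; under a
# sub-retail export credit, exports earn less than the retail import price.

def monthly_bill(import_kwh, export_kwh, retail_rate, export_rate):
    """Return the customer's net monthly charge in dollars."""
    return import_kwh * retail_rate - export_kwh * export_rate

IMPORTS = 600    # kWh drawn from the grid over the month
EXPORTS = 400    # kWh of excess rooftop solar sent back to the grid
RETAIL = 0.30    # $/kWh retail price (hypothetical)
AVOIDED = 0.08   # $/kWh sub-retail export credit (hypothetical)

full_retail = monthly_bill(IMPORTS, EXPORTS, RETAIL, RETAIL)   # 1:1 netting
sub_retail = monthly_bill(IMPORTS, EXPORTS, RETAIL, AVOIDED)   # NEM-3-style

print(f"Retail-rate netting: ${full_retail:.2f}")   # $60.00
print(f"Sub-retail credit:   ${sub_retail:.2f}")    # $148.00
```

The gap between the two bills is the implicit subsidy that full retail-rate crediting confers, a cost ultimately recovered from non-participating customers.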
Alongside net metering, other programs promoted renewable energy. State Renewable Portfolio Standards (RPS) set escalating percentages of retail sales to be met with legislatively defined “renewable” resources and enforced compliance through tradable renewable energy certificates (RECs).[60] At the federal level, the Production Tax Credit and the Investment Tax Credit reduced capital costs for eligible projects, while the Renewable Fuel Standard set annual biofuel obligations for refiners and importers.[61] Many states introduced community-solar programs to broaden access and adopted revenue decoupling, fixed-charge redesigns, and minimum bills to stabilize utility finances as volumetric sales growth slowed.[62]
For example, Minnesota and Colorado implemented large community-solar programs, while California and New York adopted revenue-decoupling and revised fixed charges to maintain utility cost recovery. Empirical evaluations suggest these programs had mixed effects on retail electricity prices, often increasing average rates or shifting costs toward nonparticipating customers, while delivering distributional benefits and revenue stability at the expense of higher system and administrative costs.[63] By the late 2010s, jurisdictions were updating legacy rules, tightening interconnection timelines, adjusting export credits, and refining REC eligibility, while leaving the core architecture of technology-specific credits, mandates, and retail-rate net energy metering largely intact.[64]
2.4 The Inflation Reduction Act, Nuclear’s Position, and Current Energy Policy Trends
The Biden Administration’s 2022 Inflation Reduction Act (IRA) continues to be the backbone of federal clean-energy incentives and, combined with decades of layered regulations, results in the regulatory mix we have today.[65] Current US energy regulation is extensive but unevenly structured across fuel types: fossil energy is governed by broad, cross-cutting environmental, leasing, and safety regimes (e.g., the Clean Air Act, Clean Water Act, Mineral Leasing Act, and pipeline-safety rules), while nuclear power is subject to a uniquely dense, technology-specific licensing and safety framework concentrated in Title 10 of the Code of Federal Regulations and administered by the Nuclear Regulatory Commission. Renewable energy, by contrast, faces comparatively fewer stand-alone safety regulations. It is regulated primarily through incentive-based instruments, such as tax credits, renewable portfolio standards, net-metering rules, and siting requirements, and implemented through a fragmented mix of federal tax law and state-level utility and land-use regulation.[66]
In 2025 the law’s new “tech-neutral” credits went live, namely the Clean Electricity Investment Credit and the Clean Electricity Production Credit. These credits do not reward a specific energy source, such as a wind turbine or a solar panel, but are intended to promote any electricity that has very low or zero greenhouse-gas emissions, regardless of the technology that produces it, including nuclear.[67] To qualify, projects must meet emission standards and must be placed in service after December 31, 2024. Projects can receive bonus credits for meeting certain wage and apprenticeship requirements and/or being in low-income communities or on Indian land.[68] Federal tax policy now rewards electricity providers for measured emissions performance. Developers still must deal with ordinary project hurdles such as financing, siting, and building schedules, but the credit rules themselves are marginally clearer and more inclusive than in 2023–24.[69]
The Trump administration has focused oil and gas policy on expanding leasing and development, while curtailing many solar and wind projects. The administration restarted work on a new five-year offshore leasing plan under the Outer Continental Shelf Lands Act that reopens areas for competitive auctions after several years of limited sales.[70] The One Big Beautiful Bill Act (OBBBA) requires the sale of more oil and gas leases, including at least thirty in the Gulf of America by 2040.[71] The Department of the Interior also moved to reopen leasing in Alaska’s Arctic National Wildlife Refuge (ANWR) Coastal Plain and to advance related rights-of-way for oil and gas leasing. Those steps would allow companies to bid for exploration and production in the area, subject to environmental reviews.[72] The administration plans to offer offshore leases on multiple coasts in 2026, which would increase the number of tracts available for oil drilling bids.[73] At the same time, the administration paused or slowed numerous federal wind-energy leasing and permitting actions while it re-examined costs and local impacts. That pause created uncertainty for new offshore wind areas as oil and gas leasing accelerated.[74]
Nuclear policy has seen the most substantive recent changes. In May 2025 the President signed an executive order that directs the Nuclear Regulatory Commission (NRC) to streamline its processes and to reorganize how it reviews new reactors.[75] The Department of Energy highlighted related actions to rebuild the nuclear supply chain and to speed up testing and licensing.[76] The NRC approved an innovative small modular reactor (SMR) design in May 2025.[77]
SMRs are seen as the future of nuclear energy. Compared with today’s large reactors, they are small plants designed to be built in factories and then installed on site, saving time and money.[78] The NRC also continued work on a proposed rule that would create a risk-informed, technology-inclusive licensing framework for advanced reactors, meaning the rules would focus on measurable safety outcomes rather than a specific reactor type.[79] For large reactors, the NRC approved key steps that allow Holtec to move forward with restarting the Palisades plant in Michigan. If completed, Palisades would be the first US reactor to return to service after a shutdown.[80]
Taken together, these steps point to a federal tilt toward low-carbon nuclear power while oil and gas leasing expands.[81] Despite these strides, policy inertia from a bygone era persists. The Department of Energy continues to downblend uranium-233 (U-233), a key input for advanced nuclear reactors.[82] Preserving U-233 would support the development of thorium molten-salt reactors, a technology the US pioneered but later abandoned.[83] For decades, weapon-focused policies of the 1940s–60s prioritized uranium and plutonium and sidelined thorium, which is highly efficient for energy production but offers little value for weaponry.[84]
Despite recent modest improvements, US energy policy still routinely conflates ends and means. Much of federal energy policy continues to use technology-specific instruments to pursue environmental, reliability, and industrial objectives simultaneously, thereby aggravating information problems and inviting rent-seeking. A more coherent framework would separate these ends by addressing environmental harms through uniform, technology-neutral constraints on emissions or performance; ensuring reliability and resource adequacy through explicit, competitively procured systems; and eliminating industrial policy objectives. If industrial policy is undertaken, it should be pursued transparently and with explicit evaluation, rather than through opaque cross-subsidies embedded in energy regulation.
Subsequent sections elaborate how such separation can reduce welfare losses, discipline public choice vulnerabilities, and allow markets to discover efficient roles for nuclear and other technologies.
3. Diagnosing the Policy Failure
The central failures of US energy policy that have left roughly 2,000 gigawatts of proposed capacity idle reflect economic and institutional failures, not technological limits. They arise from (i) technology-picking instruments that ignore variation in the cost of reducing greenhouse gas emissions, (ii) the mixing of environmental and industrial-policy goals with energy policy, and (iii) the interaction of these instruments with monopoly regulation. Public choice and information economics predict exactly the pattern observed: costly decarbonization, misallocated capital, and a persistent preference for politically salient technologies over efficient ones.
3.1 Tech Picking vs. Outcomes
A large body of both theoretical and empirical literature finds that, given that the government is involved in energy policy, technology-specific mandates and subsidies are second-best relative to technology-neutral price or performance instruments. This distinction matters because the choice of policy instrument determines whether incentives align with least-cost abatement or become tethered to particular technologies and interest groups. Fischer and Newell (2008) show that targeted subsidies and portfolio standards generally achieve a given emissions reduction at higher welfare cost than uniform emissions pricing or tradable performance standards, and that they distort innovation toward subsidized options rather than least-cost abatement.[85] Friedrich Hayek’s (1945) knowledge problem underscores why. Regulators cannot reliably anticipate future relative costs, system-integration needs, or innovation trajectories because they lack a reliable feedback mechanism. Market signals emerging from decentralized choice are better at aggregating information.[86] Prices, profits, losses, and entry and exit decisions provide continuous feedback about scarcity, performance, and opportunity cost, allowing decentralized markets to coordinate investment and innovation far more effectively than administrative judgment. This theoretical insight is borne out empirically in observed policy outcomes across the US energy sector. Gillingham and Stock’s (2018) survey of mitigation costs likewise concludes that overlapping, technology-specific policies in the US have raised abatement costs relative to more neutral designs.[87]
Concrete US instruments exhibit these predicted distortions. Renewable Portfolio Standards (RPS) that credit only designated “renewables” while excluding nuclear or existing low-carbon resources reward projects based on category membership, not marginal abatement or reliability contribution. Empirical work by Carley (2009) and by Wiser et al. (2016) shows that RPS policies do increase wind and solar deployment, but also reveals considerable variation in costs and limited alignment with least-cost abatement once exclusions and design details are accounted for.[88] Ethanol volume mandates under the Renewable Fuel Standard (RFS) represent another form of tech picking. Studies by Searchinger et al. (2008) and the National Research Council find that once land-use change and market responses are incorporated, conventional corn ethanol offers small or negative climate benefits while clearly transferring income to specific agricultural interests.[89] These category-based programs steer dollars toward labels rather than the cheapest verified tons. This pattern is evident in EV purchase subsidies, whose climate benefits vary widely with grid mix and often come at higher abatement cost, and in storage mandates, whose value depends on when and where services are delivered rather than on installed megawatts alone.[90]
Such technology-specific policies function as constrained optimization problems with arbitrary bounds: regulators pre-select eligible technologies, then let markets optimize only within that subset. This structure predictably produces higher costs than allowing the market to price the full technology set, given the stated outcome constraints. When laws pre-select “eligible” technologies, investors face regulatory risk on top of normal market risk: rules or relative costs can change, leaving projects unviable even if they were compliant when built. In utility regulation, losses often don’t stay with the investor because commissions allow rate-base treatment, meaning the project’s undepreciated cost is added to the utility’s regulated asset base and recovered from customers over time with an allowed return. Or commissions approve stranded-cost mechanisms, which are special charges on customer bills that compensate utilities for past investments that became uneconomic after policy or market shifts.[91]
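The rate-base mechanism described above can be illustrated with a stylized calculation. The project cost, authorized return, and depreciation schedule below are all hypothetical, but the arithmetic shows why losses on a rate-based project flow to customers rather than investors:

```python
# Stylized cost-of-service recovery (hypothetical figures). A project added
# to the rate base earns the authorized return on its undepreciated cost and
# recovers depreciation from customers, regardless of market performance.

def annual_revenue_requirement(undepreciated_cost, allowed_return, depreciation):
    """One year's return on rate base plus depreciation, recovered in rates."""
    return undepreciated_cost * allowed_return + depreciation

PROJECT_COST = 1_000_000_000   # $1B plant placed in the rate base
ALLOWED_ROR = 0.095            # 9.5% authorized rate of return (hypothetical)
LIFE_YEARS = 40                # straight-line depreciation horizon

depreciation = PROJECT_COST / LIFE_YEARS
remaining = PROJECT_COST
total_recovered = 0.0
for year in range(LIFE_YEARS):
    total_recovered += annual_revenue_requirement(remaining, ALLOWED_ROR, depreciation)
    remaining -= depreciation

# Ratepayers ultimately fund the full capital cost plus the allowed return,
# whether or not the plant's output ever proved economic.
print(f"Total recovered from ratepayers: ${total_recovered/1e9:.2f}B")
```

Under these assumed numbers, customers repay roughly three times the original capital outlay over the asset’s life, which is why rate-base treatment severs project risk from investor returns.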
Regulatory features have direct implications for how firms time and scale irreversible capital investments under uncertainty. Work by Pindyck (1991) and Dixit and Pindyck (1994) shows that investing in long-lived, hard-to-reverse assets carries an “option value of waiting.” This means that when uncertainty is high, delaying investment can be efficient; technology-specific mandates compress that option value and can trigger premature, welfare-reducing build-outs.[92] Once such mandates are in place, investment decisions are no longer disciplined solely by market signals but increasingly by regulatory expectations. The work of Stigler (1971), Peltzman (1976), and Kornai, Maskin, and Roland (2003) shows that when the benefits of policy mandates accrue to concentrated groups while costs are widely dispersed, public-choice incentives predict systematic political support for such arrangements. Combined with soft-budget constraints and the expectation that regulators will permit cost recovery even when projects underperform, these dynamics encourage overinvestment in the protected subset.[93] The resulting allocation severs the link between project performance and financial accountability. Thus, the upside is privatized while downside risk is shifted to ratepayers and taxpayers.
3.2 Goal Mixing and Public Choice Distortions
A second policy failure that underpins US energy policy is the routine fusion of distinct objectives into single policy instruments. Emissions reduction, technology promotion, regional development, and industrial policy are frequently bundled together through technology-specific mandates and subsidies rather than pursued with separate, purpose-built tools. In their research, Aldy and Stavins (2012) argue that climate policy is more effective when environmental objectives are pursued with dedicated, transparent instruments rather than embedded in overlapping subsidies aimed at co-benefits.[94] This analytical distinction clarifies why policies designed to accomplish multiple goals simultaneously often perform poorly on each dimension. For example, instead of EV-only purchase rebates and domestic-content bonuses bundled into tax credits, a uniform carbon price or tradable performance standard would directly reward verified emissions reductions regardless of technology (Aldy & Stavins 2012). Problems compound further when bundled objectives are pursued at subnational scales for pollutants with global damages. Bushnell, Peterman, and Wolfram (2008) show that subnational policies targeting global pollutants via local technology requirements are especially prone to leakage and inefficiency.[95] Their work illustrates how state renewable or low-carbon fuel mandates can shift high-emissions production to neighboring states or induce credit “reshuffling,” reducing in-state emissions on paper without cutting total emissions economy-wide (Bushnell, Peterman & Wolfram 2008).
Public choice theory explains why mixed-goal, technology-picking instruments persist. Stigler’s (1971) theory of regulation and Peltzman’s (1976) extension of that theory predict that regulation tends to allocate benefits to organized interests when those benefits are concentrated and costs diffuse.[96] Buchanan and Tullock’s (1962) constitutional political economy, and Olson’s work on collective action, similarly emphasize how small, cohesive groups secure favorable rules, while large groups of consumers face high coordination costs.[97] These frameworks point to a systematic bias toward policies that bundle distributive benefits with regulatory goals. In practice, this creates “coalition goods” when policies are bundled to satisfy multiple organized constituencies like manufacturers, fuel producers, unions, and regional blocs, making technology-specific designs politically cheaper to pass than outcome-based rules. Moreover, information asymmetries and revolving-door expertise further tilt the process toward insiders who can draft eligibility criteria, measurement rules, and bonus provisions that quietly channel rents. The familiar “bootleggers-and-Baptists” dynamic then sustains the policy: moral or environmental justifications provide cover, while commercial beneficiaries finance the lobbying that preserves the instrument.[98]
Once enacted, RPS categories, RFS mandates, EV-specific credits, and domestic-content bonuses all create concentrated rents for eligible industries and regions. These beneficiaries then have strong incentives to defend and expand their privileges, even when new evidence reveals that other technologies (e.g., existing nuclear, firm low-carbon resources, or demand-side options) could deliver superior reliability and environmental outcomes. Work by Wiser et al. (2016), Barbose (2024), and Joskow (1997) shows that entrenchment occurs through design choices.[99] Grandfathering, tradable certificates, multi-year crediting schedules, and stranded-cost recovery lock in asset values and make reform appear to threaten jobs, tax bases, and utility balance sheets. Crucially, because the underlying investments are long-lived and quasi-irreversible, beneficiaries can credibly warn of write-downs and litigation if rules are changed, raising the political price of course correction.[100] The cumulative effect is dynamic rigidity: climate and reliability policy become vehicles for industrial favoritism, and course corrections toward more neutral instruments face entrenched opposition.[101]
3.3 Monopoly Incentives
The US electricity sector is organized around hundreds of state-granted local monopolies for distribution and retail service and, in many states, vertically integrated utilities that also own generation and transmission. Local electricity provision is typically organized as a government-regulated natural monopoly, characterized by capital-intensive transmission and distribution networks with large fixed and sunk costs. Under cost-of-service regulation, a utility’s earnings are determined by applying an authorized rate of return to its regulated “rate base.”[102]
The classic Averch–Johnson (1962) result predicts that, under such rules, regulated firms will tilt toward capital-intensive choices, absent countervailing incentives, because those choices increase the capital base that earns an authorized return.[103] Laffont and Tirole (1993) formalize why regulators have difficulty counteracting this tendency: regulators face an information problem and cannot perfectly observe firms’ true costs or effort levels. As a result, contracts designed to limit excess rents can weaken investment incentives, while more generous allowances risk encouraging overcapitalization.[104]
These state-granted local monopolies leave a single seller facing the market demand curve and maximizing profit by restricting output below the competitive level and charging a higher price than it would under competition. Because the monopolist has no direct competitors, it does not need to expand output to meet demand at lower prices and instead chooses the price–quantity combination that maximizes its own profits rather than total surplus. This monopoly pricing creates deadweight loss: the loss of potential economic value that arises when mutually beneficial trades fail to occur because the monopoly price prevents transactions that would make both buyers and sellers better off.[105] Monopolies also exhibit internal inefficiencies. With little-to-no competitive pressure, firms let costs creep up through slack operations, excess staffing, or cost-inflated processes.[106] The result is higher prices, fewer units sold, and more internal waste than a competitive market would produce, further complicating the provision of energy.
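The deadweight loss from monopoly pricing can be computed in a textbook example. The numbers below are hypothetical, assuming linear demand P = a − bQ and a constant marginal cost c:

```python
# Stylized monopoly vs. competitive outcome with linear demand P = a - b*Q
# and constant marginal cost c (all parameter values hypothetical).

a, b, c = 100.0, 1.0, 20.0    # demand intercept, demand slope, marginal cost

q_comp = (a - c) / b          # competitive output: price driven down to MC
p_comp = c

q_mono = (a - c) / (2 * b)    # monopoly output: marginal revenue = MC
p_mono = a - b * q_mono       # monopoly price read off the demand curve

# Deadweight loss: the triangle between demand and marginal cost over the
# output the monopolist withholds from the market.
dwl = 0.5 * (p_mono - c) * (q_comp - q_mono)

print(q_comp, p_comp)   # 80.0 20.0
print(q_mono, p_mono)   # 40.0 60.0
print(dwl)              # 800.0
```

In this example the monopolist halves output, triples the price, and destroys 800 units of surplus that neither buyers nor the seller capture, which is the welfare loss the text describes.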
A more coherent regulatory design follows directly from the economics of monopoly regulation. Drawing on Demsetz’s (1964) franchise-bidding insight, such a design would replace guaranteed utility ownership with competitive procurement wherever monopoly provision is not technologically necessary, most notably in electricity generation and many ancillary and grid-support services.[107] Joskow and Tirole (2007) show that reliability can be procured efficiently when scarcity pricing and capacity remuneration are paired with well-designed retail policies; this logic points toward performance-based regulation (PBR), under which utility earnings depend on measurable outcomes, such as reliability indices, interconnection timelines, or verified emissions intensity, rather than on the volume of capital placed into the rate base.[108]
In practice, this approach would involve utilities competitively procuring capacity, flexibility, and clean-energy attributes through auctions or standardized contracts, while regulators reward utilities for meeting clearly specified performance benchmarks instead of expanding owned assets. Elements of this model already exist: US wholesale capacity markets (e.g., PJM and ISO-NE) and price-cap or incentive-based regimes such as the United Kingdom’s Revenue = Incentives + Innovation + Outputs (RIIO) framework reflect partial implementations of outcome-oriented regulation.[109] Together, these examples demonstrate that technology-neutral environmental instruments and performance-oriented monopoly regulation are not speculative reforms but extensions of tools already in use. When combined, they can mitigate the information problems and rent-seeking that otherwise drive excessive, category-driven capital accumulation.[110]
4. Principles for a Course Correction
Correcting these failures does not require retreating from environmental or energy performance objectives but disentangling them. It requires recasting the state’s role along lines consistent with basic market economics and informed by public choice constraints. Four principles follow.
4.1 Separate the Ends: Externality Control, Energy Performance, and Industrial Policy
4.1.1 Externality Control
Environmental harms from energy use, greenhouse gases, local air pollutants, and upstream methane are classic externalities. Abundant research supports pricing these harms directly, either via emissions taxes, cap-and-trade, or functionally equivalent tradable performance standards.[111] Critics sometimes argue that cap-and-trade systems encourage firms to focus on acquiring allowances rather than innovating, but empirical evidence from the US Sulfur Dioxide (SO₂) trading program shows the opposite: firms responded by developing lower-cost abatement technologies and operational improvements to reduce allowance demand.[112] More broadly, by placing a persistent price on emissions, cap-and-trade preserves continuous incentives to innovate because firms that reduce emissions below the cap capture ongoing gains, whereas technology mandates truncate innovation once compliance is achieved.
Weitzman (1974) and Montgomery (1972) show that, for uniformly mixed pollutants, price and quantity instruments can be designed to achieve cost-effective abatement under uncertainty. The key is that the instrument is technology-neutral and tied to emissions outcomes, not equipment categories.[113] Revesz and Stavins likewise argue that well-designed market-based instruments tend to outperform prescriptive regulation in both static efficiency and dynamic innovation.[114] An externality-control regime would therefore adopt (i) a uniform price on CO₂ and major co-pollutants across sectors, and/or (ii) a tradable performance standard (e.g., tons CO₂e/MWh, verified methane intensity) with rigorous monitoring, reporting, and verification (MRV). Technology-specific production mandates and fuel-volume requirements would be phased out as redundant.
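The cost-effectiveness logic behind these instruments can be illustrated with a small numerical sketch. All marginal-cost figures below are hypothetical: under a uniform emissions price, each firm abates until its marginal abatement cost equals the price, so a given total abatement is achieved at least cost, whereas a uniform per-firm mandate hitting the same total costs more.

```python
# Illustrative comparison (hypothetical numbers): a uniform emissions price
# equalizes marginal abatement cost across firms, minimizing total cost,
# while a uniform per-firm mandate achieving the same total abatement costs more.

def abatement_under_price(slopes, price):
    # Firm i has linear marginal abatement cost MC_i(q) = slope_i * q,
    # so it abates until MC_i(q) = price, i.e. q_i = price / slope_i.
    return [price / s for s in slopes]

def total_cost(slopes, quantities):
    # Cost of abating q under MC(q) = s*q is the area under the curve: s*q^2/2.
    return sum(s * q * q / 2 for s, q in zip(slopes, quantities))

slopes = [10.0, 20.0, 40.0]   # hypothetical MC slopes ($/ton per ton abated)
price = 40.0                  # emissions price, $/ton

q_price = abatement_under_price(slopes, price)   # [4.0, 2.0, 1.0] tons
total = sum(q_price)                             # 7.0 tons abated in total

# A uniform mandate hitting the same total forces each firm to abate total/3.
q_mandate = [total / len(slopes)] * len(slopes)

print(total_cost(slopes, q_price))    # 140.0
print(total_cost(slopes, q_mandate))  # about 190.6: same abatement, higher cost
```

The cost gap widens as firms' abatement costs become more heterogeneous, which is precisely the situation in which technology-neutral pricing matters most.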
4.1.2 Energy-Market Performance
Reliability, flexibility, generation capacity, and resource adequacy are conceptually distinct from environmental externalities and should be addressed through the explicit procurement of system attributes. Research shows that well-functioning energy-only and capacity markets depend on scarcity pricing and, where applicable, capacity remuneration mechanisms. These instruments reflect the value of reliability and system adequacy directly, rather than privileging particular technologies.[115] An energy-market performance regime would: (i) define products such as firm capacity, ramping capability, inertia, and locational deliverability; (ii) procure them through competitive auctions open to all resources (generation, storage, demand response, interconnection, nuclear life-extension) meeting performance standards; and (iii) allow scarcity pricing to signal when additional investment is valuable. This model would select for verifiable performance, not for whether a resource is wind, solar, nuclear, or gas.
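The auction mechanics in (ii) can be sketched as follows, with entirely hypothetical bids and quantities: offers for a defined reliability product clear purely on price and verified megawatts, with no reference to fuel type, and all accepted resources receive the uniform marginal clearing price.

```python
# Minimal sketch (hypothetical bids) of a technology-neutral capacity auction:
# offers clear on price and verified MW alone, regardless of resource type.

def clear_auction(bids, requirement_mw):
    """bids: list of (name, mw, offer_price $/MW); uniform clearing price."""
    accepted, procured = [], 0.0
    for name, mw, price in sorted(bids, key=lambda b: b[2]):
        if procured >= requirement_mw:
            break
        take = min(mw, requirement_mw - procured)  # partial acceptance allowed
        accepted.append((name, take, price))
        procured += take
    clearing_price = accepted[-1][2] if accepted else None
    return accepted, clearing_price

bids = [
    ("gas_peaker",      400, 55.0),
    ("battery_4h",      200, 48.0),
    ("nuclear_uprate",  300, 40.0),
    ("demand_response", 150, 52.0),
]
accepted, price = clear_auction(bids, requirement_mw=900)
print(accepted)  # nuclear, battery, demand response, then 250 MW of gas
print(price)     # 55.0: all accepted capacity paid the marginal offer
```

Because clearing depends only on offer price and deliverable megawatts, storage, demand response, and conventional generation compete on identical terms.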
4.1.3 Industrial Policy
Industrial policy, as has been discussed, suffers from classic public choice problems. Concentrated beneficiaries lobby for targeted subsidies, local content rules, and tax credits, while diffuse consumers bear the costs. Over time, these programs become institutionally “sticky” and tend to expand, as beneficiary constituencies mobilize to preserve and extend them, even when accumulating evidence indicates that lower-cost instruments could achieve the same stated objectives.[116] By design, this approach involves picking winners. The state selects particular firms, sectors, or technologies despite severe information constraints that make governments systematically weaker investors than decentralized markets.[117] In the energy sector, layering industrial policy objectives on top of environmental and reliability goals blurs accountability and raises overall costs. Rather than compensating providers for measurable outcomes, such as emissions reductions or reliability services delivered, these rules reward eligibility categories, thereby inviting rent-seeking behavior. The remedy is disentanglement. Keep industrial experiments, if pursued at all, transparent, time-limited, and evaluated on explicit milestone payments, while returning core objectives to the marketplace.[118]
This market structure channels competition toward measurable outcomes while constraining opportunities for regulatory capture. Separating these objectives creates a level playing field in which all energy resources face the same carbon price or performance standard and have equal opportunity to be compensated for delivering reliability attributes.
4.2 Keep Metrics Minimal and Auditable
Given knowledge problems and regulatory capture concerns, the metric set should be as transparent as possible: for example, (i) verified tons of CO₂e, (ii) standardized reliability attributes, and (iii) simple consumer-cost indicators. Complex composite indices or opaque “sustainability scores” create scope for manipulation and selective weighting. Independent MRV bodies with open methods and data reduce information asymmetries and limit the ability of regulated entities or agencies to inflate compliance claims.
Public choice analysis suggests institutional safeguards including publishing formulas ex ante, minimizing discretionary exemptions, and subjecting metrics to periodic independent review. These measures reduce opportunities for cronyism by making it harder to hide preferential treatment inside bespoke eligibility criteria.
4.3 Technology-Neutral by Law
To discipline winner-picking, core policy instruments should be drafted in technology-neutral terms. Statutes and regulations should specify emissions rates, reliability attributes, or other verifiable performance outcomes, rather than privileging particular fuels, devices, or ownership structures. Conditioning support on outcomes rather than eligibility categories channels innovation and investment toward the lowest-cost means of achieving policy goals, which is the central economic case for neutrality.[119]
Technology neutrality does not imply ignoring heterogeneous risks of energy production technologies. Nuclear energy, for example, justifiably requires dedicated safety regulation. Even there, however, rules should be risk-informed and performance-based, not open-ended or discretionary in ways that effectively function as technology bans. Where genuinely technology-specific externalities exist, such as methane leakage from particular equipment, they can be addressed through targeted, measurable performance standards nested within an otherwise neutral policy framework.
4.4 Stable but Sunsetted
Finally, sustained private investment depends on policy stability that is credible over time, while remaining attentive to public choice concerns about entrenching permanent favors. The time-consistency literature and political economy research on regulatory credibility emphasize that effective policy must be predictable ex ante yet revisable through well-defined, rule-based processes.[120] Stability should therefore arise from transparent commitments and procedures, not from open-ended guarantees to particular technologies or constituencies.
A coherent framework would legislate multi-year carbon-pricing and reliability instruments with clearly specified trajectories, paired with automatic, periodic reviews — for example every five years — using transparent metrics such as cost per ton abated, realized reliability outcomes, and evidence of market power or rent extraction. Programs that fail these cost-effectiveness or integrity thresholds would trigger pre-specified sunsets or clawbacks, particularly for technology-specific credits or carve-outs, ensuring capital is redirected toward higher-performing options. At the same time, the regime would limit retroactive rule changes that undermine legitimate investment expectations, permitting exceptions only in cases of fraud or material misrepresentation.
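The review logic described above is simple enough to state as an explicit rule. The sketch below uses hypothetical programs and an assumed $100-per-ton cost-effectiveness ceiling purely for illustration: programs failing a pre-specified test trigger a sunset rather than relying on ad hoc political renewal.

```python
# Hedged sketch of the periodic five-year review described above, with
# hypothetical programs and an assumed cost-effectiveness threshold.

COST_PER_TON_CEILING = 100.0   # $/ton CO2e abated, assumed threshold

def five_year_review(programs):
    """programs: list of dicts with 'name', 'cost_per_ton', 'reliability_met'."""
    decisions = {}
    for p in programs:
        passes = p["cost_per_ton"] <= COST_PER_TON_CEILING and p["reliability_met"]
        decisions[p["name"]] = "renew" if passes else "sunset"
    return decisions

programs = [
    {"name": "tradable_performance_standard", "cost_per_ton": 45.0,  "reliability_met": True},
    {"name": "legacy_fuel_carveout",          "cost_per_ton": 310.0, "reliability_met": True},
]
print(five_year_review(programs))
# {'tradable_performance_standard': 'renew', 'legacy_fuel_carveout': 'sunset'}
```

Publishing the threshold and the formula ex ante is what makes the rule credible: investors can price the sunset risk, and beneficiaries cannot lobby over opaque criteria.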
Such a design reduces regulatory risk across technologies while limiting the persistence of rent-seeking arrangements, consistent with Dixit’s insight that predictable, rules-based policy is essential for attracting irreversible investment in capital-intensive sectors.[121] Taken together, these principles address the core policy failures by clarifying the state’s role: to allow markets to price externalities, define reliability and safety requirements, and enforce transparent rules, not to centrally plan the generation mix. A framework that separates objectives, relies on minimal and auditable performance metrics, and applies technology-neutral, rule-based instruments can harness market discovery to deliver lower-cost decarbonization and reliability, while substantially reducing opportunities for cronyism and policy-driven misallocation.
One practical avenue for advancing technology-neutral energy regulation is to decouple safety certification from project-by-project siting decisions. A first step in this direction would be a standardized, one-time design approval for technologies such as small modular reactors (SMRs). This approval would be portable across sites, akin to aircraft type certification, so that a vendor that clears a rigorous safety review could deploy the same design without re-litigating core technical issues in each individual licensing docket.[122] This approach reduces licensing risk and lowers the cost of capital, which empirical research identifies as a primary determinant of nuclear power’s levelized cost.[123]
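The link between licensing risk and levelized cost runs through the cost of capital, and a back-of-envelope calculation makes the mechanism concrete. The plant cost, lifetime, and operating figures below are assumed for illustration, not drawn from the studies cited: annual capital charges are computed with a standard capital recovery factor, so the discount rate feeds directly into dollars per megawatt-hour.

```python
# Back-of-envelope sketch (assumed, illustrative parameters) of why the cost
# of capital dominates nuclear LCOE: overnight cost is annuitized with a
# capital recovery factor, so the discount rate flows straight into $/MWh.

def lcoe_per_mwh(overnight_cost_kw, rate, years, capacity_factor,
                 fixed_om_kw_yr=120.0, fuel_var_mwh=12.0):
    # Capital recovery factor: r(1+r)^n / ((1+r)^n - 1)
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_capital = overnight_cost_kw * crf      # $/kW-year
    mwh_per_kw_yr = 8.76 * capacity_factor        # 8760 h * CF, in MWh per kW
    return (annual_capital + fixed_om_kw_yr) / mwh_per_kw_yr + fuel_var_mwh

# Hypothetical $6,000/kW plant, 60-year life, 90% capacity factor:
for r in (0.04, 0.07, 0.10):
    # LCOE rises steeply with the discount rate, all else held fixed
    print(round(lcoe_per_mwh(6000.0, r, 60, 0.90), 1))
```

Under these assumed inputs, moving the discount rate from 4 percent to 10 percent raises the levelized cost by well over half, which is why predictable licensing, by compressing perceived risk, does more for nuclear competitiveness than most subsidies.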
A complementary reform is the adoption of a reference-plant pathway, in which a technology is built once at full scale and subsequently replicated without fundamental redesign. When paired with modularization and factory fabrication, this approach compresses construction schedules and reduces execution risk. Both modeling and historical evidence indicate that repetition — rather than bespoke one-off projects — is the primary source of efficiency gains in complex capital-intensive systems.[124] A further requirement is controlled, rules-based access to specialized materials, including uranium-233 (U-233) and medical or industrial isotopes, through transparent allocation mechanisms and robust safeguards. Several advanced reactor concepts, such as molten-salt systems and micro-reactors, depend on testable fuel cycles to validate performance and safety claims.[125] Taken together, these reforms are intentionally technology-neutral: they alter how technologies are licensed and demonstrated, not which technologies are permitted.
5.2 Externality Track
From an economic perspective, the cleanest response to environmental externalities from energy production is a uniform price on emissions or a tradable performance standard that sets an output-based emissions rate and allows firms to trade credits. When paired with rigorous monitoring, reporting, and verification (MRV) that accounts for lifecycle effects, both instruments are technology-neutral in operation.[126] Moreover, under uncertainty, well-designed price or quantity mechanisms can achieve a given environmental target at least cost, while avoiding the information problems inherent in technology-specific carve-outs.[127] Once such a neutral framework is established, overlapping subsidies, mandates, and technology-specific quotas should sunset, both to prevent double-counting and to ensure that innovation is guided by least-cost abatement rather than statutory classifications.[128]
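The mechanics of an output-based tradable performance standard can be sketched with hypothetical generators and an assumed benchmark rate: each unit's credit position depends only on its verified emissions rate and output, never on its technology class, which is what makes the instrument neutral in operation.

```python
# Minimal sketch of an output-based tradable performance standard (TPS):
# credits are earned or owed against a benchmark emissions rate per MWh.
# The benchmark and fleet below are hypothetical, for illustration only.

BENCHMARK_TONS_PER_MWH = 0.35  # assumed standard, tons CO2e/MWh

def tps_credit_position(generators):
    """generators: list of (name, mwh, tons_co2e). Returns net credits in tons."""
    return {
        name: round((BENCHMARK_TONS_PER_MWH - tons / mwh) * mwh, 1)
        for name, mwh, tons in generators
    }

fleet = [
    ("wind_farm",   500_000,       0),   # earns credits it can sell
    ("ccgt_plant",  800_000, 296_000),   # 0.37 t/MWh: small deficit, buys credits
    ("coal_plant",  600_000, 570_000),   # 0.95 t/MWh: large deficit
]
print(tps_credit_position(fleet))
# {'wind_farm': 175000.0, 'ccgt_plant': -16000.0, 'coal_plant': -360000.0}
```

Trading between surplus and deficit units equalizes the marginal cost of compliance across the fleet, the same least-cost property a uniform emissions price delivers.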
5.3 Reliability and Market Design
Reliability is a bundle of attributes including firm capacity, fast ramping (the ability to change output quickly), inertia (physical resistance to frequency changes), and locational deliverability (the ability to serve load behind congested wires). Energy systems work most reliably when resources are compensated explicitly for these attributes and scarcity pricing is permitted, meaning prices are allowed to rise during peak demand so that investment and demand response are rewarded when reliability is most valuable.[129]
Accreditation of all resources should be explicitly probabilistic, relying on effective load-carrying capability (ELCC) so that a megawatt of solar, wind, storage, gas, or nuclear capacity is credited based on its empirically measured contribution to reducing loss-of-load probability, rather than on nameplate capacity under ideal conditions.[130],[131] To fully internalize system-integration costs, intermittent generators and large, inflexible loads — such as certain data centers — should carry tradable obligations demonstrating access to firm supply, storage, or demand-side flexibility during scarcity hours.[132] Finally, pay-for-performance rules with meaningful penalties for non-delivery should apply symmetrically to storage and demand response, ensuring that reliability value is reflected at the meter rather than assumed by category.[133]
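The ELCC idea can be illustrated with a toy simulation on synthetic data, with every number below invented for illustration: the credit assigned to a variable resource is the amount of perfectly firm capacity that would produce the same reduction in shortfall hours, found here by binary search.

```python
# Hedged sketch of ELCC-style accreditation on synthetic data: a resource's
# capacity credit is the firm capacity that reduces loss-of-load hours by
# the same amount as the resource's actual (simulated) hourly output.
import random

random.seed(1)
HOURS = 8760
load = [800 + 400 * random.random() for _ in range(HOURS)]   # MW, synthetic
firm = 1000.0                                                # existing firm MW
# 300 MW nameplate solar with noisy output, clipped to [0, nameplate]:
solar = [300 * min(1.0, max(0.0, random.gauss(0.4, 0.3))) for _ in range(HOURS)]

def lol_hours(extra_firm, variable=None):
    # Count hours in which load exceeds total available supply.
    var = variable or [0.0] * HOURS
    return sum(1 for h in range(HOURS) if load[h] > firm + extra_firm + var[h])

target = lol_hours(0.0, solar)   # shortfall hours with the solar profile

# Binary search: how much firm capacity achieves the same shortfall count?
lo, hi = 0.0, 300.0
for _ in range(40):
    mid = (lo + hi) / 2
    if lol_hours(mid) > target:
        lo = mid
    else:
        hi = mid
print(round(hi, 1), "MW of firm-equivalent credit for 300 MW of nameplate solar")
```

The credit comes out well below nameplate because the simulated output is weak in exactly the hours that drive loss-of-load risk, which is the behavior ELCC accreditation is designed to capture.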
5.4 Working within Regulated Monopolies
Given that retail electricity service is largely delivered by state-granted monopolies, performance-based regulation (PBR) should overlay traditional cost-of-service rules to tie utility earnings to measurable outcomes rather than capital accumulation. Relevant outcomes include reduced outage duration and frequency, faster interconnection and permitting timelines, verified emissions intensity per megawatt-hour delivered, and the cost per ton of emissions avoided relative to a defined benchmark portfolio.[134] Where statutes allow, competitive sourcing, including all-source solicitations and third-party power-purchase agreements, should replace guaranteed utility ownership for power generation services.[135] Transmission and interconnection reform should rely on cluster-based studies paired with binding shot clocks and transparent hosting-capacity maps that indicate how much incremental generation or load each line or feeder can accommodate. Evidence from jurisdictions that have adopted these tools shows faster queue processing and higher conversion rates from interconnection requests to completed projects.[136]
6. The Case for Nuclear Now
Nuclear power, particularly recent advances in small modular reactors (SMRs) and molten-salt reactors (MSRs), offers a uniquely strong case in the current energy landscape. The renewed policy interest in nuclear energy is not accidental but demand-driven. Rapid growth in artificial intelligence and data-intensive computing is sharply increasing electricity requirements, with credible estimates suggesting global data-center demand could roughly double by 2030 on an already constrained power grid.[137] Nuclear power provides near-zero lifecycle emissions and exceptionally high capacity factors (the share of time a plant operates at full output), while delivering reliable power during scarcity events, precisely when variable renewable resources are most constrained.[138] System-level modeling consistently shows that portfolios relying exclusively on variable renewables plus storage face sharply rising system costs and residual reliability risk during peak demand windows, whereas portfolios that incorporate firm low-carbon resources such as nuclear, geothermal, or carbon capture and storage (CCS) achieve deep decarbonization at substantially lower expected cost.[139]
Recent policy developments are moving in the right direction but remain incomplete. The Trump administration’s 2025 executive order directing the Nuclear Regulatory Commission toward a risk-informed, technology-inclusive regulatory framework, DOE’s Liftoff analyses, and newly enacted technology-neutral tax credits all help translate political interest into potentially bankable signals. But the investment case for advanced nuclear still hinges on predictable licensing timelines; durable, neutral rules that reduce regulatory risk over multi-decade horizons; and access to the necessary materials for advanced nuclear research.[140] Current Department of Energy policy undermines the US’s strategic position by continuing to downblend uranium-233, the critical starter material for next-generation reactor designs. Uranium-233 enables thorium fuel cycles, leveraging thorium’s abundance as a byproduct of rare-earth mining to deliver safer, cheaper, and more fuel-efficient nuclear power.
China’s molten-salt thorium pilot highlights a growing strategic risk: US policy choices are allowing China to pull ahead in advanced nuclear technologies.[141] China’s progress reflects a deliberate effort to secure and manufacture uranium-233, the crucial starter fuel for thorium-powered nuclear energy, while the United States has moved in the opposite direction by downblending its existing U-233 stockpile rather than making it available for research and reactor demonstration.[142] This policy choice has slowed domestic innovation and ceded leadership in next-generation nuclear technologies to foreign competitors. Uniquely, the United States possesses both substantial thorium resources and the world’s most significant U-233 stockpile, produced during mid-century research programs.[143] As other nations incur high costs to recreate this capability, current US policy effectively discards a singular strategic advantage by rendering U-233 unusable rather than deploying it for reactor demonstration and fuel-cycle validation.[144]
The AI era intensifies this need, as data centers and electrification add large, relatively inflexible loads, and over-reliance on any single intermittent resource increases system costs and outage probability unless paired with firm, long-duration capacity.[145] A genuinely neutral and research-forward framework allows nuclear to compete to supply these reliability attributes on equal terms, without bespoke carve-outs, but also without structural exclusion.
7. Conclusion
The evidence assembled in this paper points toward a straightforward but politically difficult truth: the United States does not lack energy resources or technological options; it lacks a coherent, neutral policy framework for deploying them. Today’s system of technology-specific subsidies, eligibility rules, and overlapping mandates constrains investment to politically chosen categories rather than allowing markets to meet clear environmental and reliability requirements at least cost. A shift toward technology-neutral, outcome-based policy would unlock a vast reservoir of stalled projects. More than 2,000 GW are already sitting in interconnection queues, waiting for permission rather than invention. Even partial build-out of this pipeline would transform US electricity supply, lowering consumer costs, increasing resilience, and enabling industries such as AI, manufacturing, and data-center operations to expand without triggering grid instability. Federal modeling similarly shows that advanced nuclear power could scale from about 100 GW-equivalent to several hundred GW if licensing were streamlined and project finance made predictable. These technologies already exist; they pencil out and are waiting on governance to unlock their potential.
To capture this opportunity, the US must legislate and regulate for clearly specified, measurable outcomes. That means placing environmental externalities on a single neutral instrument, whether an emissions price or a tradable performance standard with rigorous monitoring and verification. It means procuring reliability as explicit products — specifically firm capacity,
ramping capability, and other reliability attributes — rather than smuggling climate goals into technology-specific mandates. It requires overlaying performance-based regulation onto existing state-granted monopolies so that utility earnings rise or fall with reliability, interconnection speed, and verifiable emissions intensity. And it requires building a standardized, bankable pathway for nuclear power, namely portable design approvals, standardized licensing, reference-plant replication, and safeguarded access to essential materials.
If Congress and the states adopt such a framework and sunset overlapping carve-outs, the market will discover the cleanest, most reliable, and least-cost energy production mix. The result will be an energy system finally aligned with America’s resource strengths, capable of meeting surging demand, and structured to deliver durable economic and environmental gains for decades to come.
Glossary of Abbreviations Used in Text
AI – Artificial Intelligence
ANWR – Arctic National Wildlife Refuge
CCS – Carbon Capture and Storage
CGS – Customer Grid-Supply
CO₂ – Carbon Dioxide
CO₂e – Carbon Dioxide Equivalent
DOE – Department of Energy
ELCC – Effective Load-Carrying Capability
EV – Electric Vehicle
GW – Gigawatt
IRA – Inflation Reduction Act
ISO – Independent System Operator
ISO-NE – ISO New England
ITC – Investment Tax Credit
MSR – Molten-Salt Reactor
MRV – Monitoring, Reporting, and Verification
NEA – Nuclear Energy Agency
NEM – Net Energy Metering
NIRA – National Industrial Recovery Act
NRC – Nuclear Regulatory Commission
OBBBA – One Big Beautiful Bill Act
OECD – Organisation for Economic Co-operation and Development
PBR – Performance-Based Regulation
PTC – Production Tax Credit
PURPA – Public Utility Regulatory Policies Act
QF – Qualifying Facility
REC – Renewable Energy Certificate
RFS – Renewable Fuel Standard
RIIO – Revenue = Incentives + Innovation + Outputs
RPS – Renewable Portfolio Standard
RTO – Regional Transmission Organization
SMR – Small Modular Reactor
SO₂ – Sulfur Dioxide
SMART – Solar Massachusetts Renewable Target
TMI – Three Mile Island
TOU – Time-of-Use
TPS – Tradable Performance Standard
TVA – Tennessee Valley Authority
TWh – Terawatt-hour
U-233 – Uranium-233
US – United States
UNSCEAR – United Nations Scientific Committee on the Effects of Atomic Radiation
Endnotes
[1] “What Are the Most Regulated Industries? A Comprehensive Guide for Businesses in 2025,” Trilink FTZ, accessed October 23, 2026, https://trilinkftz.com/global-trade-regulatory-compliance/what-are-the-most-regulated-industries-a-comprehensive-guide-for-businesses-in-2025/.
Patrick A. McLaughlin, Oliver Sherouse, and Laura Stanley, RegData 3.1: A Database on US Federal Regulations, Mercatus Center at George Mason University, 2020.
[2] David I. Stern, “The Role of Energy in Economic Growth,” Annals of the New York Academy of Sciences 1219, no. 1 (2011): 26–51.
[3] Ember, “Yearly Electricity Data,” accessed September 24, 2025, https://ember-energy.org/data/yearly-electricity-data/.
[4] US Energy Information Administration, “In 2024, the United States Produced More Energy Than Ever Before,” Today in Energy, June 9, 2025, https://www.eia.gov/todayinenergy/detail.php?id=65445.
[5] Scott DiSavino, “US Power Use to Reach Record Highs in 2025 and 2026, EIA Says,” Reuters, September 9, 2025, https://www.reuters.com/business/energy/us-power-use-reach-record-highs-2025-2026-eia-says-2025-09-09/.
[6] US Department of Energy, “Department of Energy Releases Report on Evaluating US Grid Reliability and Security,” news release, July 7, 2025, https://www.energy.gov/articles/ department-energy-releases-report-evaluating-us-grid-reli-ability-and-security.
[7] US Congress, Energy Policy Act of 2005, Pub. L. No. 109-58 (2005), https://www.govinfo.gov/content/pkg/BILLS-109hr6enr/pdf/BILLS-109hr6enr.pdf.
[8] Appalachian Regional Commission, “National and State Energy Policy Trends: Appalachian Region Energy Blueprint Research Brief” (August 2006), https://www.arc.gov/ wp-content/uploads/2020/06/SummaryofNationalandSta-teEnergyPolicyTrends.pdf.
[9] Lawrence Berkeley National Laboratory, Queued Up 2025 Edition: Characteristics of Power Plants Seeking Transmission Interconnection as of December 31, 2024, accessed December 10, 2025, https://emp.lbl.gov/publications/queued-2025-edi-tion-characteristics.
[10] US Department of Energy, Pathways to Commercial Lift-off: Advanced Nuclear (Washington, DC, September 2024), https://gain.inl.gov/content/uploads/4/2024/11/DOE-Ad-vanced-Nuclear-Liftoff-Report.pdf.
Massachusetts Institute of Technology Energy Initiative, The Future of Nuclear Energy in a Carbon-Constrained World (Cambridge, MA, 2018), https://energy.mit.edu/wp-content/ uploads/2018/09/The-Future-of-Nuclear-Energy-in-a-Car-bon-Constrained-World.pdf.
[11] US Department of Energy, GeoVision: Harnessing the Heat Beneath Our Feet (Washington, DC: Office of Energy Efficiency and Renewable Energy, 2019), https://www. energy.gov/sites/prod/files/2019/06/f63/GeoVision-full-re-port-opt.pdf.
[12] F. A. Hayek, “The Use of Knowledge in Society,” American Economic Review 35, no. 4 (1945): 519–530.
[13] George J. Stigler, “The Theory of Economic Regulation,” Bell Journal of Economics and Management Science 2, no. 1 (1971): 3–21.
Sam Peltzman, “Toward a More General Theory of Regulation,” Journal of Law and Economics 19, no. 2 (1976): 211–240.
James M. Buchanan and Gordon Tullock, The Calculus of Consent: Logical Foundations of Constitutional Democracy (Ann Arbor: University of Michigan Press, 1962).
[14] Carolyn Fischer and Richard G. Newell, “Environmental and Technology Policies for Climate Mitigation,” Journal of Environmental Economics and Management 55, no. 2 (2008): 142–162.
[15] James B. Bushnell, Carla Peterman, and Catherine Wolfram, “Local Solutions to Global Problems: Climate Change Policies and Regulatory Jurisdiction,” Review of Environmental Economics and Policy 2, no. 2 (2008): 175–193.
[16] US Environmental Protection Agency, “Renewable Fuel Standard,” last updated October 30, 2025, https://www.epa.gov/renewable-fuel-standard.
[17] US Energy Information Administration, “Electricity Explained: Electricity Generation,” Electric Power Annual, table 1.1, accessed January 7, 2026, https://www.eia.gov/ electricity/annual/table.php?t=epa_01_01.html.
[18] Peter Van Doren, “A Brief History of Federal Energy Regulations,” Downsizing the Federal Government, March 9, 2016, https://www.downsizinggovernment.org/energy/regulations.
[19] Laurie E. Jasinski, “Understanding the Connally Hot Oil Act of 1935,” Handbook of Texas Online, accessed November 24, 2025, https://www.tshaonline.org/handbook/entries/ connally-hot-oil-act-of-1935.
[20] Tennessee Valley Authority Act, 16 USC. § 831 et seq. (1933).
[21] Atomic Energy Act of 1946, Pub. L. No. 79-585, 60 Stat. 755 (1946), https://www.govinfo.gov/content/pkg/STAT-UTE-60/pdf/STATUTE-60-Pg755.pdf.
[22] Atomic Energy Act of 1954, Pub. L. No. 83-703, 68 Stat. 919 (1954), https://www.govinfo.gov/content/pkg/STAT-UTE-68/pdf/STATUTE-68-Pg919.pdf.
[23] Mark Holt, Price-Anderson Act: Nuclear Power Industry Liability Limits and Compensation to the Public After Radioactive Releases, CRS Report for Congress IF10821 (Washington, DC: Congressional Research Service, February 28, 2025), https://www.congress.gov/crs-product/IF10821.
[24] Erwin C. Hargrove, Prisoners of Myth: The Leadership of the Tennessee Valley Authority, 1933–1990 (Princeton, NJ: Princeton University Press, 1994).
[25] Samuel J. Walker and Thomas R. Wellock, A Short History of Nuclear Regulation, 1946–2009, NUREG/BR-0175 (Washington, DC: US Nuclear Regulatory Commission, 2024), https://www.nrc.gov/reading-rm/doc-collections/ nuregs/brochures/br0175/.
[26] Atomic Energy Act of 1946, Pub. L. No. 79-585, 60 Stat. 755 (1946), https://www.govinfo.gov/content/pkg/STAT-UTE-60/pdf/STATUTE-60-Pg755.pdf.
[27] Dwight D. Eisenhower, “Atoms for Peace,” speech delivered December 8, 1953, Voices of Democracy, https://voicesofdemocracy.umd.edu/eisenhower-atoms-for-peace-speech-text/.
Richard G. Hewlett and Jack M. Holl, Atoms for Peace and War, 1953–1961: Eisenhower and the Atomic Energy Commission (Berkeley: University of California Press for the US Atomic Energy Commission, 1989).
[28] Association for Diplomatic Studies and Training, “The NPT and the Aftermath of India’s Nuclear Test, May 1974,” ADST Oral History Reader, 2015, https://adst.org/2015/05/ the-npt-and-the-aftermath-of-indias-nuclear-test-may-1974/.
Nuclear Non-Proliferation Act of 1978, Pub. L. No. 95-242, 92 Stat. 120 (1978).
Jimmy Carter, “Nuclear Power Policy: Statement on Decisions Reached Following Review,” April 7, 1977, The American Presidency Project, https://www.presidency.ucsb.edu/ node/243139.
[29] Samuel J. Walker and Thomas R. Wellock, A Short History of Nuclear Regulation, 1946–2009, NUREG/BR-0175 (Washington, DC: US Nuclear Regulatory Commission, 2024), https://www.nrc.gov/reading-rm/doc-collections/ nuregs/brochures/br0175/.
[30] Gilbert E. Metcalf, “Taxing Energy in the United States: Which Fuels Does the Tax Code Favor?” American Economic Review 99, no. 2 (2009): 52–57.
[31] Mineral Leasing Act of 1920, 30 USC. § 181 et seq. (1920).
[32] Clean Air Act, 42 USC. § 7401 et seq. (1970). Federal Water Pollution Control Act (Clean Water Act), 33 USC. § 1251 et seq. (2011). National Environmental Policy Act of 1969, 42 USC. § 4321 et seq. (1969).
[33] Samuel J. Walker and Thomas R. Wellock, A Short History of Nuclear Regulation, 1946–2009, NUREG/BR-0175 (Washington, DC: US Nuclear Regulatory Commission, 2024), https://www.nrc.gov/reading-rm/doc-collections/ nuregs/brochures/br0175/.
[34] Office of the Historian, US Department of State, “The 1973 Oil Embargo,” Milestones: 1969–1976, https://history. state.gov/milestones/1969-1976/oil-embargo. Laurel Graefe, Federal Reserve History, “Oil Shock of 1978–79,” Federal Reserve History (2013), https://www.feder-alreservehistory.org/essays/oil-shock-of-1978-79. Paul W. MacAvoy, The Energy Industry: The Political Economy of Regulation (New York: W. W. Norton, 1983).
[35] Energy Policy and Conservation Act of 1975, Pub. L. No. 94-163, 89 Stat. 871–921 (1975), https://www.govinfo.gov/ content/pkg/STATUTE-89/pdf/STATUTE-89-Pg871.pdf.
Department of Energy Organization Act, 91 Stat. 565–578 (1977), https://www.govinfo.gov/content/pkg/STATUTE-91/ pdf/STATUTE-91-Pg565.pdf.
Public Utility Regulatory Policies Act of 1978, Pub. L. No. 95-617, 92 Stat. 3117 (1978), https://www.govinfo.gov/content/ pkg/STATUTE-92/pdf/STATUTE-92-Pg3117.pdf.
[36] Title 18 — Conservation of Power and Water Resources, Part 292 — Electric Regulations Under Section 210 of PURPA, Code of Federal Regulations, title 18, pt. 292, https://www. ecfr.gov/current/title-18/chapter-I/subchapter-K/part-292.
[37] Paul L. Joskow, “Restructuring, Competition and Regulatory Reform in the US Electricity Sector,” Journal of Economic Perspectives 11, no. 3 (1997): 119–138.
[38] US Nuclear Regulatory Commission, “Backgrounder on the Three Mile Island Accident,” NRC, 2018, https://www. nrc.gov/reading-rm/doc-collections/fact-sheets/3mile-isle. html.
United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), Sources and Effects of Ionizing Radiation: UNSCEAR 2008 Report, vol. II, annex D, Health Effects Due to the Chernobyl Nuclear Accident (New York: United Nations, 2011), https://www.unscear. org/unscear/uploads/documents/unscear-reports/UN-SCEAR_2008_Report_Vol.II.pdf.
OECD Nuclear Energy Agency, Chernobyl: Assessment of Radiological and Health Impacts (Paris: OECD Publishing, 2002), https://www.oecd-nea.org/rp/chernobyl/chernobyl.html.
[39] J. Samuel Walker, Three Mile Island: A Nuclear Crisis in Historical Perspective (Berkeley: University of California Press, 2004).
US Nuclear Regulatory Commission, “Backgrounder on the Three Mile Island Accident,” NRC, 2018, https://www.nrc.gov/ reading-rm/doc-collections/fact-sheets/3mile-isle.html.
United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), Sources and Effects of Ionizing Radiation: UNSCEAR 2008 Report, vol. II, annex D, Health Effects Due to the Chernobyl Nuclear Accident (New York: United Nations, 2011), https://www.unscear.org/unscear/uploads/doc-uments/unscear-reports/UNSCEAR_2008_Report_Vol.II.pdf. OECD Nuclear Energy Agency, Chernobyl: Assessment of Radiological and Health Impacts (Paris: OECD Publishing, 2002), https://www.oecd-nea.org/rp/chernobyl/chernobyl.html.
[40] Anil Markandya and Paul Wilkinson, “Electricity Generation and Health,” The Lancet 370, no. 9591 (2007): 979–990, https://www.un-ilibrary.org/content/periodicals/24121428.
[41] Federal Energy Regulatory Commission. “Order No. 888: Promoting Wholesale Competition Through Open Access Non-discriminatory Transmission Services by Public Utilities; Recovery of Stranded Costs by Public Utilities and Transmitting Utilities.” 1996. https://www.ferc.gov/sites/de-fault/files/2020-05/rm95-8-00w.txt
Federal Energy Regulatory Commission. “Order No. 889: Open Access Same-Time Information System and Standards of Conduct.” 1996. https://www.ferc.gov/industries-data/ electric/industry-activities/open-access-transmission-tar-iff-oatt-reform/history-of-oatt-reform/order-no-889-1
[42] Joskow, Paul L. “Restructuring, Competition and Regulatory Reform in the US Electricity Sector.” Journal of Economic Perspectives, vol. 11, no. 3 (1997):119–138.
[43] Energy Policy Act of 1992, Pub. L. No. 102-486, 106 Stat. 2776 (1992), https://www.congress.gov/bill/102nd-congress/ house-bill/776.
[44] New York State Energy Research and Development Authority (NYSERDA), NY-SUN Program, https://www. nyserda.ny.gov/All-Programs/Programs/NY-Sun.
SolarInsure, “California Solar Incentives,” accessed January 2026, https://www.solarinsure.com/california-solar-incen-tives.
[45] Carley, Sanya. “State Renewable Energy Electricity Policies: An Empirical Evaluation of Effectiveness.” Energy Policy, vol. 37, no. 8 (2009):3071–3081.
Wiser, Ryan, et al. “A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards.” Energy Policy, vol. 96 (2016):645–660.d July 13, 2024).
[46] ClearPath, “What Is a State Renewable Portfolio Standard?,” accessed January 7, 2026, https://www.clearpath. energy/blog/what-is-a-state-renewable-portfolio-standard.
California Public Utilities Commission, “Renewable Portfolio Standard (RPS),” accessed October 14, 2026, https:// www.cpuc.ca.gov/rps/.
New York State Energy Research and Development Author-ity (NYSERDA), “Renewable Portfolio Standard,” accessed January 7, 2026, https://www.nyserda.ny.gov/All-Programs/ Clean-Energy-Standard/Clean-Energy-Standard-Resources/ Renewable-Portfolio-Standard.
[48] Sanya Carley, “State Renewable Energy Electricity Policies: An Empirical Evaluation of Effectiveness,” Energy Policy37, no. 8 (2009): 3071–3081.
Ryan Wiser et al., “A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards,” Energy Policy 96 (2016): 645–660.
Haitao Yin and Nicholas Powers, “Do State Renewable Portfolio Standards Promote In-State Renewable Generation?” EnergyPolicy38, no. 2 (2010): 1140–1149.
[49] Sustainability Directory, “How Does Nuclear Energy Qualify for Clean Energy Standards but Not Renewable Portfolio Standards?,” accessed January 7, 2026, https:// energy.sustainability-directory.com/learn/how-does-nucle-ar-energy-qualify-for-clean-energy-standards-but-not-re-newable-portfolio-standards/.
[50] Timothy Searchinger et al., “Use of US Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land-Use Change,” Science 319, no. 5867 (2008):1238–1240, https://www.science.org/doi/10.1126/sci-ence.1151861.
[51] National Research Council, Renewable Fuel Standard: Potential Economic and Environmental Effects of US Biofuel Policy (Washington, DC: National Academies Press, 2011), https://nap.nationalacademies.org/catalog/13105/renew-able-fuel-standard-potential-economic-and-environmen-tal-effects-of-us-biofuel.
[52] Harry de Gorter and David R. Just, “The Welfare Eco-nomics of a Biofuel Tax Credit and the Interaction with Price Contingent Farm Subsidies,” Energy Policy 37, no. 11 (2009): 4513–4525, https://www.sciencedirect.com/science/ article/pii/S0165176509002669.
Roberts, Michael J., and Wolfram Schlenker. “Identifying Supply and Demand Elasticities of Agricultural Commodities: Implications for the US Ethanol Mandate.” American Economic Review, vol. 103, no. 6 (2013):2265–2295. https:// www.aeaweb.org/articles?id=10.1257/aer.103.6.2265
[53] National Academies of Sciences, Engineering, and Medicine, The Power of Change: Innovation for Development and Deployment of Increasingly Clean Electric Power Technologies, chap. 5 (Washington, DC: National Academies Press, 2021), https://www.nationalacademies.org/read/26704/ chapter/5.
Interstate Renewable Energy Council (IREC), National Shared Renewables and Net Metering Policy Inventory, annual editions, https://www.irecusa.org/programs/shared-re-newables/.
[54] Solar United Neighbors, “Net Metering in Texas,” accessed January 7, 2026, https://solarunitedneighbors.org/ resources/net-metering-in-texas/.
California Public Utilities Commission, Decision Adopting Modifications to the Net Energy Metering Tariff, D.16-01-044 (San Francisco: CPUC, January 2016), https://docs. cpuc.ca.gov/PublishedDocs/Published/G000/M158/ K181/158181678.PDF.
[55] California Public Utilities Commission, “Net Billing Tariff,” accessed January 7, 2026, https://www.cpuc.ca.gov/ industries-and-topics/electrical-energy/demand-side-man-agement/customer-generation/nem-revisit/net-billing-tar-iff.
Hawaii Public Utilities Commission, OrderNo.33258, Terminating Net Energy Metering Program, Docket No. 2014-0192 (Honolulu: Hawaii PUC, October 2015), https:// puc.hawaii.gov/wp-content/uploads/2015/10/2014-0192-Order-Resolving-Phase-1-Issues-final.pdf.
[56] Hawaii Public Utilities Commission. Order No. 33258 TerminatingNet EnergyMetering Program, 2015, https:// puc.hawaii.gov/wp-content/uploads/2015/10/2014-0192-Order-Resolving-Phase-1-Issues-final.pdf
[57] Massachusetts Department of Energy Resources, Solar Massachusetts Renewable Target (SMART) Program, 225 CMR 20.00, accessed October 14, 2025, https://www.mass. gov/regulations/225-CMR-2000-solar-massachusetts-re-newable-target-smart-program.
[58] City of Austin, Austin Energy Value of Solar Tariff, accessed January 7, 2026, https://services.austintexas.gov/ edims/document.cfm?id=210805.
[59] Hawaii Public Utilities Commission, Order No. 33258, Terminating Net Energy Metering Program, Docket No. 2014-0192 (Honolulu: Hawaii PUC, October 2015), https:// puc.hawaii.gov/wp-content/uploads/2015/10/2014-0192-Order-Resolving-Phase-1-Issues-final.pdf.
[60] Ryan Wiser, Galen Barbose, et al., US Renewable Portfolio Standards: 2021 Status Update—Early Release (Berkeley, CA: Lawrence Berkeley National Laboratory, 2021).
[61] Internal Revenue Code § 45 (Renewable Electricity Production Tax Credit), and related Internal Revenue Service notices and guidance, various years.
Internal Revenue Code § 48 (Investment Tax Credit), and related Internal Revenue Service notices and guidance, various years.
Energy Policy Act of 2005, Pub. L. No. 109-58, 119 Stat. 594 (2005); Energy Independence and Security Act of 2007, Pub. L. No. 110-140, 121 Stat. 1492 (2007).
[62] David Feldman et al., Shared Solar: Current Landscape, Market Potential, and the Impact of Federal Securities Regulation, NREL/TP-6A20-63892 (Golden, CO: National Renewable Energy Laboratory, 2015).
[63] Interstate Renewable Energy Council (IREC), Shared Renewables Policy Catalog, accessed January 7, 2026, https:// irecusa.org/resources/shared-renewables-policy-catalog/.
California Public Utilities Commission, Decision Adopting Revenue Decoupling and Net Energy Metering Reforms, Decision 16-01-044 (January 2016); New York Public Service Commission, OrderAdoptingtheCleanEnergyStandard, Case 15-E-0302 (August 2016).
National Academies of Sciences, Engineering, and Medicine, The Power of Change: Innovation for Development and Deployment of Increasingly Clean Electric Power Technologies (Washington, DC: National Academies Press, 2021), chap. 5.
Paul L. Joskow, “Electricity Sector Restructuring and Competition: Lessons Learned,” Cuadernos de Economía 40, no. 121 (2003): 548–558.
[64] Ryan Wiser, Galen Barbose, et al., US Renewable Portfolio Standards: 2021 Status Update—Early Release (Berkeley, CA: Lawrence Berkeley National Laboratory, 2021).
Joseph Rand et al., Queued Up: Characteristics of Power Plants Seeking Transmission Interconnection in the United States, LBNL-2001386 (Berkeley, CA: Lawrence Berkeley National Laboratory, 2021).
[65] Inflation Reduction Act of 2022, Pub. L. No. 117-169, 136 Stat. 1818 (2022), https://www.irs.gov/inflation-reduc-tion-act-of-2022.
[66] Clean Air Act, 42 USC. § 7401 et seq. Clean Water Act, 33 USC. § 1251 et seq.
Mineral Leasing Act of 1920, 30 USC. § 181 et seq.
Atomic Energy Act of 1954, 42 USC. § 2011 et seq.; 10
C.F.R. pts. 20, 50, 52, 70, 72, 73, 100
US Nuclear Regulatory Commission, A Short History of Nuclear Regulation,1946–2009, NUREG/BR-0175 (Washington, DC: NRC, 2024).
Internal Revenue Code §§ 45, 48, 45Y, 48E Energy Policy Act of 2005, Pub. L. No. 109-58
Energy Independence and Security Act of 2007, Pub. L. No. 110-140.
[67] US Environmental Protection Agency, “Summary of Inflation Reduction Act Provisions Related to Renewable Energy,” EPA Green Power Markets (2025).
[68] Columbia Law School, Sabin Center for Climate Change Law, “Inflation Reduction Act Tracker: IRA Section 13702—Clean Electricity Investment Credit,” 2025, https://iratracker. org/programs/ira-section-13702-clean-electricity-invest-ment-credit/.
[69] US Environmental Protection Agency, “Summary of Inflation Reduction Act Provisions Related to Renewable Energy,” EPA Green Power Markets (2025).
Internal Revenue Service, “Clean Electricity Production Credit (§ 45Y) and Related Guidance,” IRS.gov, 2025.
[70] Congressional Research Service, Five-Year Offshore Oil and Gas Leasing Program, CRS Report R44692 (Washington, DC: Congressional Research Service, 2025).
[71] Bureau of Ocean Energy Management, “Outer Continen-tal Shelf Lands Act History,” 2025, https://www.boem.gov/ oil-gas-energy/leasing/ocs-lands-act-history.
[72] US Department of the Interior, “Interior Takes Bold Steps to Expand Energy, Local Control, and Land Access in Alaska,” press release, October 23, 2025, https://www.doi.gov/pressre-leases/interior-takes-bold-steps-expand-energy-local-control-and-land-access-alaska.
[73] Tracy J. Wholf and Seiji Yamashita, “Trump Administration Proposes Auctioning Offshore Oil Leases on Multiple Coasts,” CBS News, October 24, 2025, https://www.cbsnews. com/news/trump-offshore-oil-leases-us-coastlines/.
Elizabeth Manning, “The Trump Administration Plans New Oil and Gas Leases in the Western Arctic,” Earthjustice, October 21, 2025, https://earthjustice.org/press/2025/the-trump-administration-plans-new-oil-and-gas-leases-in-the-western-arctic-and-will-soon-finalize-a-rule-repealing-protections.
[74] Jennifer McDermott, “Trump Temporarily Halts Leasing and Permitting for Wind Energy Projects,” Associated Press, January 20, 2025, https://apnews.com/article/wind-energy-off-shore-turbines-trump-executive-order-995a744c3c1a2eddb-30cacf50b681f13.
[75] Donald J. Trump, “Ordering the Reform of the Nuclear Regulatory Commission,” Executive Order, May 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/05/or-dering-the-reform-of-the-nuclear-regulatory-commission/.
[76] Michael Goff, “9 Key Takeaways from President Trump’s Executive Orders on Nuclear Energy,” US Department of Energy, Office of Nuclear Energy, June 10, 2025, https://www.energy.gov/ne/articles/9-key-takeaways-presi-dent-trumps-executive-orders-nuclear-energy.
[77] US Department of Energy, Office of Nuclear Energy, “NRC Approves NuScale Power’s Uprated Small Modular Reactor Design,” May 30, 2025, https://www.energy.gov/ne/ articles/nrc-approves-nuscale-powers-uprated-small-mod-ular-reactor-design.
[78] US Nuclear Regulatory Commission, “NRC Approves Standard Design for NuScale US460 Small Modular Reactor,” News Release No. 25-033, May 29, 2025, https://www. nrc.gov/cdn/doc-collection-news/2025/25-033.pdf.
[79] US Nuclear Regulatory Commission, “Part 53—Risk-Informed, Technology-Inclusive Regulatory Framework for Advanced Reactors,” proposed rule, Federal Register, October 31, 2024 (docket active 2025).
Nuclear Innovation Alliance, “Comments on NRC’s Rulemaking on the Part 53 Risk-Informed, Technology-Inclusive Regulatory Framework for Advanced Reactors,” March 6, 2025, https://nuclearinnovationalliance. org/nia-comments-nrcs-rulemaking-part-53-risk-in-formed-technology-inclusive-regulatory-framework.
[80] “NRC Approves Requests Enabling Holtec to Advance Palisades Restart,” Reuters, July 25, 2025, https://www.reu-ters.com/business/energy/nrc-approves-holtecs-request-re-start-michigan-nuclear-plant-2025-07-25/.
[81] Donald J. Trump, “Ordering the Reform of the Nuclear Regulatory Commission,” Executive Order, May 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/05/ ordering-the-reform-of-the-nuclear-regulatory-commis-sion/.
US Nuclear Regulatory Commission, “Part 53—Risk-Informed, Technology-Inclusive Regulatory Framework for Advanced Reactors,” proposed rule, Federal Register, October 31, 2024 (docket active 2025).
“NRC Approves Requests Enabling Holtec to Advance Palisades Restart,” Reuters, July 25, 2025, https://www.reuters. com/business/energy/nrc-approves-holtecs-request-re-start-michigan-nuclear-plant-2025-07-25/.
[82] US Department of Energy, Office of Environmental Management, “U-233 Material Downblending Project,” en-ergy.gov, https://www.energy.gov/orem/articles/u-233-ma-terial-downblending-project.
Oak Ridge National Laboratory, “ORNL’s Building 3019 and the Disposition of Uranium-233,” ornl.gov, https://www.ornl.gov/content/ornls-building-3019.
[83] Samuel J. Walker and Thomas R. Wellock, A Short His-tory of Nuclear Regulation, 1946–2009, NUREG/BR-0175 (Washington, DC: US Nuclear Regulatory Commission, 2024), https://www.nrc.gov/reading-rm/doc-collections/ nuregs/brochures/br0175/.
World Nuclear Association, “Thorium,” 2024, https:// world-nuclear.org/information-library/current-and-fu-ture-generation/thorium.aspx.
[84] Richard G. Hewlett and Jack M. Holl, Atoms for Peace and War, 1953–1961: Eisenhower and the Atomic Energy Commission (Berkeley: University of California Press for the US Atomic Energy Commission, 1989), https://www.energy.gov/sites/default/files/2013/09/f2/HP_Hewlett_Atoms_for_Peace_and_War_0.pdf.
US Atomic Energy Commission, CivilianNuclearPower: AReport tothe President(Washington, DC: Government Printing Office, November 1962).
[85] Carolyn Fischer and Richard G. Newell, “Environmental and Technology Policies for Climate Mitigation,” Journal of Environmental Economics and Management 55, no. 2 (2008): 142–162.
[86] F. A. Hayek, “The Use of Knowledge in Society,” AmericanEconomicReview35, no. 4 (1945): 519–530.
[87] Kenneth Gillingham and James H. Stock, “The Cost of Reducing Greenhouse Gas Emissions,” Journal of Economic Literature 56, no. 4 (2018): 909–939.
[88] Sanya Carley, “State Renewable Energy Electricity Policies: An Empirical Evaluation of Effectiveness,” Energy Policy37, no. 8 (2009): 3071–3081.
Ryan Wiser et al., “A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards,” Energy Policy 96 (2016): 645–660.
[89] Timothy Searchinger et al., “Use of US Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land-Use Change,” Science 319, no. 5867 (2008):1238–1240.
National Research Council, Renewable Fuel Standard: PotentialEconomicandEnvironmentalEffectsofUSBiofuel Policy (Washington, DC: National Academies Press, 2011).
[90] Stephen P. Holland et al., “Are There Environmental Benefits from Driving Electric Vehicles? The Importance of Local Factors,” AmericanEconomicReview:Papers& Proceedings106, no. 5 (2016): 91–95.
Carolyn Fischer and Richard G. Newell, “Environmental and Technology Policies for Climate Mitigation,” Journal of Environmental Economics and Management 55, no. 2 (2008):142–162.
Ramteen Sioshansi, “Welfare Impacts of Electricity Storage and the Implications of Ownership Structure,” TheEnergy Journal31, no. 2 (2010): 173–198.
[91] Paul L. Joskow, “Regulation of Natural Monopoly,” in HandbookofLawandEconomics, vol. 2, ed. A. M. Polinsky and Steven Shavell (Amsterdam: Elsevier, 2007), 1227–1348.
Harvey Averch and Leland L. Johnson, “Behavior of the Firm under Regulatory Constraint,” American Economic Review52, no. 5 (1962): 1052–1069.
[92] Avinash K. Dixit and Robert S. Pindyck, Investment under Uncertainty (Princeton, NJ: Princeton University Press, 1994).
Robert S. Pindyck, “Irreversibility, Uncertainty, and Investment,” Journal of Economic Literature 29, no. 3 (1991):1110–1148.
[93] George J. Stigler, “The Theory of Economic Regulation,” BellJournalofEconomicsandManagementScience2, no. 1 (1971): 3–21.
Sam Peltzman, “Toward a More General Theory of Regulation,” Journal of Law and Economics 19, no. 2 (1976): 211–240.
János Kornai, Eric Maskin, and Gérard Roland, “Understanding the Soft Budget Constraint,” Journal of Economic Literature41, no. 4 (2003): 1095–1136.
[94] Joseph E. Aldy and Robert N. Stavins, “Using the Market to Address Climate Change: Insights from Theory and Experience,” Daedalus 141, no. 2 (2012): 45–60.
[95] James B. Bushnell, Carla Peterman, and Catherine Wolfram, “Local Solutions to Global Problems: Climate Change Policies and Regulatory Jurisdiction,” Review of Environmental Economics and Policy 2, no. 2 (2008): 175–193.
[96] George J. Stigler, “The Theory of Economic Regulation,” BellJournalofEconomicsandManagementScience2, no. 1 (1971): 3–21.
Sam Peltzman, “Toward a More General Theory of Regulation,” JournalofLawandEconomics19, no. 2 (1976): 211–240.
[97] James M. Buchanan and Gordon Tullock, TheCalculus ofConsent:LogicalFoundationsofConstitutionalDemocracy(Ann Arbor: University of Michigan Press, 1962).
Mancur Olson, The Logicof CollectiveAction: PublicGoods andtheTheoryofGroups(Cambridge, MA: Harvard University Press, 1965).
[98] Bruce Yandle, “Viewpoint: Bootleggers and Baptists — The Education of a Regulatory Economist,” American Enterprise Institute, May 1, 1983, https://www.aei.org/articles/ viewpoint-bootleggers-and-baptists-the-education-of-a-re-gulatory-economist/.
[99] Ryan Wiser et al., “A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards,” Energy Policy 96 (2016): 645–660.
Galen L. Barbose, US Renewable Portfolio Standards: 2023 Status Update and Early Review of 2024 Developments (Berkeley, CA: Lawrence Berkeley National Laboratory, 2024).
Paul L. Joskow, “Restructuring, Competition and Regulatory Reform in the US Electricity Sector,” Journal of Economic Perspectives 11, no. 3 (1997): 119–138.
[100] Paul L. Joskow, “Restructuring, Competition and Regulatory Reform in the US Electricity Sector,” Journal of EconomicPerspectives11, no. 3 (1997): 119–138.
Avinash K. Dixit and Robert S. Pindyck, Investmentunder Uncertainty (Princeton, NJ: Princeton University Press, 1994).
[101] Paul Pierson, “When Effect Becomes Cause: Policy Feedback and Political Change,” World Politics 45, no. 4 (1993): 595–628.
[102] Paul L. Joskow, “Regulation of Natural Monopoly,” in HandbookofLawandEconomics, vol. 2, ed. A. Mitchell Polinsky and Steven Shavell (Amsterdam: Elsevier, 2007), 1227–1348.
Alfred E. Kahn, The Economicsof Regulation:Principles and Institutions(Cambridge, MA: MIT Press, 1988).
[103] Harvey Averch and Leland L. Johnson, “Behavior of the Firm under Regulatory Constraint,” American Economic Review 52, no. 5 (1962): 1052–1069.
[104] Jean-Jacques Laffont and Jean Tirole, A Theory of Incentives in Procurement and Regulation (Cambridge, MA: MIT Press, 1993).
[105] Abba P. Lerner, “The Concept of Monopoly and the Measurement of Monopoly Power,” ReviewofEconomic Studies 1, no. 3 (1934): 157–175.
Arnold C. Harberger, “Monopoly and Resource Allocation,” American Economic Review 44, no. 2 (1954): 77–87.
[106] Harvey Leibenstein, “Allocative Efficiency vs. ‘X-Efficiency,’” American Economic Review 56, no. 3 (1966):392–415.
[107] Harold Demsetz, “Why Regulate Utilities?” Journal of Law and Economics 11, no. 1 (1968): 55–65.
[108] Paul L. Joskow and Jean Tirole, “Reliability and Competitive Electricity Markets,” RAND Journal of Economics 38, no. 1 (2007): 60–84.
Mark N. Lowry and Matt Makos, Performance-Based Regulation(PBR) forUS ElectricUtilities (Washington, DC: Edison Electric Institute, 2013).
[109] Federal Energy Regulatory Commission, Understanding Wholesale Capacity Markets, accessed January 7, 2026, https://www.ferc.gov/understanding-wholesale-capacity-markets.
United Kingdom RIIO framework SP Energy Networks, This is how we will transform our transmission network, accessed October 17, 2025, https:// www.spenergynetworks.co.uk/userfiles/file/46753_SPEN_ T3_StakeholderSummary.pdf.
[110] Paul L. Joskow, “Regulation of Natural Monopoly,” in HandbookofLawandEconomics, vol. 2, ed. A. Mitchell Polinsky and Steven Shavell (Amsterdam: Elsevier, 2007), 1227–1348.
Paul L. Joskow and Jean Tirole, “Reliability and Competitive Electricity Markets,” RANDJournalofEconomics38, no. 1 (2007): 60–84.
[111] Joseph E. Aldy and Robert N. Stavins, “The Promise and Problems of Pricing Carbon: Theory and Experience,” Journal of Environment & Development 21, no. 2 (2012):152–180.
Joseph E. Aldy and Robert N. Stavins, “Using the Market to Address Climate Change: Insights from Theory and Experience,” Daedalus 141, no. 2 (2012): 45–60.
A. Denny Ellerman, Paul L. Joskow, Richard Schmalensee, Juan-Pablo Montero, and Elizabeth M. Bailey, Markets for Clean Air: The US Acid Rain Program (Cambridge: Cambridge University Press, 2000).
Nathaniel O. Keohane, “Cap and Trade, Rehabilitated,” Review of Environmental Economics and Policy 3, no. 1 (2009): 42–62.
W. David Montgomery, “Markets in Licenses and Efficient Pollution Control Programs,” Journal of Economic Theory 5, no. 3 (1972): 395–418.
[112] Richard Sweeney, Robert Stavins, Robert Stowe, and Gabriel Chan, “What the US Sulphur Dioxide
Cap-and-Trade Programme Can Teach Climate Policy,” VoxEU, accessed November 8, 2025, https://cepr.org/ voxeu/columns/us-sulphur-dioxide-cap-and-trade-programme-and-lessons-climate-policy.
Montgomery, W. David. “Markets in Licenses and Efficient Pollution Control Programs.” Journal of Economic Theory5, no. 3 (1972): 395–418.
[113] Martin L. Weitzman, “Prices vs. Quantities,” Review ofEconomicStudies41, no. 4 (1974): 477–491.
W. David Montgomery, “Markets in Licenses and Efficient Pollution Control Programs,” Journal of Economic Theory5, no. 3 (1972): 395–418.
[114] Richard L. Revesz and Robert N. Stavins, “Environmental Law and Policy,” in Handbook of Law and Economics, vol. 1, ed. A. Mitchell Polinsky and Steven Shavell (Amsterdam: Elsevier, 2007), 499–589.
[115] Paul L. Joskow and Jean Tirole, “Reliability and Competitive Electricity Markets,” RAND Journal of Economics 38, no. 1 (2007): 60–84.
William W. Hogan, “Electricity Scarcity Pricing Through Operating Reserves,” Economics of Energy & Environmental Policy 2, no. 2 (2013): 65–86.
Peter Cramton and Steven Stoft, “The Convergence of Market Designs for Adequate Generating Capacity,” white paper prepared for the Electricity Oversight Board, Center for Energy and Environmental Policy Research, MIT, April 2006, https://ceepr.mit.edu/ wp-content/uploads/2023/02/2006-007.pdf.
[116] George J. Stigler, “The Theory of Economic Regulation,” Bell Journal of Economics and Management Science 2, no. 1 (1971): 3–21.
Sam Peltzman, “Toward a More General Theory of Regulation,” JournalofLawandEconomics19, no. 2 (1976): 211–240.
[117] Howard Pack and Kamal Saggi, “The Case for Industrial Policy: A Critical Survey,” World Bank Research Observer 21, no. 2 (2006): 267–297.
Ken Warwick, BeyondIndustrialPolicy:EmergingIssuesand New Trends, OECD Science, Technology and Industry Policy Papers no. 2 (Paris: OECD Publishing, 2013).
[118] Joseph E. Aldy and Robert N. Stavins, “Using the Market to Address Climate Change: Insights from Theory and Experience,” Daedalus 141, no. 2 (2012): 45–60.
[119] Carolyn Fischer and Richard G. Newell, “Environmental and Technology Policies for Climate Mitigation,” Journal of Environmental Economics and Management 55, no. 2 (2008): 142–162.
[120] Finn E. Kydland and Edward C. Prescott, “Rules Rather Than Discretion: The Inconsistency of Optimal Plans,” Journal of Political Economy 85, no. 3 (1977): 473–491.
[121] Avinash Dixit, The Making of Economic Policy: A Transaction-Cost Politics Perspective (Cambridge, MA: MIT Press, 1996).
[122] US Nuclear Regulatory Commission, Licenses, Certifications, and Approvals for Nuclear Power Plants, 10 C.F.R. pt. 52 (as amended through various years), https://www.nrc. gov/reading-rm/doc-collections/cfr/part052/.
[123] Jessica R. Lovering, Arthur Yip, and Ted Nordhaus, “Historical Construction Costs of Global Nuclear Power Reactors,” Energy Policy 91 (2016): 371–382, https://doi.org/10.1016/j.enpol.2016.01.011.
[124] Jessica R. Lovering, Arthur Yip, and Ted Nordhaus, “Historical Construction Costs of Global Nuclear Power Reactors,” Energy Policy 91 (2016): 371–382, https://doi.org/10.1016/j.enpol.2016.01.011.
US Department of Energy, PathwaystoCommercialLiftoff: AdvancedNuclear(Washington, DC: Office of Clean Energy Demonstrations and Office of Nuclear Energy, 2023–2024), https://www.energy.gov/liftoff/advanced-nuclear.
[125] Charles W. Forsberg, “Molten Salt Reactors for Efficient Nuclear Fuel Cycles,” Progress in Nuclear Energy 77 (2014): 240–246.
[126] Carolyn Fischer and Richard G. Newell, “Environmental and Technology Policies for Climate Mitigation,” Journal of Environmental Economics and Management 55, no. 2 (2008): 142–162.
Ramón A. Alvarez et al., “Assessment of Methane Emissions from the US Oil and Gas Supply Chain,” Science 361, no. 6398 (2018): 186–188.
Adam R. Brandt et al., “Methane Leaks from North American Natural Gas Systems,” Science343, no. 6172 (2014):733–735.
[127] Martin L. Weitzman, “Prices vs. Quantities,” Review of EconomicStudies41, no. 4 (1974): 477–491.
W. David Montgomery, “Markets in Licenses and Efficient Pollution Control Programs,” JournalofEconomicTheory5, no. 3 (1972): 395–418.
[128] Carolyn Fischer and Richard G. Newell, “Environmental and Technology Policies for Climate Mitigation,” Journal of Environmental Economics and Management 55, no. 2 (2008): 142–162.
Joseph E. Aldy and Robert N. Stavins, “The Promise and Problems of Pricing Carbon: Theory and Experience,” Journal of Environment & Development 21, no. 2 (2012):152–180.
[129] Paul L. Joskow and Jean Tirole, “Reliability and Competitive Electricity Markets,” RAND Journal of Economics 38, no. 1 (2007): 60–84.
William W. Hogan, “Electricity Market Design and Efficient Pricing,” The EnergyJournal 43, no. 3 (2022): 3–28.
[130] Nameplate rating or nameplate capacity, refers to the maximum electric output a generator or power plant can produce under specific, ideal test conditions as designated by the manufacturer.
[131] Michael Milligan et al., Capacity Value of Wind and Solar: A Tutorial, NREL Technical Report NREL/TP-5000-62861 (Golden, CO: National Renewable Energy Laboratory, 2017).
[132] William W. Hogan, “Electricity Market Design and Efficient Pricing,” The Energy Journal 43, no. 3 (2022): 3–28.
Peter Cramton and Steven Stoft, “Forward Reliability Markets: Less Risk, Less Market Power, More Efficiency,” Utilities Policy16, no. 3 (2008): 194–201.
[133] Eric Hittinger and Inês L. Azevedo, “Bulk Energy Storage for Grid Decarbonization: Policy Options and Welfare Impacts,” Energy Policy87 (2015): 442–454.
Paul L. Joskow and Jean Tirole, “Reliability and Competitive Electricity Markets,” RANDJournalofEconomics38, no. 1 (2007): 60–84.
[134] Mark N. Lowry and Matt Makos, Performance-Based Regulation (PBR) for US Electric Utilities (Washington, DC: Edison Electric Institute, 2013).
[135] Harold Demsetz, “Why Regulate Utilities?” Journal of Law and Economics 11, no. 1 (1968): 55–65.
[136] Joachim Seel et al., Queued Up: Characteristics of Power Plants Seeking Transmission Interconnection in the United States, Lawrence Berkeley National Laboratory, updated 2023–2024.
National Association of Regulatory Utility Commissioners (NARUC), BestPracticesinDistributionSystemPlanning andHostingCapacity(Washington, DC: NARUC, 2020).
[137] S&P Global Commodity Insights, “Global Data Center Power Demand Expected to Almost Double by 2030,” November 5, 2025, https://www.spglobal.com/energy/en/ news-research/latest-news/electric-power/110525-global-data-center-power-demand-expected-to-almost-double-by-2030.
[138] Benjamin K. Sovacool, “Valuing the Greenhouse Gas Emissions from Nuclear Power: A Critical Survey,” Energy Policy 36, no. 8 (2008): 2940–2953.
US Energy Information Administration, “Electric Power Monthly: Capacity Factors,” EIA, various years.
[139] Nestor A. Sepulveda et al., “The Role of Firm Low-Carbon Electricity Resources in Deep Decarbonization of Power Generation,” Joule2, no. 11 (2018): 2403–2420.
Jesse D. Jenkins et al., “The Benefits of Firm Low-Carbon Electricity Resources in Decarbonizing Power Systems,” Joule2, no. 11 (2018): 2498–2510.
[140] US Department of Energy, PathwaystoCommercial Liftoff: Advanced Nuclear (Washington, DC: Office of Clean Energy Demonstrations and Office of Nuclear Energy, 2023–2024).
White House, “Executive Order on Catalyzing Clean, Firm Power — Advanced Nuclear,” 2025; see also US Nuclear Regulatory Commission, advanced reactor rulemaking dockets.
[141] Casey Crownhart, MIT Technology Review, “Old Nuclear, New Technology,” May 1, 2025, https://www. technologyreview.com/2025/05/01/1115957/old-new-nu-clear-technology/.
NuclearEngineering International, “China Refuels Thorium Reactor Without Shutdown,” https://www.neimagazine. com/news/china-refuels-thorium-reactor-without-shut-down/.
[142] US Department of Energy, Final Environmental Assessment for the Disposition of Surplus Uranium-233, DOE/ EA-1488 (Washington, DC: DOE, 2004), https://www.energy.gov/sites/prod/files/EA-1488-FEA-2004.pdf.
[143] Robert Alvarez, “Uranium-233: Proliferation and Safeguards Implications,” Science & Global Security 21 (2013), https://scienceandglobalsecurity.org/archive/sgs21alvarez. pdf.
[144] Robert Alvarez, “Uranium-233: Proliferation and Safeguards Implications,” Science & Global Security 21 (2013), https://scienceandglobalsecurity.org/archive/sgs21alvarez. pdf.
[145] US Energy Information Administration, Short-Term Energy Outlook: Expected Electricity Demand from Data Centers, EIA, 2024–2025.
A new Harvard-Harris poll shows that 60 percent of voters believe teachers unions should stay out of politics. A differently worded poll would likely reveal even stronger sentiment. Ask voters whether union dues pulled straight from teachers’ paychecks should fund political activity, and the numbers would likely climb higher. The public already senses what many teachers have lived: unions exist more for activism than for academics.
The National Education Association’s annual financial report confirms the imbalance. Less than 10 percent of its funding goes toward actually representing teachers. Meanwhile, more than 98 percent of its political contributions flow to Democrats in every election cycle. Roughly a quarter of teachers identify as conservatives. These educators should not feel compelled to hand over their hard-earned paychecks to causes they oppose.
The 2018 Janus v. AFSCME decision made this choice possible. The Supreme Court ruled that public employees can no longer be forced to pay union dues as a condition of employment. That ruling upheld First Amendment rights and ended compulsory support for political speech. Independents and rational Democrats who simply want to avoid politics and focus on the basics in the classroom now have every reason to opt out as well.
The dam is breaking on the teachers union monopoly. Some 10,000 teachers have joined the independent alternative Teacher Freedom Alliance, as more teachers are realizing they do not have to fund agendas they oppose. With genuine support now available outside traditional unions, teachers can apply pressure and hold union bosses accountable.
The best teachers deserve the freedom to negotiate their own salaries and benefits, including liability insurance. Instead, unions drag all educators down to the level of the lowest common denominator. Seniority rules and rigid contracts protect the weakest performers while holding back those who deliver results for kids.
Teacher salaries have remained flat over the past half-century when adjusted for inflation. Union leaders fight to hire more staff because bigger headcounts mean more leverage, a larger voting bloc, and more dues-paying members. Those decisions come at a direct cost: less money available for the teachers already doing an outstanding job. The system rewards expansion over excellence.
A teacher exodus gives union bosses an incentive to focus on their members instead of political activism. Just as school choice competition incentivizes district officials to up their game, teacher choice gives unions a reason to put classrooms first or risk losing even more support.
The old model forced teachers into one-size-fits-all representation that served union power more than classroom needs. The new, competitive landscape lets educators keep more of their money, secure better personal protections, and stay focused on what matters most: teaching kids.
The teachers union cartel built its power on compulsion. Janus cracked that foundation. The Teacher Freedom Alliance and similar groups are finishing the job by offering a positive alternative. Educators now have real options, real protections, and real freedom.
Teachers deserve better than a system that treats them as dues payers first and professionals second. They deserve the chance to keep their money, protect their careers on their own terms, and stay out of endless political fights. The exodus is underway. The monopoly is crumbling. Freedom is winning.
Note: As of the January 2026 Business Conditions Monthly (BCM) calculation, sufficient data have become available to resume publication of the three BCM diffusion indices. However, because the most recent fully complete run of constituent data dates back to July 2025, there is a discontinuity in the series. As a result, the current readings should be interpreted on a standalone basis — reflecting present conditions — rather than as the latest observation in a continuous trend. Until several additional months of consistent data are released and any revisions are incorporated, caution is warranted in drawing conclusions about momentum or directional changes in underlying economic conditions.
The Leading Indicator stands at 63, indicating a modestly positive tilt in forward-looking measures, with pockets of resilience in expectations-sensitive components. The Roughly Coincident Indicator registers at 42, suggesting somewhat subdued but not contracting real-time activity, consistent with a mixed and uneven current economic environment. Meanwhile, the Lagging Indicator comes in at 33, pointing to relatively soft backward-looking conditions, particularly in areas tied to credit and price dynamics.
LEADING INDICATOR (63)
The Leading Indicator registered 63, with seven of 12 components improving, one unchanged, and four declining.
Gains were led by market-sensitive and forward-looking components. The University of Michigan Consumer Expectations Index rose 7.1 percent, while the Conference Board US Leading Index Stock Prices 500 Common Stocks increased 1.7 percent. The Conference Board US Manufacturers New Orders Nondefense Capital Goods Ex Aircraft advanced 0.5 percent, and US New Privately Owned Housing Units Started by Structure Total SAAR climbed 4.8 percent. United States Heavy Truck Sales SAAR surged 16.7 percent, and Debit Balances rose 0.9 percent. Labor-market forward conditions improved as US Initial Jobless Claims SA declined 6.0 percent (a positive after inversion). Offsetting these gains, the Inventory-to-Sales Ratio Total Business fell 0.7 percent, US Average Weekly Hours All Employees Manufacturing SA declined 0.2 percent, and the Conference Board US Leading Index Manufacturers’ New Orders Consumer Goods and Materials slipped 0.3 percent. The 1-to-10 Year Treasury Yield Spread widened sharply by 70.2 percent but was scored negatively given its inversion. Adjusted Retail and Food Services Sales Total SA was effectively unchanged. Overall, the leading profile reflects strength in expectations, housing, and financial conditions, partially offset by softness in production-related metrics and the yield curve signal.
ROUGHLY COINCIDENT INDICATOR (42)
The Roughly Coincident Indicator came in at 42, with two components improving, one unchanged, and three declining.
On the positive side, Conference Board Coincident Manufacturing and Trade Sales increased 0.8 percent, and US Industrial Production rose 0.3 percent. However, Conference Board Coincident Personal Income Less Transfer Payments declined 0.2 percent, and Conference Board Consumer Confidence Present Situation SA fell 2.1 percent. US Labor Force Participation Rate edged down 0.2 percent, while US Employees on Nonfarm Payrolls Total SA was essentially flat. The balance of evidence suggests modest activity in production and sales, but weakening income growth, participation, and sentiment are weighing on current conditions.
LAGGING INDICATOR (33)
The Lagging Indicator stood at 33, with two components improving and four declining.
US CPI Urban Consumers Less Food and Energy Year over Year NSA rose 0.2 percent, and US Manufacturing and Trade Inventories Total SA increased 0.1 percent. In contrast, US Commercial Paper Placed Top 30 Day Yield declined 5.3 percent, Conference Board US Lagging Commercial and Industrial Loans fell 0.7 percent, and Census Bureau US Private Construction Spending Nonresidential NSA dropped 0.7 percent. The Conference Board US Lagging Average Duration of Unemployment rose 5.6 percent and was scored negatively after inversion. Taken together, the lagging profile points to persistent inflation alongside softening credit conditions, declining construction activity, and lengthening unemployment duration — signals consistent with a cooling economic backdrop.
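The three headline readings follow directly from the component counts reported above. A minimal sketch, assuming the conventional diffusion-index scoring of one point per improving component, half a point per unchanged component, and zero per declining one, with inverted series (such as initial jobless claims or average duration of unemployment) sign-flipped before classification; the exact BCM weighting and rounding conventions are assumptions here:

```python
def diffusion_index(improving: int, unchanged: int, declining: int) -> int:
    """Percent of components improving, counting unchanged as half, rounded half-up.

    Inverted series are assumed to be sign-flipped before being
    classified as improving or declining.
    """
    total = improving + unchanged + declining
    score = 100 * (improving + 0.5 * unchanged) / total
    return int(score + 0.5)  # round half-up; Python's round() would turn 62.5 into 62

# Component counts reported above:
print(diffusion_index(7, 1, 4))  # Leading indicator -> 63
print(diffusion_index(2, 1, 3))  # Roughly coincident -> 42
print(diffusion_index(2, 0, 4))  # Lagging -> 33
```

Note that half-up rounding is needed to turn the Leading Indicator's raw score of 62.5 into the reported 63; whether the BCM itself rounds this way is not stated in the text.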
The January 2026 BCM readings suggest an economy with a modestly positive forward tilt, uneven current activity, and still-soft trailing conditions, but with an important caveat given the break in the data. The Leading Indicator (63) points to resilience in expectations, housing, and market-sensitive components, indicating some forward momentum despite lingering weaknesses in production metrics and an adverse yield curve signal. In contrast, the Roughly Coincident Indicator (42) reflects a mixed present, where gains in production and sales are offset by declining income growth, sliding labor force participation, and soft consumer sentiment, leaving real-time activity subdued. The Lagging Indicator (33) reinforces a softer backdrop, with tightening credit conditions, weakening construction, and longer unemployment durations outweighing persistent inflation pressures. Taken as a whole, the configuration is consistent with tentative forward strength layered over fragile current conditions and weakening underlying fundamentals. But because these figures follow a several-month gap in complete data, they should be interpreted cautiously as a snapshot of current conditions rather than as evidence of a sustained trend or turning point.
DISCUSSION, February–March 2026
The latest inflation data present a conflicted but still informative picture, with early signs of disinflation at the consumer level offset by mounting upstream price pressures and a firmer underlying trend in services. February’s Consumer Price Index (CPI) came in softer than typical seasonal patterns would suggest, with headline inflation rising 0.27 percent month-over-month and holding at 2.4 percent year-over-year, while core CPI moderated to 0.22 percent. Much of the cooling reflects easing in heavily weighted categories such as rents and vehicles, and a broad decline in the share of items experiencing rapid price increases. Pockets of firmness remain, however, including apparel, discretionary services, and select goods categories influenced by rising input costs — particularly metals and memory chips — hinting at emerging supply-side pressures. At the same time, January’s Personal Consumption Expenditures (PCE) data showed a hotter underlying inflation trend, with core PCE rising 0.36 percent on the month and 3.1 percent year-over-year, driven largely by health care and other service categories. Consumer spending continues to rotate away from goods and toward services, even as income growth remains modest and the saving rate rises to 4.5 percent, suggesting cautious but still resilient household behavior.
Upstream, February’s Producer Price Index (PPI) reinforces the notion that cost pressures are building beneath the surface. Headline PPI rose a stronger-than-expected 0.7 percent on the month and accelerated to 3.4 percent year-over-year, reflecting higher transportation, warehousing, and especially metals costs — now up more than 15 percent since tariff measures took effect last year. While some components feeding into core PCE, such as health care and airfares, may exert less upward pressure in the near term, the broader trend in input costs points to continued pipeline inflation. Overlaying this dynamic is the emerging energy shock tied to the Iran conflict, which is likely to push headline CPI higher in the near term, potentially toward 3 percent, complicating the Federal Reserve’s policy path. Taken together, the data suggest that while consumer-level inflation has shown tentative signs of cooling, underlying service-sector strength and rising input costs — combined with geopolitical risks — leave the inflation outlook uncertain and increasingly sensitive to both supply shocks and policy responses.
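For readers cross-checking these figures, a month-over-month print can be translated into an annualized pace by compounding over twelve months. The sketch below is generic arithmetic, not the BLS or BEA methodology:

```python
def annualize_monthly(monthly_pct: float) -> float:
    """Compound a month-over-month percent change into an annualized rate."""
    return ((1 + monthly_pct / 100) ** 12 - 1) * 100

# The monthly prints quoted above imply annualized paces of roughly:
print(round(annualize_monthly(0.27), 1))  # headline CPI: ~3.3 percent
print(round(annualize_monthly(0.36), 1))  # core PCE: ~4.4 percent
```

This makes concrete why the 0.36 percent core PCE print reads as hot: sustained, it would imply inflation well above the 3.1 percent year-over-year figure.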
Recent US labor market data point to a clear cooling in hiring, though the signal is muddied by temporary disruptions and measurement issues. February nonfarm payrolls fell by 92,000 — well below expectations — and the three-month average slowed to just 5,700 jobs, indicating that hiring momentum has nearly stalled and is likely running below breakeven. Some of the weakness reflects one-off factors, including a major health-care strike, adverse winter weather, and payback from unusually strong January conditions, while revisions to business formation estimates may have further amplified the decline. Even so, broader indicators suggest genuine softening: the unemployment rate rose to 4.44 percent, driven by job losers, while rising inflows into unemployment and declining outflows point to a less dynamic labor market. Wage growth remains relatively firm at 0.4 percent, but appears uneven and partly distorted by sector-specific effects, and aggregate income growth has softened as declining payrolls offset earnings gains.
At the same time, complementary data suggest the labor market is not collapsing but instead settling into a looser equilibrium. Job openings rose unexpectedly to 6.95 million in January, and private payroll data showed a modest gain of 63,000 jobs in February, with hiring concentrated in health care, education, and smaller firms. However, the ratio of vacancies to unemployed workers remains below one, indicating that slack persists and that labor demand is no longer outpacing supply. The quits rate has stabilized and layoffs have edged lower, reinforcing the view of reduced churn rather than renewed strength. Taken together, the data describe a labor market that is cooling and stabilizing at a lower level of activity — no longer a clear source of inflationary pressure, but not yet signaling a sharp deterioration — leaving policymakers inclined toward caution but still biased toward eventual easing.
The February ISM surveys paint a picture of continued expansion across both manufacturing and services, though with important differences in momentum and inflation signals. The ISM Manufacturing PMI edged down slightly to 52.4 but came in above expectations, indicating ongoing — if somewhat moderating — growth in the industrial sector. Demand remains solid, with new orders still firmly in expansion territory and supported by low customer inventories and improving backlogs, suggesting continued production ahead. The headline was bolstered by gains in employment, inventories, and slower supplier deliveries, the latter signaling capacity strain. However, a sharp rise in the prices-paid index — driven in part by higher metals costs and tariff-related pressures — points to renewed input cost inflation that may concern policymakers even as output stabilizes.
In contrast, the ISM Services PMI delivered a stronger and more broadly positive signal, jumping to 56.1, its highest level since mid-2022. The increase was driven by a surge in new orders, accelerating production, and improving employment, indicating robust demand across the dominant services sector. Backlogs and export orders also strengthened, while supply chains showed modest improvement. Importantly, price pressures became less pervasive, with the services prices index declining, offering some relief on the inflation front. Taken together, the two reports suggest a US economy entering the year with improving activity and sentiment, led by services strength and supported by steady manufacturing demand, but with a divergence in inflation dynamics — easing in services while intensifying in goods — that complicates the broader outlook.
Recent sentiment data across consumers and small businesses point to a cautiously stable but increasingly fragile outlook, with geopolitical and energy developments emerging as key risks. The University of Michigan’s preliminary consumer sentiment index slipped modestly to 55.5 in March, as a decline in expectations more than offset a slight improvement in current conditions, with the drop largely concentrated after the onset of military action in Iran. Inflation expectations remained relatively anchored — unchanged at 3.4 percent for one year and easing slightly to 3.2 percent over five years — but rising gasoline prices, up roughly 27 percent this month to their highest levels since late 2023, pose a growing threat to real incomes and future confidence. Small-business sentiment remains just above its long-run average, with the NFIB index at 98.8, supported by improved recent sales and profit trends, but forward-looking indicators have softened, including a notable drop in expected sales and only modest plans for capital spending. While fewer firms are currently raising prices, a significant share still intends to do so, reflecting persistent cost pressures. Across businesses, taxes remain the top concern, followed by labor quality, inflation, and weakening sales, with rising competition and financing concerns also in the mix. Taken together, the data suggest that while sentiment has not yet deteriorated sharply, it is increasingly vulnerable to energy-driven cost shocks and geopolitical uncertainty, which could weigh on both consumer demand and business confidence in the months ahead.
Retail sales data and the latest consumer surveys together suggest a consumption picture that is shifting from temporary softness toward more structural pressure. Retail sales dipped 0.2 percent in January, largely due to winter weather disruptions that curtailed in-person activity, especially dining out, while boosting online purchases, which helped cushion the decline. Core measures excluding autos and gasoline remained modestly positive, indicating that underlying demand had not collapsed and could rebound as weather effects fade. However, that near-term stabilization is now being challenged by a sharp energy-driven shock tied to the Iran conflict, with gasoline prices jumping from about $2.94 to $3.63 per gallon and oil rising roughly 40 percent. This surge has rapidly fed into consumer expectations, with anticipated gas price increases and broader inflation expectations moving higher, while perceptions of personal finances have deteriorated meaningfully.
More concerning for the consumption outlook is the breadth of this shift in sentiment. Higher-income households — previously a key driver of spending strength — have also pulled back sharply, raising the risk that aggregate consumption may weaken more broadly. At the same time, rising fuel costs are acting as a direct drag on discretionary spending, effectively reallocating household budgets toward necessities while amplifying price pressures in areas like transportation and travel. Coupled with growing concerns about job security and declining confidence in future income, these developments suggest that while early-year retail weakness may have been partly weather-related, the outlook for consumer spending is increasingly constrained by a combination of eroding purchasing power and heightened uncertainty.
Productivity and industrial output data together point to an economy that is still generating growth efficiently, even as sectoral performance remains uneven. Labor productivity rose at a solid 2.8 percent annualized pace in the fourth quarter, with prior quarters revised higher due to lower measured hours worked, reinforcing a favorable underlying trend. While unit labor costs increased at a 2.8 percent annualized rate in the quarter — driven by stronger compensation — the four-quarter trend remains subdued at just 1.3 percent, indicating that labor costs are not exerting meaningful inflationary pressure. This combination — steady productivity gains alongside contained cost growth — suggests that output can continue to expand without forcing a policy response from the Federal Reserve on labor-driven inflation grounds.
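The relationship invoked here is the standard identity that unit labor cost growth equals compensation growth deflated by productivity growth. A sketch under that assumption; the implied compensation figure is derived from the identity, not reported in the source:

```python
def ulc_growth(compensation_pct: float, productivity_pct: float) -> float:
    """Unit labor cost growth: hourly compensation deflated by output per hour."""
    return ((1 + compensation_pct / 100) / (1 + productivity_pct / 100) - 1) * 100

def implied_compensation(ulc_pct: float, productivity_pct: float) -> float:
    """Invert the identity to back out compensation growth."""
    return ((1 + ulc_pct / 100) * (1 + productivity_pct / 100) - 1) * 100

# With productivity up 2.8% annualized and ULC also up 2.8%, hourly
# compensation must have grown roughly 5.7% annualized in the quarter:
print(round(implied_compensation(2.8, 2.8), 1))  # -> 5.7
```

The same identity explains why the subdued 1.3 percent four-quarter ULC trend signals contained labor cost pressure: productivity gains are absorbing most of the compensation growth.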
Industrial production data echo this theme of modest but uneven expansion. Output rose 0.2 percent in February, supported by gains in manufacturing — particularly transportation equipment and business investment-related categories — while consumer goods production was flat, with durable gains offset by declines in nondurables. Utilities dragged on the headline amid weak natural gas output, while mining and energy extraction showed early signs of strengthening, likely to become more important given geopolitical supply disruptions. Taken together, the data suggest a production environment characterized by resilience in capital-intensive and strategic sectors, softer performance in consumer-facing goods, and an overall growth path that remains intact but far from broad-based.
The early March Beige Book points to an economy that is losing some momentum but largely maintaining its footing, with activity expanding at a slight to moderate pace in seven of twelve Federal Reserve districts, down from eight previously, and a growing share reporting flat or declining conditions. Consumer behavior remains uneven, with heightened price sensitivity among lower-income households weighing on discretionary spending, including weaker auto sales. At the same time, manufacturing conditions improved, supported in part by demand tied to data centers and energy infrastructure, highlighting a divergence between capital-intensive sectors and more consumer-facing areas. Employment was broadly stable, though firms cited softer demand, rising input costs, and uncertainty as constraints on hiring. Price pressures remained moderate, with tariffs contributing to elevated costs across most districts, but firms expressed expectations for some easing ahead — an encouraging signal that is now complicated by renewed geopolitical risks. While business sentiment has become more optimistic, the report largely predates the escalation of the Iran conflict and related policy uncertainty, suggesting that current conditions may understate the degree of volatility and downside risk facing the economy in the near term.
The current policy mix reflects a growing tension between a still-restrictive monetary stance and a fiscal impulse that is increasingly being offset by external shocks. At its March meeting, the Federal Reserve held rates steady at 3.50 percent to 3.75 percent but signaled a more hawkish underlying posture, raising its estimates of long-run growth and the neutral rate — partly on expectations of AI-driven productivity gains — while also revising inflation forecasts higher. Although the median projection still includes one rate cut this year, the upward shift in the dot plot and higher assumed policy baseline suggest a reduced willingness to ease quickly, especially amid lingering inflation risks and geopolitical uncertainty. This leaves policy effectively tighter than it may appear on the surface, particularly as the Fed balances stronger projected growth against rising inflation pressures and an unsettled global backdrop.
At the same time, the anticipated boost from fiscal policy is being eroded in real time. The One Big Beautiful Bill Act was expected to support consumption through increased tax refunds — roughly $650 to $800 per household, contributing about 0.4 percentage points to GDP — but that impulse is now being offset by a sharp rise in energy prices tied to the Iran conflict. With oil prices already above the estimated breakeven level of roughly $83 per barrel, higher gasoline costs are effectively neutralizing the benefit of those refunds, particularly for lower-income households that spend a larger share of income on fuel. In effect, the policy mix is shifting from one of modest support to one of partial offset, where fiscal stimulus is diluted by energy-driven real income losses and monetary policy remains cautious. Together, this creates a more constrained near-term outlook, in which growth depends increasingly on higher-income consumers and favorable financial conditions, both of which are vulnerable to ongoing geopolitical and market volatility.
The US economy is showing modest but uneven growth, with services and productivity providing support while manufacturing, consumption, and hiring lose some momentum. Inflation appears to be cooling at the consumer level, but rising input costs and an energy shock tied to the Iran conflict are reintroducing upward pressure and complicating the outlook. The labor market is clearly softening, while consumer spending faces increasing strain from higher gasoline prices, weakening sentiment, and a potential pullback among higher-income households. Looking ahead, an increasingly cautious Federal Reserve, possibly caught in a policy trap, and a fiscal boost steadily eroded by energy costs leave the expansion intact but fragile, with growth more vulnerable to geopolitical risks and shifts in confidence.