Fluctuations in food prices are so commonplace that the entire category is excluded from the Federal Reserve’s preferred measure of inflation. From war and weather to fertilizer and labor, hundreds of unseen influences shape the prices of goods long before they reach grocery shelves. But a sustained surge in one American staple has everybody buzzing. 

Beef prices are up 65 percent since April 2020. Ground beef has surged to $6.70 per pound as a prolonged supply shock has left production unable to keep up with consistent demand. Relief will not come soon. Industry experts say we are only partway through an unprecedented price run-up that began in 2019 and that, if demand stays constant, could push ground beef past $10 per pound by fall. Drought in grazing lands and the cyclical shrinking of the national cattle herd have reduced supply to 75-year lows. While individual beef cattle are getting heavier and meatier with grain-feeding, the actual number of animals hasn’t grown at the same clip as Americans’ dietary demand.

Timing the conception of calves for two-year growth cycles requires farmers to anticipate when feed and forage will be affordable, in hopes that the price for finished beef will cover costs. And anyone looking to expand their breeding herd is paying the same sky-high price as beef buyers. If prices drop even a little, the profitability margin disappears.

“Instead of spurring ranchers to breed heifers, high prices are incentivizing producers to sell them to pay debts,” Narciso Perez, a cattle broker in Albuquerque, New Mexico, told The Guardian newspaper. Some tariff rollbacks and carveouts have allowed more beef into the US, but ranchers aren’t happy about the increased competition.

High prices aren’t a problem, necessarily. They signal consumers to conserve and producers to expand output. Recent pushes encouraging people to eat more protein, combined with the continued buying power of the meat-eating middle class, keep demand high even as prices climb. The American Farm Bureau Federation says shortages will take years to resolve. 

Are Burgers Destined to Become a Luxury Item? 

High beef prices don’t just crowd out burgers at your summer cookout. The signals of upstream scarcity are fundamentally changing the margins everywhere from fast food to fine dining. 

Soaring prices have forced high-end steakhouses to adjust menu pricing, with many premium cuts surpassing $100 even as margins drop. More moderate steak spots, though, like Longhorn Steakhouse, have reported increased demand as the price difference between steaks cooked at home and steaks at low-margin mid-tier dining establishments has all but disappeared. The average price for uncooked beef steaks in the grocery store is now about $12.74 per pound — a record high, federal data show. 

Hamburger Helper dinner mixes, long a staple for strapped families, now prominently suggest on the packaging: “Try with hot dogs instead of ground beef.” 

The pasta-and-cheese-sauce mixture soared in popularity in the 1970s, when inflation and beef prices last took their toll on Americans’ weeknight dinner options. The New York Times reported a 15-percent surge in sales of Hamburger Helper in late 2025, suggesting cost-conscious substitution of inferior goods is once again in vogue. The cost of making the meal with a pound of hamburger now easily exceeds $10, above the USDA estimates for a basic family meal. Even families buying in bulk and cooking at home have seen grocery costs take a larger share of take-home pay. The box price of Hamburger Helper has risen since 2020, but the pricey part of the meal is the protein: swapping ground beef for a pound of Oscar Mayer beef franks lowers the total only to $8.50, while introducing more sugar, salt, and preservatives.
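
The swap’s arithmetic is easy to reconstruct from the figures above. The $6.70 ground-beef price and the $8.50 hot-dog total come from the text; the $10.20 beef-meal total and the implied frank price are assumptions used only to make the comparison concrete:

```python
ground_beef = 6.70        # per-pound price cited above
beef_meal_total = 10.20   # assumed total ("easily exceeds $10")
fixings = beef_meal_total - ground_beef  # box, milk, butter, etc.

franks = 5.00             # assumed price of a pound of beef franks
hot_dog_meal_total = fixings + franks
print(round(hot_dog_meal_total, 2))  # prints 8.5, matching the article
```

Either way, the protein is roughly half to two-thirds of the plate’s cost, which is why substituting it moves the total far more than any change in the box price.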

Even with these drawbacks, people are clearly willing to make the switch from higher-quality cuts of beef to lower-quality ones, and from beef to less-expensive proteins. Sam Kelbanov writes for Morning Brew, in an article titled “Beef Is Getting Bougie”: 

The likes of Raising Cane’s and Dave’s Hot Chicken have had an expansion bonanza in recent years, while burger-centric value chains like Burger King are struggling with declining margins. Meanwhile, McDonald’s recently beefed up its chicken offerings by adding sauce-lathered and seasoned McCrispy Strips to its menu.

That’s textbook substitution effect, and it isn’t just for burgers. Sales of flank steak and skirt steak are rising as buyers opt for less-expensive, tougher cuts than the traditional tenderloin or ribeye. Substitution of inferior goods is a common adaptation for families under price strain, and shifting the ingredients of your burrito or chili serves the same goal as replacing ground beef with hot dogs in Hamburger Helper.

Between Barn and Bun

While the cost of cattle is the largest determinant of beef’s soaring price at the grocery store, dozens of other inputs play supporting roles. 

Transportation costs are significant. Like other groceries, beef is moved around the country overwhelmingly by trucks. Energy shocks related to the conflict in the Middle East exacerbate the expense of moving high-spoilage foods. 

Fertilizer is also shipped through the Strait of Hormuz, and without affordable soil augmentation, growing the volume of grain required to sustain cattle herds becomes more costly. 

Capital equipment required to keep cattle ranches running is increasingly expensive, partly due to the high cost of importing steel and other metals. Tariffs of 50 percent apply to materials coming in from China and Canada, some of our most prolific trading partners.  

Interest on agricultural and operating loans, which many farmers and ranchers use to sustain their overhead or update equipment, has increased along with other rates in the post-pandemic correction.

Regulatory Uncertainty

In case that weren’t enough individual factors to keep track of (not to mention, say, the costs of veterinary care or grazing rights), ranchers face the threat of shifting political priorities and constant compliance headaches. Beef production intersects with many societal priorities — environmental protection, animal welfare, labor rights, food safety — that require oversight. But that puts the industry permanently in the crosshairs of unelected administrative agencies like EPA, USDA, FDA, and their various iterations of “supplemental guidance.”

Agriculture Secretary Brooke Rollins told a Fox Business reporter that the Biden Administration’s climate-protection policies constituted a “war on cattle,” and supply would take years to replenish. Unfortunately, a change of leadership hasn’t entirely lessened Washington’s interventionist appetites. 

Just days ago, the Justice Department announced it would investigate alleged antitrust violations by large meatpacking companies. President Trump, continuing his habit of conducting official business on Truth Social, called for an inquiry into possible collusion, specifying “Majority Foreign Owned Meat Packers, who artificially inflate prices, and jeopardize the security of our Nation’s food supply” (sic). Similar civil and criminal probes have recently targeted poultry farmers, egg producers, and fertilizer companies. While uncovering truly anticompetitive practices is important, given the tremendous difficulty of accurately predicting and pricing these biological products in volatile markets, investigators are likely to generate as many pain points as they resolve. 

Regulatory uncertainty discourages herd expansion by making investment and innovation riskier. Multi-agency regulatory infrastructure drags down production, generating compliance costs, malinvestment, and deadweight losses. 

Burger Boom and Bust

Economy-wide, though, demand for beef continues to climb, even as one in four adults have cut back on meat for ethical, health, or financial reasons. Substitution is happening at the margins, but not enough to offset total demand. Beef processing remains at around half of its full capacity. 

Prices are doing their job: signaling scarcity and forcing substitution. But when supply takes years to respond and return on investment is uncertain, those signals translate into prolonged tradeoffs rather than rapid price relief. The market will adjust — but not before your summer burger budget does.

The idea that artificial intelligence could usher in a “post-money” world — and that such a world would also render firms obsolete — rests on a misunderstanding of what firms are and why they exist. Even if, for the sake of argument, we accept the highly implausible premise that money would disappear beneath an AI/robotics explosion of superabundance, it does not follow that firms would disappear with it. Firms are not artifacts or by-products of monetary exchange; they are organizational responses to coordination problems, uncertainty, and the costs of markets.

The classic insight comes from British economist Ronald Coase, whose theory of the firm begins not with money, but with transaction costs. Costs do not necessarily connote prices. Markets are not frictionless arenas in which individuals seamlessly contract for every task. Searching for counterparties, negotiating, enforcing agreements, securing immediacy, and adapting to unforeseen changes all impose costs. Firms arise precisely to economize on these costs by internalizing certain transactions. Instead of navigating every step of production through the price system, firms substitute managerial direction for repeated market exchange.

Nothing in that logic depends on money per se. One can imagine a world in which prices are denominated in some non-monetary unit, or — in the scenario that Musk and others like him are envisioning — a world in which advanced AI systems coordinate resource allocation without explicit prices. But the underlying coordination problem remains. Complex production — whether building aircraft, running cloud infrastructure, or developing pharmaceuticals — requires an alignment of hundreds or thousands of interdependent tasks. Even in a hypothetical AI-managed system, there must be boundaries within which decisions are made, hierarchies to resolve conflicts, and mechanisms designed to allocate effort. Those are the defining features of firms.

Going a bit further, the elimination of money would, if anything, increase the need for firms (or firm-like) structures. Prices are compressed information signals, conveying relative scarcities and preferences. Without them, the information burden shifts elsewhere. AI might assist in processing vast datasets, but it does not eliminate the need to define objectives, contend with tradeoffs, or assign accountability. Someone, or something, must decide whether a given unit of labor or material is better used in healthcare, energy, or transportation. These are not simply technical questions: they involve prioritization, constraints, and considerable opportunity costs. Firm structures provide the locus for making such decisions in a structured manner.

Moreover, incentives do not vanish with money. Even in a non-monetary economy, individuals will inevitably face tradeoffs in time, effort, status, access, or other scarce benefits. Systems will need to motivate participation, discourage shirking, and reward performance. Compensation may take the form of wages, privileges, reputation, access to scarce resources, or combinations thereof; the fundamental problem of aligning individual incentives with organizational goals persists. Firms, properly understood, are the institutional solution to this problem.

Perhaps the largest issue involves the unavoidable nature of risk and uncertainty. Production unfolds in time, along a term structure, and often requires upfront investment in projects whose outcomes are uncertain. Firms internalize, bundle, assess, and manage risks, deciding which projects to undertake and how to allocate resources among them. Even if AI could forecast outcomes with greater accuracy than human managers, uncertainty would not disappear. The future remains inherently unknowable along countless dimensions, partly because present attempts to manage those unknowns themselves reshape what comes next. That is particularly the case where innovation is concerned. Organizational structures that can absorb, distribute, and respond to risk would still be critical, whether or not the form they take is familiar.

The notion that “no money means no firms” conflates the medium of exchange function of money with the structure of production. Money (in addition to having other roles) facilitates exchange across decentralized actors; firms exist precisely because not all coordination is best handled through decentralized exchange. They are islands of planned coordination and networks of contracts, arbitraging between functions more efficiently undertaken outside versus within their notional borders, whether that system is market-based, AI-mediated, or something else entirely.

Many similar predictions were made early in the internet era, and more recently amid the rise of DAOs (decentralized autonomous organizations). (If the markets for those tokens are any indication, nothing of the sort is expected any time soon.) 

Nothing about either of those technologies, nor about AI, abolishes the economic problems that give rise to firms; at most, those problems shift. New problems may indeed arise. Coordination, incentives, uncertainty, and transaction costs do not disappear in a world of abundance or advanced technology. They simply take new forms. And as long as those problems exist, so too will the need for organizations that solve them. They will be firms by another name, perhaps — but firms nevertheless.

The Federal Open Market Committee is widely expected to leave its policy rate unchanged at this week’s meeting. The CME Group puts the odds that the FOMC will continue to target the federal funds rate within the 3.5 to 3.75 percent range at 99.5 percent. But the near certainty regarding this week’s decision masks the growing problem Fed officials face. 

The rise in energy prices tied to the conflict with Iran is the sort of negative supply shock that makes monetary policy especially difficult. It puts upward pressure on inflation even as it threatens to slow growth and weaken employment.

That puts the Federal Reserve in an awkward position. Under its dual mandate, the Fed is supposed to promote both price stability and maximum employment. Ordinarily, Fed officials have the luxury of focusing on one of those objectives at a time. When inflation is increasing, the Fed can raise rates to cool demand. When growth slows and unemployment rises, it can cut rates to support spending and hiring. An adverse supply shock is different because it simultaneously threatens both goals.

What the Rules Say

The difficulty posed by adverse supply shocks makes it all the more important to seek guidance from monetary rules. The latest Monetary Rules Report from AIER’s Sound Money Project shows that the Fed’s current policy rate already sits near the lower end of the recommended range. 

The Taylor Rule remains the most familiar place to start. It says that the Fed should set interest rates higher when inflation runs above target and lower when economic activity or employment fall below sustainable levels. Using the most recent data available, the original version of the rule points to a federal funds rate of 4.66 percent. A modified version that minimizes interest rate volatility and accounts for forecasts of future inflation implies a policy rate of 3.99 percent. If anything, the Taylor Rule suggests Fed officials ought to consider an increase in the federal funds rate target. 
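
The rule’s arithmetic can be sketched in a few lines. The coefficients below are Taylor’s original 1993 values; the inflation and output-gap inputs are illustrative assumptions, not the figures behind the report’s 4.66 percent estimate:

```python
# Original Taylor (1993) rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*gap,
# where r* is the neutral real rate and pi* the inflation target.
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Prescribed nominal policy rate, all inputs in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Illustrative inputs: 2.5 percent inflation, a half-point negative output gap
print(taylor_rule(inflation=2.5, output_gap=-0.5))  # prints 4.5
```

The point here is only the mechanics: above-target inflation pushes the prescribed rate up while a weak economy pulls it down, which is exactly the tension a supply shock creates.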

Rules based on nominal gross domestic product, or NGDP, suggest somewhat lower rates, with an NGDP level rule at 3.93 percent and an NGDP growth rule at 3.53 percent. These estimates are in line with the current stance of policy and support the expected decision to hold steady at 3.5–3.75 percent. 

How Rules Account for Supply Shocks

In normal circumstances, both types of rules provide a useful way to translate incoming data into a policy rate prescription. But supply shocks make the Taylor Rule harder to interpret, because they create conflicting signals. Higher energy prices put upward pressure on inflation, which points toward tighter policy. At the same time, they raise production costs and squeeze household budgets, which can weaken output and employment, pointing toward easier policy. As a result, the Taylor Rule gets pulled in opposite directions.

That tension has also shown up in recent commentary from policymakers. Some of the more dovish voices inside the Fed and around the administration — who have been quite eager to lower rates — have admitted that any cuts are more likely to come later in the year, after the current Middle East conflict subsides. That shift reflects how difficult it is to formulate policy when a negative supply shock strains both sides of the Fed’s mandate at once.

A Better Guide During Supply Shocks

This is where rules based on nominal spending become especially useful. NGDP is simply the total dollar value of spending in the economy. Its growth rate combines inflation and real output growth into a single measure. That makes it an especially useful guide when supply shocks hit. Instead of forcing policymakers to weigh inflation and growth separately, NGDP rules ask a broader question: what is happening to total spending?

The NGDP growth rule, for instance, suggests that monetary policymakers aim for annual 4 percent growth in nominal spending. The 4 percent benchmark reflects the fact that the Fed targets a 2 percent inflation rate and annual output growth tends to average 2 percent. It effectively wraps both sides of the Fed’s dual mandate into a single statistic. Importantly, in the context of a negative supply shock, it also accommodates offsetting movements in those two objectives. For instance, if spiking energy prices push inflation to 3 percent and pull real growth down to 1 percent, overall NGDP growth would remain at 4 percent and the Fed would be justified in keeping rates steady — despite inflation moving temporarily above target. 
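
The offsetting logic in that example is simple enough to verify directly; this is a minimal sketch of the accounting approximation, not the report’s estimation method:

```python
# NGDP growth is (to a first-order approximation) inflation plus real growth.
def ngdp_growth(inflation, real_growth):
    """Nominal spending growth in percent."""
    return inflation + real_growth

# Normal times: 2 percent inflation, 2 percent real growth
print(ngdp_growth(2.0, 2.0))  # prints 4.0 — the benchmark
# Supply shock: inflation spikes to 3, real growth falls to 1
print(ngdp_growth(3.0, 1.0))  # prints 4.0 — total spending unchanged
```

Because the two movements cancel, an NGDP-targeting policymaker reads the shock as requiring no change in stance, where a Taylor-style rule would be pulled in both directions at once.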

In other words, if oil prices rise because of geopolitical conflict, inflation may move higher even though overall spending is not accelerating in a way that calls for tighter monetary policy. At the same time, weaker real growth alone does not necessarily mean the Fed should cut, so long as nominal spending remains reasonably stable. Looking at NGDP helps policymakers avoid overreacting to only one dimension of the shock.

In the April report, the NGDP rules are broadly consistent with leaving policy unchanged. The NGDP growth rule, in particular, suggests that current policy is roughly on target, with the most recent data showing NGDP growth of 4.2 percent — very close to the rule’s 4 percent benchmark. That figure is backward-looking, so it is reasonable to worry that it may not fully capture recent developments tied to the conflict in the Middle East. Even so, more recent inflation data and forecasts of real output growth still point to nominal spending growth of around 4 percent. That reinforces the case for keeping policy where it is. 

What This Means for the Fed

Supply shocks create some of the most challenging problems for monetary policymakers because they blur the line between inflation risk and economic weakness. That is what makes the current moment so uncomfortable for Fed officials. But discomfort need not mean confusion. The leading rules still offer a useful signal: when overall nominal spending remains close to trend, policymakers should be careful not to overreact to either dimension of a supply disturbance. Fed officials can remain confident by keeping policy within the range offered by the leading monetary policy rules.

A few weeks ago, social media skeptics received their best news in years.

In KGM v. Meta, a jury found Meta and Google negligent for their role in fueling a youth mental health crisis. Now, six million dollars in damages is basically meaningless to companies that gross hundreds of billions in revenue annually. But the reason this case has gotten so much media attention is what it might represent. Some have compared the case to the beginning of litigation against Big Tobacco last century, which culminated in a $206 billion master settlement with more than 40 states.

In this case, however, the jury got it wrong. It concluded three things:

  • Instagram and YouTube were designed in ways that encouraged uncontrollable use and addictive behaviors.
  • The companies failed to adequately warn users, especially minors, about the risks.
  • The design of their platforms was a considerable factor in causing the plaintiff’s mental health problems.

All three of these things could be true, but neither Meta nor Google should be held liable for any of them. Unlike prior cases involving social media, KGM treated YouTube and Instagram as fundamentally defective products. The central question wasn’t whether malicious users could misuse these platforms, but whether the platforms themselves posed inherent risks. In general, online companies aren’t legally accountable for what users post due to Section 230 protections — Meta, for instance, wouldn’t be held liable for someone using its products to incite violence. In this case, though, Judge Carolyn Kuhl ruled that platform design elements — like algorithm-driven feeds, autoplaying videos, and push notifications — could be challenged. 

In other words, Instagram and YouTube should be held liable because they’re addictive, and too effective at providing content users want.

In a ruling denying summary judgment, Judge Kuhl wrote: “The fact that a design feature like ‘infinite scroll’ impelled a user to continue to consume content that proved harmful does not mean that there can be no liability for harm arising from the design feature itself.” In other words, Meta and Google can be held responsible for designing a product that fulfills a consumer desire. Such an argument is dubious. Product innovation exists precisely to meet the demands of consumers — and that’s a good thing.

If such a conclusion holds, where could it not apply? Oreos are delicious — should Mondelez International be forced to make their product less appealing because a “design feature” of Oreos causes repeated consumption of Oreos, with negative health outcomes? Should TV shows that end on a cliffhanger be banned because such a “design feature” creates an addictive cycle, causing the viewer to continue watching? In excess, many other products besides social media can become addictive, but it’s not the government’s job to single out certain products or consumer desires as addictive. 

And then there’s the First Amendment problem. Even assuming that social media is addictive in a way analogous to tobacco, the two differ in a key respect. Social media companies are being held liable for their speech, which is protected by the First Amendment. As Erwin Chemerinsky, Dean of the UC Berkeley School of Law, put it:

The plaintiffs in these lawsuits argued that companies design algorithms that are tailored to individual users to keep them hooked. But algorithms are themselves speech, and there is no reason to treat this speech differently from the code that encourages people to keep playing video games.

Or, as Supreme Court Justice Elena Kagan wrote in Moody v. NetChoice, “the First Amendment … does not go on leave when social media [is] involved.” And while social media is almost certainly a drain on society — decreasing attention spans, increasing depression, and spreading misinformation — neither restricting First Amendment-protected speech nor regulating the free market is the answer.

Forcing social media companies to restrict access won’t necessarily lead to meaningfully lower social media usage by teenagers. For one, even the most extreme option — simply banning social media for teenagers — is easily circumvented. Teens have already cleared visual age checks. As one Australian teenager put it, “I scrunched my face up to get more wrinkles, so I looked older, and it worked!” Perhaps not a high-tech workaround, but it nevertheless worked, and many other techniques do, too.

And even if the current mainstream social media companies — Meta, Google, TikTok, etc. — were forced to make their products less addictive, that would just open the door for competitors to replace them. And then what? Regulate those products until they’re less addictive, too? At some point, the government will just be playing First Amendment Whac-A-Mole. 

Ultimately, this is not a problem for the courts — nor even legislatures — but rather for civil society. Regulating trillion-dollar companies out of existence won’t fix the underlying problem. If social media were intrinsically detrimental, in the way that cigarettes cause a chemical addiction and subsequent health problems, then almost every teenager who uses social media would struggle with addiction and see some demonstrable negative impact on their life. But that’s not the case. About one in five teens say social media has hurt their mental health. Another study found that social media usage beyond three hours a day increased internalizing problems (like anxiety/depression) by about 60 to 80 percent. Neither of these numbers is great. But they also reveal that a significant percentage of teenagers who use social media are perfectly fine. 

So what explains how one teen could use social media and neither become addicted nor have their mental health suffer, and another teen could experience the opposite? Very likely having access to a robust civil society — family, activities, community organizations, religious groups, and other social supports. Social media accounts for about one percent of the variation in life satisfaction. By contrast, family situations explain about a third of life satisfaction for young adults. Running to government for legislation to fix our minor woes allows these important community bonds to atrophy. An important aspect of the liberal political order is the recognition that voluntary, robust civil society can play a much more effective role in addressing these societal problems than can even well-intentioned meddling by the government. Social media is no exception.

President Donald Trump is discovering what Joe Biden learned the hard way: voters don’t easily forgive price increases. Despite inflation cooling from its peak, two-thirds of Americans disapprove of how Trump is handling inflation, according to an April Economist/YouGov poll.

The Republican Party’s victory lap over no tax on tips and no tax on overtime rings hollow, considering persistent public frustration with the cost of living. It doesn’t help that Trump’s tariff war and the war in Iran are further fueling rising prices.

And voter frustration isn’t just about recent price changes. It’s also about the lasting damage from the inflation surge of 2021–2022, which pushed the overall price level permanently higher.

There’s one cure, however, that Washington continues to miss. Inflation is increasingly driven by unsustainable budget policy, and politicians on both sides of the aisle keep pouring gasoline on the fiscal fire.

When debt grows persistently faster than the economy, it eventually forces difficult choices. Investors begin to question how the government will meet its obligations. There are only three answers: raise taxes, cut spending, or allow inflation to erode the real value of debt. When the first two options are repeatedly postponed, inflation becomes the likely path of least resistance.
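The third option can be made concrete with a little arithmetic. Here is a minimal sketch using hypothetical figures; the function name and inflation rates are illustrative, not drawn from the article:

```python
# Stylized sketch (hypothetical numbers): sustained inflation erodes
# the real value of a fixed nominal debt, acting as an implicit default.
def real_debt_value(nominal_debt: float, inflation_rate: float, years: int) -> float:
    """Inflation-adjusted value of a fixed nominal debt after `years`."""
    return nominal_debt / (1 + inflation_rate) ** years

# A $100 debt held for a decade under different inflation rates:
for pi in (0.02, 0.04, 0.06):
    remaining = real_debt_value(100, pi, 10)
    print(f"{pi:.0%} inflation for 10 years leaves "
          f"{remaining:.0f}% of the debt's real burden")
```

At 2 percent inflation the real burden falls by roughly a fifth over a decade; at 6 percent, by nearly half, which is why inflating the debt away becomes the path of least resistance.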

This is the risk of so-called fiscal dominance. Even a formally independent Federal Reserve cannot ignore the consequences of excessive borrowing. If interest costs rise rapidly and financial markets come under stress, the Fed will face pressure to lower borrowing costs at the risk of fueling inflation.

In that world, debates about whether a Fed chair is politically independent miss the bigger picture. The real danger is that fiscal policy leaves the central bank with no good options.

Recent experience offers a clear warning. The inflation surge earlier this decade was not primarily caused by pandemic-related supply disruptions. Nor does the corporate greed theory hold any water. It was mostly driven by unprecedented deficit-financed stimulus spending combined with accommodative monetary policy.

In short, the government spent too much, and to enable this excessive government spending, the Fed printed too much money.

Bringing inflation back down required interest rate hikes, raising borrowing costs across the economy. That painful adjustment underscores a key lesson: restoring credibility after inflation takes hold is far more costly than maintaining discipline in the first place.

Yet Washington is not only failing to change course but doubling down.

Despite campaign promises to rein in spending with efforts like the Department of Government Efficiency (DOGE) and vows by President Trump to balance the budget, the Trump administration and Congress have continued to expand the federal debt.

From extending and expanding the Trump tax cuts without commensurate spending reductions to doing an end-run around the appropriations process to boost defense and immigration enforcement, Republicans have repeatedly sidestepped budget rules to pass deficit-financed, partisan measures.

Interest costs on the national debt now exceed federal spending on national defense. That could soon change, however, as President Trump pushes to reverse the imbalance — not by lowering interest rates, but by increasing defense spending.

Republicans aren’t the only ones to blame. Democrats under Biden also abused the budget process and executive powers to enact green energy subsidies, forgive student loan debt, and accelerate the growth of food stamp spending.

Meanwhile, neither party is willing to confront the unchecked growth of entitlement programs. Social Security, Medicare, and Medicaid are expanding faster than the economy and faster than federal revenues. Demographic shifts, including an aging population and lower birth rates, mean fewer workers are supporting more beneficiaries.

The bigger problem is poor program design. Social Security benefits grow with wages, which typically outpace inflation, and federal health care programs are open-ended entitlements devoid of market incentives to control price pressures.

Absent meaningful reform, the conclusion is unavoidable: inflation will rise to reduce the fiscal burden of the debt.

Sound fiscal policy is the only answer. When Congress credibly stabilizes debt, it anchors inflation expectations and reduces the risk premium investors demand. Lower long-term interest rates ease borrowing costs across the economy and slow the growth of federal interest payments.

Congress should adopt a credible and enforceable fiscal target to stabilize debt relative to the economy. Its members should stop the misuse of emergency spending provisions to bypass budget constraints. And most importantly, they must reform the entitlement programs driving long-term spending growth.

That means refocusing Social Security on preventing poverty in old age while adjusting benefits and eligibility to reflect higher earners’ ability to save on their own and longer life expectancies. It means slowing Medicare’s growth through stronger budget constraints and cost discipline, best achieved by giving beneficiaries more control over how their subsidies are spent. And it means restructuring Medicaid to limit federal exposure and improve accountability, with states bearing a larger share of costs.

None of these steps are politically easy. An independent fiscal commission could help break the partisan deadlock and advance these reforms.

Trump’s declining approval ratings on inflation are a warning sign. Voters know something is wrong. Until policymakers confront the underlying source of the problem — unsustainable federal spending — inflation will remain a recurring threat, and the Federal Reserve’s independence will erode under the weight of the nation’s debt.

This year marks the 250th anniversary of both the Declaration of Independence and Adam Smith’s The Wealth of Nations, and that is no mere coincidence. The Enlightenment ideals of individual liberty and voluntary exchange that inspired America’s founders also laid the foundation of modern economics. Yet two and a half centuries later, persistent policy blunders — protectionist trade barriers, ballooning national debt, and stubborn inflation — reveal how far we have strayed from the Scotsman’s insights, endangering the principles upon which our republic was founded.

It is tempting to blame these failures solely on politicians. But economists share responsibility. Returning to The Wealth of Nations, one is struck by how little progress has been made in educating the public about sound principles, a task that must be renewed with every generation. While our internal scholarship has grown more sophisticated, the core policy debates have remained largely unchanged since 1776. Smith discredited mercantilism’s fixation on the balance of trade, deeming it “absurd” and a flawed foundation for trade restrictions. He also observed that accumulated public debt is seldom repaid honestly; governments instead print money and erode purchasing power. These debates sound strikingly contemporary.

After 250 years of theoretical and empirical advances, including 99 Nobel laureates, why do governments keep repeating the same mistakes? As Deirdre McCloskey has noted, the field of economics suffers from Smithian specialization without Smithian trade: narrow expertise unaccompanied by broad intellectual exchange. In a 1976 bicentennial assessment, Terence W. Hutchison criticized the profession for narrowing its scope, assuming a stable social and political backdrop so as not to disrupt isolated economic analysis. This approach excels at precision on narrow questions but neglects the wider terrain of political economy, driving a wedge between academic research and policy relevance. Smith’s “system of natural liberty” demanded the comprehensive foundations he provided, not fragmented silos.

This internal refinement has come at the expense of teaching basic principles effectively. Smith contrasted the lively instruction at Glasgow, where professors’ pay depended partly on student fees, with the uninspired, often absent lectures at Oxford, where compensation was fixed regardless of enrollment. Incentives shape behavior, even among economists. Modern academia rewards narrow research over conveying fundamentals in the classroom or engaging the public, leading to a widening gap between specialized technical research and actual debates that shape policy. Novelty, not timeless wisdom, drives top-journal publications. Delivering a mundane walkthrough of textbooks or PowerPoint decks passes for “teaching” in far too many classrooms.

Graduate programs tend to emphasize exceptions to Smith’s core ideas, however tenuous, over the principles themselves. As Bryan Caplan has noted, graduate students start their programs already steeped in market-failure arguments, and two additional years of mathematical theory presenting “dozens of esoteric ways for markets to fail” will only reinforce this worldview. The approach neglects the principle that when individuals are free to pursue their own betterment, beneficial social coordination and order emerge spontaneously. The system of natural liberty, regarded as common sense at our nation’s founding, reflects how order arises without central design if government is limited to “peace, low taxes, and a tolerable administration of justice.” Market failures are the exception, not the rule.

Focusing economists’ training primarily on market failure is like training physicists only to probe exceptions to natural laws while ignoring the universe’s consistent regularities. It encourages siloed experts to recommend “minor” interventions, as if executed by a host of benevolent bureaucrats, which aggregate into a system of control entrusted to fallible politicians, not angels. 

Hutchison closed his 1976 remarks with hope that by 2026 economists might reclaim Smith’s broad foundations. Fifty years on, the drift has only deepened, underscoring the urgent need for introspection. If not economists themselves, who else will uphold and popularize genuine economic principles and make the case for laissez-faire in the spirit of Adam Smith?

In this shared 250th anniversary of 1776, economists should reclaim their Smithian inheritance: teach the timeless truths of a system of natural liberty, echoing the Enlightenment ideals that birthed both our nation and modern economics.

For nearly a century, economists struggled with the famous diamond-water paradox: water, though essential to life, is cheap, while diamonds, mere luxuries, command a high price.

The resolution, articulated by Carl Menger, was that value is not inherent in goods themselves but comes from the importance individuals place upon them at the margin. Prices, as such, reflect marginal valuation conditioned by scarcity, not total usefulness.

A similar misunderstanding applies to today’s debate over a “living wage.” Advocates are often quite explicit in their demand. The National Employment Law Project, for example, insists that “every job should pay a living wage.” The moral appeal is clear. Economically, however, such an assertion assumes what needs to be proven: that every job creates enough value to garner such a wage. 

Wages Are Prices

Let us begin with a simple point: wages are prices. Just as the price of bread reflects supply and demand, so do wages for labor in particular occupations. They signal how scarce certain skills are and how much value workers add at the margin. 

As Friedrich Hayek explained, the price system is “a mechanism for communicating information,” and wages are a part of that system. They are not arbitrary. They communicate where labor is most urgently needed and where it is less highly valued. 

A Thought Experiment

Imagine someone, call him James, who chooses to manufacture horse-drawn carriages in the modern United States. Outside of niche markets, like Jackson Square in New Orleans, demand for such a good is minimal. He is producing something very few people want, and the economic value he is generating is therefore quite low. Accordingly, the wage that his line of work can sustain will also be low.

James, however, is not discouraged. He insists that he deserves a “living wage” simply by virtue of being employed.

The absurdity of the demand should be apparent. It is not a question of the dignity of the work. Let us assume his craftsmanship is top-notch, and he is obviously not engaged in the production of anything morally objectionable. Yet the value James creates is so limited relative to other uses of labor and capital that, economically speaking, he is engaged not in production but in consumption.

Paying him a high wage, then, would require diverting resources away from more valuable activities. In effect, this would mean asking others to subsidize James’s “production” that consumers have already overwhelmingly revealed to be of little value. If James wishes to continue this work for personal satisfaction, he is free to do so. But it does not follow that others are obligated to sustain it.

The Living Wage Problem

The problem here is that the living wage argument implicitly assumes that wages should be determined by the needs of the workers rather than by the value of what they produce. 

As Bernie Sanders has said repeatedly, “a job should lift you out of poverty, not keep you in it.” The sentiment is understandable, but it does not follow that every particular job, in every place and moment, can and should bear a wage set by need rather than productivity, and do so indefinitely. Employment does not exist in the abstract. Jobs are specific — an auto mechanic in Acworth, Georgia in 2026, not simply a “job in the United States.” If local demand for that service is limited, the wage will reflect that reality, and it ought to.

Once wages are detached from productivity, economic coordination begins to break down. If employers are required to pay wages above the value generated by certain jobs, several outcomes tend to follow:

  • Some jobs disappear entirely
  • Businesses substitute capital for labor
  • Firms reduce hiring or restructure production
  • Opportunities for low-skill or inexperienced workers decline

As economist Thomas Sowell bluntly put it, “the real minimum wage is always zero.” When the cost of hiring exceeds the value a worker can produce, employers simply will not hire. This, of course, does not eliminate the need for income, but it does eliminate the opportunity to earn it.

None of this is to deny that people should wish for wages sufficient to support themselves and their families. In fact, economic progress engineered by capitalism over the last two centuries has made that wish increasingly attainable. That progress, though, followed a clear pattern: higher productivity leads to higher value, which leads to higher wages. 

Policies that try to mandate higher wages in spite of productivity levels undermine the very mechanism generating rising standards of living. The issue lies in demanding that every conceivable job, regardless of its contribution to society, ought to sustain a person and his family. 

Wages Reflect Reality

Wages, like any other price, reflect the economic realities of a particular time and place. If wages appear low, that is not an injustice (assuming they result from market, not government, forces); it is a signal that the activity currently generates limited value relative to other possible uses of labor.

The lesson needed today is the same as the lesson from the diamond-water paradox. Prices do not reflect how important something feels. Instead, they reflect scarcity, marginal value, and human choices. Wages are no exception.

The ACLU is raising concerns about the abuse of automated license plate reader (ALPR) technology in the wake of a disconcerting story out of Kansas. The technology, which has been described as a tool for mass surveillance, was used by police to track a man who had published an opinion piece critical of the police department in a local paper, and who was subsequently suspected of putting up anti-ICE posters around town a few days before the op-ed was published.

Canyen Ashworth published his guest column in the Kansas City Star on September 30 of last year. A resident of Lenexa — a suburb of Kansas City — Ashworth argued that the city and police department were not doing enough to protect the rights of residents when it came to ICE raids and related immigration issues.

Later that day, as KCUR investigative journalist Sam Zeff later discovered, then-police chief Dawn Layman sent the column to a department crime analyst, suggesting she was considering a criminal investigation into Ashworth.

Some time later — exactly when and why remains unclear — Ashworth was also linked to the “Paper Hanger” case. On September 26, an unidentified suspect had put up four anti-ICE posters around town featuring the words “Remember when we killed fascists.” The posters were promptly taken down and a criminal investigation was opened, ostensibly because the glue was damaging city property.

Based on Ashworth’s alleged connection to the “Paper Hanger” case, an allegation that was suspiciously convenient for those who took issue with his column, a BOLO (“be on the lookout”) email was sent to all patrol officers, dispatchers, and commanders on October 21. The email identified Ashworth as a suspect in the “Paper Hanger” case and featured some blurry images of a hooded suspect along with an image of Ashworth’s car.

It turns out that the police department had been using their ALPR technology to track Ashworth’s movements. “He doesn’t get out much; he last hit a week ago today and appeared to come from McKeevers,” wrote the crime analyst who penned the email, referring to a local market.

The analyst went on to say that “This is MYOC,” that is, “make your own case.” There was no arrest warrant for Ashworth, so police could only stop him if they could come up with a reason.

In the end, Ashworth was never stopped or questioned. He only found out that he had been a suspect, and that his car had been tracked, when Zeff told him what he had uncovered.

“The first emotion that comes to mind is jarring for sure,” Ashworth said upon learning what happened. “And then I think after that comes being pissed off.”

Once Zeff began contacting experts about his findings, which were published on February 2, it became clear that Ashworth was hardly the only one who felt this way.

‘A Rare Public Example’ of Abuse

Micah Kubic, the ACLU of Kansas Executive Director, has put into words what many are no doubt thinking about this story. “The idea of putting out, the equivalent of, an all-points bulletin, BOLO, on an individual for putting up posters is both a rejection of the First Amendment, and a really ridiculous misuse of resources,” said Kubic. “The idea that you can essentially just make something up to throw against the wall and see if it sticks to be able to go after someone, is a really chilling and dangerous thing.”

First Amendment attorney Bernie Rhodes expressed particular concern about the former police chief’s abuse of the ALPR system. “She’s using the city’s license plate readers not to combat a wave of armed robberies, but to track down the everyday movements of an everyday citizen who dared to write the Kansas City Star and express their opinion,” he said.

Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy, and Technology Project, echoed these concerns. “This is a rare public example of exactly the kind of abuse that we’ve long warned against when it comes to mass-surveillance systems like license plate readers,” he writes.

He goes on to say that this story is “a particularly clear example of the abusive dynamic that mass-surveillance systems always end up falling into.” The dynamic he describes follows a simple three-step process:

Step 1: Authorities identify a target they dislike but have no evidence against.

Step 2: They aim sophisticated surveillance technologies at the targeted person.

Step 3: They try to catch the target doing something they can be charged with, no matter how petty.

Stanley’s comparison is apt. For Step 1, Ashworth wrote an article that made him a target of the local police department. In Step 2, the police weaponized their license plate reader technology against him, tracking his movements. Ostensibly this was only about the posters and had nothing to do with the article, but it looks awfully suspicious. And even if it was genuinely only about the posters, does anyone seriously believe that the reason for the criminal investigation was property damage from glue? “Posters about lost pets and community events were generally not removed,” Zeff notes. So even the posters narrative seems to follow the three steps, except in that case the ire of the police department was initially raised over the message of the posters rather than of the article.

Once the target is being spied on, Step 3 is for the police to find an excuse to arrest him. This is represented in our story by the BOLO email and the “make your own case” rhetoric, which is perhaps extra chilling because they’ve even made an acronym out of it — MYOC — suggesting this is a common practice in the Lenexa Police Department.

No doubt those who have found themselves under arrest by this department would be curious to learn whether their experience was the result of a “make your own case” initiative.

But the broader point is this. Even if we assume the absolute best in this story, even if we assume no foul play, no malicious intent, and no wrongdoing, these events still highlight the immense potential for the abuse of these kinds of surveillance technologies.

At the risk of making myself a target of these three steps, it’s worth reminding everyone that the police are not always saints, and that giving them the power to monitor our daily lives does not necessarily result in the limited, judicious, and well-intended surveillance that is always promised with such sincerity.

Watching the Watchmen in an Age of Mass Surveillance

That those in positions of authority cannot always be trusted to wield their power virtuously is hardly a new idea. As far back as the second century, the Roman poet Juvenal famously asked “Who will watch the watchmen?” But that question becomes more significant in proportion to the power of the watchmen. When modern surveillance technology gives police jaw-dropping powers to monitor our every move, the concern about whether they can be trusted to do the right thing with that power becomes considerably more pressing. This is no longer the second century, nor is it 1920 — the year the ACLU was founded. The world we now inhabit is a world of automated license plate readers, of targeted advertisements for something you merely had a conversation about 12 hours earlier, and now of artificial intelligence. As such, institutional limits on surveillance powers are more important than ever.

The rejoinder will be that these powers help the police to combat crime. By limiting their ability to spy on us, we are limiting their ability to keep us safe. This is an understandable concern, but it overlooks the crucial fact that we need to be kept safe, not only from common criminals, but also from the police themselves. The view that more surveillance power always means more safety is born from the naïve assumption that the police are always interested in protecting the people they watch over, and never in harming them.

Regrettably, this is not the world we live in.

The trade-off therefore needs to be reframed. The choice we are presented with is not really about safety versus privacy. It is about being kept safe from common criminals versus being kept safe from those in authority.

Navigating this trade-off is never easy, but when stories like the Canyen Ashworth case come out, they are a sobering reminder that the need to be protected from the people who are supposed to be our protectors is all too real.

When Polonius tells Laertes in Hamlet, “Neither a borrower nor a lender be,” perhaps Shakespeare was speaking from family experience. In the early 1570s, his father, John Shakespeare, was accused in court several times of lending money at usurious rates. In modern terms, he settled one case and was fined in another. It is unclear whether these cases were connected to the decline of Shakespeare Sr.’s business, but he managed to get into debt himself, echoing Polonius’ warning. Under the laws at the time, usury, the practice of charging interest on debts, was called “a vice most odious and detestable.”

Yet by the time Adam Smith wrote his Inquiry into the Nature and Causes of the Wealth of Nations two hundred years later, credit was an established element of commercial life. Smith devoted an entire chapter to “Of Stock Lent at Interest.” He noted that the borrower could either consume the loan or, more productively, employ it as capital for enterprise. In the intervening two hundred years, credit had become an economic institution.

The gap between these two pillars of British literature was filled by the development of English commerce from its medieval form to something we would recognize today. Part of that development was the realization that time does not always cooperate with our financial undertakings. Costs arrive today when income is expected tomorrow. Bridging that gap requires both credit and interest. Commerce worked that out, but explaining why required the development of economics.

Credit did not arise across the Western world because its societies were uniquely greedy or exploitative, nor because bankers somehow imposed a mechanism to extract rent from happily self-sufficient communities. It arose because advanced commercial life requires its existence. That moral hero, the entrepreneur, must often assemble labor and capital before a single unit is sold. Credit bridges the interval.

That is also why credit appears repeatedly even where kings, priests, or populist politicians have tried to suppress it. It appears in many different forms. Sometimes it is a straightforward loan. Sometimes it is trade credit, deferred payment, or discounting. Sometimes it is tailored to the borrower, sometimes it is offered on similar terms to everyone. The underlying function is always the same: providing funds to those who need them, when they need them.

Yet those kings, priests, and populist politicians keep advancing similar objections: that credit is simply greed, or exploitation. Virtually every Western society has had laws against usury on the books, and many still do. What explains how credit continually overcomes this opposition?

The old case against usury was not completely irrational; it was often a moral response to real abuse. Many anti-usury laws grew out of a world where borrowing was not about business investment but relief from distress. A poor man borrowed only because he had suffered a crop failure, a medical emergency, or other personal tragedy. To profit from another man’s desperation seemed predatory. Medieval theologians considered money to be “barren,” as only a medium of exchange. St. Thomas Aquinas argued that charging interest is intrinsically unjust because it demands a double payment: the return of the principal and a price for its use.

This doctrine weakened when commercial societies discovered, first in practice and then in theory (as is so often the case), that money in a market economy is not, in fact, economically barren. Command over money is valuable because it gives access to opportunities, allows one to bear uncertainty, and frees one from waiting. Western society evolved from condemning all interest to distinguishing legitimate interest from exploitative usury, thereby more realistically reflecting time, risk, and opportunity cost.

Yet old beliefs linger. Even Adam Smith thought that interest should be capped to benefit the prudent, which led to correspondence with Jeremy Bentham, who argued that rates should be able to float. Bentham’s argument was one that still has validity today: adults should be free to contract on whatever terms they choose and attempts to suppress high-rate lending will only block risky but potentially productive enterprise.

The debate between Smith and Bentham represented a turning point. The West in general gradually moved from asking whether any payment for the use of money was illicit to asking instead what counts as extortionate or abusive, thereby separating the existence of credit from the abuse of credit, a distinction that matters. A society can condemn fraud, coercion, and rapacious terms; this does not mean that all interest is predation.

Commercial credit may have triumphed over usury laws, but a new critique soon emerged. Karl Marx approached credit from another direction, treating it as part of the capitalist system of exploitation. In Das Kapital, he argued that credit allowed the capitalist to spend money he hadn’t earned yet, thereby disconnecting expectation from reality and serving as the means by which the capitalist steals the value of the worker’s production. This in turn allowed companies to continue producing goods no one would be interested in purchasing, resulting in overproduction, all based on a mirage of “fictitious capital” that made the world look wealthier than it was. This, Marx held, was what led to financial crises.

It was Eugen von Böhm-Bawerk, an Austrian economist writing at the turn of the twentieth century, who refuted Marx’s analysis in Capital and Interest and other treatises. He recognized that human beings have time preference: people, and indeed whole societies, prefer jam today over jam tomorrow. So, far from stealing from or exploiting the worker, the capitalist is actually paying him a premium by giving him wages now for output that might not be sold for some time. Credit allows the capitalist to do this.

The wages Marx viewed as low are in fact discounted: the worker gets $100 today instead of a potential $110 in a year. The difference, a 10 percent rate, is the price of receiving money immediately, satisfying the worker’s time preference. Again, von Böhm-Bawerk shows us that credit allows this to happen.
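The arithmetic behind this can be sketched in a few lines. The numbers match the illustrative figures above; the function name is my own invention:

```python
# Sketch of the time-preference arithmetic (illustrative numbers):
# a wage paid today is the discounted value of output sold later.
def present_value(future_amount: float, interest_rate: float, years: int = 1) -> float:
    """Value today of money received `years` from now, given a rate of time preference."""
    return future_amount / (1 + interest_rate) ** years

# $110 of output sold a year from now, at a 10 percent rate,
# supports a wage of roughly $100 paid to the worker today.
wage_today = present_value(110, 0.10)
print(round(wage_today, 2))  # prints 100.0
```

The worker is not being cheated out of the $10; he is being paid for not having to wait.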

As for the argument that credit facilitates crises, von Böhm-Bawerk’s theory of value reveals that the failure of companies to sell produced goods is not a consequence of the existence of credit but of a miscalculation of subjective value by the company. By articulating a theory of subjective value rather than labor value, von Böhm-Bawerk demolishes Marx’s interpretation of credit.

Thus, a world without credit would not be a world without exploitation in Marx’s sense. It would be a poorer world with fewer enterprises, fewer homes, fewer durable goods, and far less social mobility.

Credit is therefore at the center of production rather than at its margins. It should not be viewed as a device to gratify impatient consumers, but as a way of coordinating stages of production that unfold over time. Interest is the price attached to the use of present goods in a world where future goods are discounted and productive processes take time.

Schumpeter added another important insight. In his Theory of Economic Development, credit is how the entrepreneur acquires command over resources needed to carry out new combinations. As the economist David Henderson succinctly puts it in his “ten pillars of economic wisdom,” the only way to create wealth is to move resources from a lower-valued to a higher-valued use. Innovation requires withdrawing labor and materials from established uses and redirecting them toward untried purposes, which cannot usually be financed out of existing cash reserves. The entrepreneur therefore needs access to purchasing power before she realizes success. In Schumpeter’s framework, bank credit is what allows the innovator to bid resources away from old uses and bring something new into existence.

So, credit actually helps reorder the economy for the better, financing the experiment before the market has validated it. Schumpeter therefore treated credit as integral to entrepreneurship, innovation, and economic progress. A society that wants to increase wealth while disdaining credit is like the man who wants to win the lottery but refuses to buy a ticket.

Human beings live through time, which means their wants, incomes, obligations, and plans do not line up neatly. Risk is inescapable, but credit is what makes civilization durable under those conditions. Families can survive shocks, firms can organize production, entrepreneurs can innovate, and savers can grow wealth by providing the capital that helps families, firms, and entrepreneurs.

We can continue to argue about what rules should govern lending, what terms are abusive, and what legal framework best disciplines fraud and excess (although we might do well to lean towards Bentham rather than Smith in this one limited case). Credit exists wherever people need to juggle the cost of effort now with the delayed benefit of later rewards. In other words, it is credit that allows us to build anything more durable than a day’s subsistence, whatever the experience of Shakespeare’s dad.

The recent rescission of the US Environmental Protection Agency (EPA) Greenhouse Gas Endangerment Finding and Motor Vehicle Greenhouse Gas Emission Standards Under the Clean Air Act marks one of the largest deregulation efforts in a generation. Among the 571,672 comments the EPA received on this issue last September, my AIER colleagues, Drs. Julia Cartwight, Paul Mueller, and Ryan Yonk, and I joined the State Financial Officers Foundation (SFOF) and thirteen state financial officers in submitting a public comment in support of the rescission.

The Endangerment Finding was rescinded in February 2026 by President Trump and EPA Administrator Lee Zeldin. This action stands to help make life more affordable, reduce regulatory uncertainty, and rein in an expansive administrative state. 

What Was the EPA’s Endangerment Finding?  

In Massachusetts v. EPA (2007), the Supreme Court ruled that the EPA was allowed to regulate greenhouse gases because they qualify as air pollutants under the Clean Air Act. Following this ruling and a failed attempt to pass a climate bill through Congress, President Obama leaned on executive rulemaking.

From his exercise of executive authority came the EPA’s Endangerment Finding, which declared that six greenhouse gases endanger public health and welfare and thus require regulation. The finding was initially the basis for vehicle emission regulation, but it soon spread beyond that, resulting in costly burdens for Americans.

One hurdle, however, was that the Clean Air Act was designed to regulate industry, not the specific gases themselves. As Judge Glock of the Manhattan Institute notes, “The act required federal permits for any source that emitted more than 100 tons per year of an air pollutant. By this measure, some families would need permits.”

Despite some Supreme Court rulings limiting the EPA, the Endangerment Finding led to regulations that made life less affordable for the average American. Regulations under the Biden Administration EPA alone cost an estimated $1 trillion. Additionally, as we discuss in our comment, these regulations encourage a “ratchet effect,” where the government (in this case, the executive branch and the EPA in particular) expands in size and/or scope of authority due to perceived crises and rarely fully recedes. This, ultimately, decreases accountability. 

In the end, the Endangerment Finding enabled the creation of stringent rules but failed to clearly demonstrate the social benefits of individual policies proportional to their economic costs. The regulations stemming from the finding made life less affordable, but the benefits of said regulations were much more difficult to prove.

The Benefits of Rescission 

Our comment focused on three key areas: the economic benefits of rescission, the dangers of an expansive administrative state, and the effects of the potential rescission on federalism. 

The economic benefits of the rescission stem from the rollback of the burdensome regulations discussed in the previous section. Repealing these regulations could help lower the costs of energy production for both producers and consumers. Regulatory reform would also reduce the policy uncertainty created by vague statutes, lowering costs further still.

Additionally, the rescission helps return rulemaking power to the legislative branch. Returning rulemaking powers to the elected legislative branch can improve transparency and accountability.

Furthermore, the rescission improves the balance between the federal and state governments. While Congress has primacy in climate policy, states have greater autonomy to apply local knowledge to environmental and energy challenges. This is especially important given two Supreme Court rulings: Loper Bright Enterprises v. Raimondo (2024), which compels courts to exercise independent judicial judgment when interpreting ambiguous statutes rather than defaulting to agency readings, and West Virginia v. EPA (2022), which held that agencies must rely on Congress to grant them authority to regulate issues of “economic and political significance” and which allows states to set the enforceable rules governing existing energy sources.

What Comes Next? 

The rescission can help shift environmental and energy policy away from command-and-control regulations and toward institutional frameworks that rely on price signals, property rights, and competition. Markets function as a discovery process where entrepreneurs can test alternative technologies, production methods, and energy sources under conditions of profit and loss. When prices reflect relative scarcity, producers are driven to economize on fuel, improve efficiency, and innovate cleaner production techniques to reduce costs.  

Additionally, by returning rulemaking to Congress and discretion to the states, the federal government can focus on sustaining “competitive, ‘market preserving federalism’” while states are free to innovate without inhibiting free entry and exit between states. Successful institutional arrangements will scale, and failed approaches will exit, as Americans vote with their dollars and their feet. Environmental stewardship emerges through clearly defined property rights, liability rules, and localized governance mechanisms that address identifiable harms.

By allowing market processes to work, people, not government, can drive lower cost abatement strategies while preserving energy reliability and consumer choice. 

Read the full public comment here.