The Federal Open Market Committee is widely expected to leave its policy rate unchanged at this week’s meeting. The CME Group puts the odds that the FOMC will continue to target the federal funds rate within the 3.5 to 3.75 percent range at 99.5 percent. But the near certainty regarding this week’s decision masks the growing problem Fed officials face.

The rise in energy prices tied to the conflict with Iran is the sort of negative supply shock that makes monetary policy especially difficult. It puts upward pressure on inflation even as it threatens to slow growth and weaken employment.

That puts the Federal Reserve in an awkward position. Under its dual mandate, the Fed is supposed to promote both price stability and maximum employment. Ordinarily, Fed officials have the luxury of focusing on one of those objectives at a time. When inflation is increasing, the Fed can raise rates to cool demand. When growth slows and unemployment rises, it can cut rates to support spending and hiring. An adverse supply shock is different because it simultaneously threatens both goals.

What the Rules Say

The difficulty posed by adverse supply shocks makes it all the more important to seek guidance from monetary rules. The latest Monetary Rules Report from AIER’s Sound Money Project shows that the Fed’s current policy rate already sits near the lower end of the recommended range. 

The Taylor Rule remains the most familiar place to start. It says that the Fed should set interest rates higher when inflation runs above target and lower when economic activity or employment fall below sustainable levels. Using the most recent data available, the original version of the rule points to a federal funds rate of 4.66 percent. A modified version that minimizes interest rate volatility and accounts for forecasts of future inflation implies a policy rate of 3.99 percent. If anything, the Taylor Rule suggests Fed officials ought to consider an increase in the federal funds rate target. 
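The original rule’s prescription is simple enough to compute directly: the policy rate equals an assumed equilibrium real rate plus inflation, plus half the inflation gap and half the output gap. The sketch below is purely illustrative; the inputs are hypothetical placeholders chosen to roughly reproduce the 4.66 percent figure, not the actual data behind the report’s estimate.

```python
def taylor_rule(inflation, target_inflation, output_gap, real_rate=2.0):
    """Original (1993) Taylor Rule:
    r = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap)."""
    return (real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# Hypothetical inputs (not the report's actual data):
rate = taylor_rule(inflation=2.44, target_inflation=2.0, output_gap=0.0)
print(f"Prescribed federal funds rate: {rate:.2f} percent")
```

With inflation a bit above target and a closed output gap, the rule prescribes a rate well above the current 3.5 to 3.75 percent range, which is why it leans toward tightening.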

Rules based on nominal gross domestic product, or NGDP, suggest somewhat lower rates, with an NGDP level rule at 3.93 percent and an NGDP growth rule at 3.53 percent. These estimates are in line with the current stance of policy and support the expected decision to hold steady at 3.5–3.75 percent. 

How Rules Account for Supply Shocks

In normal circumstances, both types of rules provide a useful way to translate incoming data into a policy rate prescription. But supply shocks make the Taylor Rule harder to interpret, because they create conflicting signals. Higher energy prices put upward pressure on inflation, which points toward tighter policy. At the same time, they raise production costs and squeeze household budgets, which can weaken output and employment, pointing toward easier policy. As a result, the Taylor Rule gets pulled in opposite directions.

That tension has also shown up in recent commentary from policymakers. Some of the more dovish voices inside the Fed and around the administration — who have been quite eager to lower rates — have admitted that any cuts are more likely to come later in the year, after the current Middle East conflict subsides. That shift reflects how difficult it is to formulate policy when a negative supply shock strains both sides of the Fed’s mandate at once.

A Better Guide During Supply Shocks

This is where rules based on nominal spending become especially useful. NGDP is simply the total dollar value of spending in the economy. Its growth rate combines inflation and real output growth into a single measure. That makes it an especially useful guide when supply shocks hit. Instead of forcing policymakers to weigh inflation and growth separately, NGDP rules ask a broader question: what is happening to total spending?

The NGDP growth rule, for instance, suggests that monetary policymakers aim for annual 4 percent growth in nominal spending. The 4 percent benchmark reflects the fact that the Fed targets a 2 percent inflation rate and annual output growth tends to average 2 percent. It effectively wraps both sides of the Fed’s dual mandate into a single statistic. Importantly, in the context of a negative supply shock, it also accommodates offsetting movements in those two objectives. If spiking energy prices push inflation to 3 percent and pull real growth down to 1 percent, for example, overall NGDP growth would remain at 4 percent and the Fed would be justified in keeping rates steady — despite inflation moving temporarily above target.

In other words, if oil prices rise because of geopolitical conflict, inflation may move higher even though overall spending is not accelerating in a way that calls for tighter monetary policy. At the same time, weaker real growth alone does not necessarily mean the Fed should cut, so long as nominal spending remains reasonably stable. Looking at NGDP helps policymakers avoid overreacting to only one dimension of the shock.
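The logic of the NGDP growth rule reduces to simple arithmetic: nominal spending growth is approximately inflation plus real output growth, so offsetting movements in the two leave the total unchanged. A minimal sketch of the supply-shock example above:

```python
def ngdp_growth(inflation, real_growth):
    """Approximate nominal spending growth as inflation plus real growth."""
    return inflation + real_growth

# Normal times: 2 percent inflation, 2 percent real growth
assert ngdp_growth(2.0, 2.0) == 4.0

# Negative supply shock: inflation rises to 3, real growth falls to 1,
# yet total spending stays on the 4 percent path
assert ngdp_growth(3.0, 1.0) == 4.0
```

A Taylor-type rule, by contrast, would register the same shock as two conflicting signals rather than one neutral one.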

In the April report, the NGDP rules are broadly consistent with leaving policy unchanged. The NGDP growth rule, in particular, suggests that current policy is roughly on target, with the most recent data showing NGDP growth of 4.2 percent — very close to the rule’s 4 percent benchmark. That figure is backward-looking, so it is reasonable to worry that it may not fully capture recent developments tied to the conflict in the Middle East. Even so, more recent inflation data and forecasts of real output growth still point to nominal spending growth of around 4 percent. That reinforces the case for keeping policy where it is. 

What This Means for the Fed

Supply shocks create some of the most challenging problems for monetary policymakers because they blur the line between inflation risk and economic weakness. That is what makes the current moment so uncomfortable for Fed officials. But discomfort need not mean confusion. The leading rules still offer a useful signal: when overall nominal spending remains close to trend, policymakers should be careful not to overreact to either dimension of a supply disturbance. Fed officials can remain confident by keeping policy within the range offered by the leading monetary policy rules.


A few weeks ago, social media skeptics received their best news in years.

In KGM v. Meta, a jury found Meta and Google negligent for their role in fueling a youth mental health crisis. Now, six million dollars in damages is basically meaningless to companies that gross hundreds of billions in revenue annually. But the reason this case has gotten so much media attention is for what it might represent. Some have compared the case to the beginning of litigation against Big Tobacco last century, which culminated in a $206 billion master settlement with more than 40 states.

In this case, however, the jury got it wrong. It concluded three things:

  • Instagram and YouTube were designed in ways that encouraged uncontrollable use and addictive behaviors.
  • The companies failed to adequately warn users, especially minors, about the risks.
  • The design of their platforms was a considerable factor in causing the plaintiff’s mental health problems.

All three of these things could be true, but neither Meta nor Google should be held liable for any of them. Unlike prior cases involving social media, KGM treated YouTube and Instagram as fundamentally defective products. The central question wasn’t whether malicious users could misuse these platforms, but whether the platforms themselves posed inherent risks. In general, online companies aren’t legally accountable for what users post due to Section 230 protections — Meta, for instance, wouldn’t be held liable for someone using its products to incite violence. In this case, though, Judge Carolyn Kuhl ruled that platform design elements — like algorithm-driven feeds, autoplaying videos, and push notifications — could be challenged. 

In other words, Instagram and YouTube should be held liable because they’re addictive, and too effective at providing content users want.

In a motion denying summary judgment, Judge Kuhl wrote: “The fact that a design feature like ‘infinite scroll’ impelled a user to continue to consume content that proved harmful does not mean that there can be no liability for harm arising from the design feature itself.” In other words, Meta and Google can be held responsible for designing a product that fulfills a consumer desire. Such an argument is dubious. Product innovation exists precisely to meet the demands of consumers — and that’s a good thing.

If such a conclusion holds, where could it not apply? Oreos are delicious — should Mondelez International be forced to make their product less appealing because a “design feature” of Oreos causes repeated consumption of Oreos, with negative health outcomes? Should TV shows that end on a cliffhanger be banned because such a “design feature” creates an addictive cycle, causing the viewer to continue watching? In excess, many other products besides social media can become addictive, but it’s not the government’s job to single out certain products or consumer desires as addictive. 

And then there’s the First Amendment problem. Even assuming that social media is addictive in a way analogous to tobacco, the two differ in a key respect. Social media companies are being held liable for their speech, which is protected by the First Amendment. As Erwin Chemerinsky, Dean of the UC Berkeley School of Law, put it:

The plaintiffs in these lawsuits argued that companies design algorithms that are tailored to individual users to keep them hooked. But algorithms are themselves speech, and there is no reason to treat this speech differently from the code that encourages people to keep playing video games.

Or, as Supreme Court Justice Elena Kagan wrote in Moody v. NetChoice, “the First Amendment … does not go on leave when social media [is] involved.” And while social media is almost certainly a drain on society — decreasing attention spans, increasing depression, and spreading misinformation — neither restricting First Amendment-protected speech nor regulating the free market is the answer.

Forcing social media companies to restrict access won’t necessarily lead to meaningfully lower social media usage by teenagers. For one, even the most extreme option — simply banning social media for teenagers — is easily circumvented. Teenagers have already cleared visual age checks. As one Australian teenager put it, “I scrunched my face up to get more wrinkles, so I looked older, and it worked!” Perhaps not a high-tech workaround, but it worked, and many other techniques do, too.

And even if the current mainstream social media companies — Meta, Google, TikTok, etc. — were forced to make their products less addictive, that would just open the door for competitors to replace them. And then what? Regulate those products until they’re less addictive, too? At some point, the government will just be playing First Amendment Whac-A-Mole. 

Ultimately, this is not a problem for the courts — nor even legislatures — but rather for civil society. Regulating trillion-dollar companies out of existence won’t fix the underlying problem. If social media were intrinsically detrimental, in the way that cigarettes cause a chemical addiction and subsequent health problems, then almost every teenager who uses social media would struggle with addiction and see some demonstrable negative impact on their life. But that’s not the case. About one in five teens say social media has hurt their mental health. Another study found that social media usage beyond three hours a day increased internalizing problems (like anxiety and depression) by about 60 to 80 percent. Neither of these numbers is great. But they also reveal that a significant percentage of teenagers who use social media are perfectly fine.

So what explains how one teen could use social media and neither become addicted nor have their mental health suffer, and another teen could experience the opposite? Very likely having access to a robust civil society — family, activities, community organizations, religious groups, and other social supports. Social media accounts for about one percent of the variation in life satisfaction. By contrast, family situations explain about a third of life satisfaction for young adults. Running to government for legislation to fix our minor woes allows these important community bonds to atrophy. An important aspect of the liberal political order is the recognition that voluntary, robust civil society can play a much more effective role in addressing these societal problems than can even well-intentioned meddling by the government. Social media is no exception.

President Donald Trump is discovering what Joe Biden learned the hard way: voters don’t easily forgive price increases. Despite inflation cooling from its peak, two-thirds of Americans disapprove of how Trump is handling inflation, according to an April Economist/YouGov poll.

The Republican Party’s victory lap over no tax on tips and no tax on overtime rings hollow, considering persistent public frustration with the cost of living. It doesn’t help that Trump’s tariff war and the war in Iran are further fueling rising prices.

And voter frustration isn’t just about recent price changes. It’s also about the lasting damage from the inflation surge of 2021–2022, which pushed the overall price level permanently higher.

There’s one cure, however, that Washington continues to miss. Inflation is increasingly driven by unsustainable budget policy, and politicians on both sides of the aisle keep pouring gasoline on the fiscal fire.

When debt grows persistently faster than the economy, it eventually forces difficult choices. Investors begin to question how the government will meet its obligations. There are only three answers: raise taxes, cut spending, or allow inflation to erode the real value of debt. When the first two options are repeatedly postponed, inflation becomes the likely path of least resistance.
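The mechanics behind that claim can be illustrated with the standard debt-dynamics identity: the debt-to-GDP ratio compounds at the interest rate, shrinks with nominal growth, and rises with the primary deficit. The numbers below are assumed for illustration only, not a forecast:

```python
def debt_to_gdp_path(d0, r, g, primary_deficit, years):
    """Evolve the debt/GDP ratio: existing debt compounds at interest
    rate r, the denominator grows at nominal rate g, and each year's
    primary deficit (as a share of GDP) is added on top."""
    d, path = d0, [d0]
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + primary_deficit
        path.append(d)
    return path

# Assumed values: 100% debt/GDP, 4% interest, 3% nominal growth,
# and a primary deficit of 3% of GDP every year
path = debt_to_gdp_path(1.00, 0.04, 0.03, 0.03, years=10)
print(f"Debt/GDP after 10 years: {path[-1]:.0%}")
```

With borrowing costs above the growth rate and persistent primary deficits, the ratio climbs without bound (roughly 141 percent of GDP after a decade in this sketch), which is the trajectory that eventually forces one of the three choices above.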

This is the risk of so-called fiscal dominance. Even a formally independent Federal Reserve cannot ignore the consequences of excessive borrowing. If interest costs rise rapidly and financial markets come under stress, the Fed will face pressure to lower borrowing costs at the risk of fueling inflation.

In that world, debates about whether a Fed chair is politically independent miss the bigger picture. The real danger is that fiscal policy leaves the central bank with no good options.

Recent experience offers a clear warning. The inflation surge earlier this decade was not primarily caused by pandemic-related supply disruptions. Nor does the corporate greed theory hold any water. It was mostly driven by unprecedented deficit-financed stimulus spending combined with accommodative monetary policy.

In short, the government spent too much, and to enable this excessive spending, the Fed printed too much money.

Bringing inflation back down required interest rate hikes, raising borrowing costs across the economy. That painful adjustment underscores a key lesson: restoring credibility after inflation takes hold is far more costly than maintaining discipline in the first place.

Yet Washington is not only failing to change course but doubling down.

Despite campaign promises to rein in spending with efforts like the Department of Government Efficiency (DOGE) and vows by President Trump to balance the budget, the Trump administration and Congress have continued to expand the federal debt.

From extending and expanding the Trump tax cuts without commensurate spending reductions to doing an end-run around the appropriations process to boost defense and immigration enforcement, Republicans have repeatedly sidestepped budget rules to pass deficit-financed, partisan measures.

Interest costs on the national debt now exceed federal spending on national defense. That could soon change, however, as President Donald Trump pushes to reverse the imbalance — not by lowering interest rates, but by increasing defense spending.

Republicans aren’t the only ones to blame. Democrats under Biden also abused the budget process and executive powers to enact green energy subsidies, forgive student loan debt, and accelerate the growth of food stamp spending.

Meanwhile, neither party is willing to confront the unchecked growth of entitlement programs. Social Security, Medicare, and Medicaid are expanding faster than the economy and faster than federal revenues. Demographic shifts, including an aging population and lower birth rates, mean fewer workers are supporting more beneficiaries.

The bigger problem is poor program design. Social Security benefits grow with wages, exceeding inflation, and federal health care programs are open-ended entitlements devoid of market incentives to control price pressures.

Absent meaningful reform, the conclusion is unavoidable: inflation will rise to reduce the fiscal burden of the debt.

Sound fiscal policy is the only answer. When Congress credibly stabilizes debt, it anchors inflation expectations and reduces the risk premium investors demand. Lower long-term interest rates ease borrowing costs across the economy and slow the growth of federal interest payments.

Congress should adopt a credible and enforceable fiscal target to stabilize debt relative to the economy. Its members should stop the misuse of emergency spending provisions to bypass budget constraints. And most importantly, they must reform the entitlement programs driving long-term spending growth.

That means refocusing Social Security on preventing poverty in old age while adjusting benefits and eligibility to reflect higher earners’ ability to save on their own and longer life expectancies. It means slowing Medicare’s growth through stronger budget constraints and cost discipline, best achieved by giving beneficiaries more control over how their subsidies are spent. And it means restructuring Medicaid to limit federal exposure and improve accountability, with states bearing a larger share of costs.

None of these steps are politically easy. An independent fiscal commission could help break the partisan deadlock and advance these reforms.

Trump’s declining approval ratings on inflation are a warning sign. Voters know something is wrong. Until policymakers confront the underlying source of the problem — unsustainable federal spending — inflation will remain a recurring threat, and the Federal Reserve’s independence will erode under the weight of the nation’s debt.

This year marks the 250th anniversary of both the Declaration of Independence and Adam Smith’s The Wealth of Nations, and that is no mere coincidence. The Enlightenment ideals of individual liberty and voluntary exchange that inspired America’s founders also laid the foundation of modern economics. Yet two and a half centuries later, persistent policy blunders — protectionist trade barriers, ballooning national debt, and stubborn inflation — reveal how far we have strayed from the Scotsman’s insights, endangering the principles upon which our republic was founded.

It is tempting to blame these failures solely on politicians. But economists share responsibility. Returning to The Wealth of Nations, one is struck by how little progress has been made in educating the public about sound principles, a task that must be renewed with every generation. While our internal scholarship has grown more sophisticated, the core policy debates have remained largely unchanged since 1776. Smith discredited mercantilism’s fixation on the balance of trade, deeming it “absurd” and a flawed foundation for trade restrictions. He also observed that accumulated public debt is seldom repaid honestly; governments instead print money and erode purchasing power. These debates sound strikingly contemporary.

After 250 years of theoretical and empirical advances, including 99 Nobel laureates, why do governments keep repeating the same mistakes? As Deirdre McCloskey has noted, the field of economics suffers from Smithian specialization without Smithian trade: narrow expertise unaccompanied by broad intellectual exchange. In a 1976 bicentennial assessment, Terence W. Hutchison criticized the profession for narrowing its scope, assuming a stable social and political backdrop so as not to disrupt isolated economic analysis. This approach excels at precision on narrow questions but neglects the wider terrain of political economy, driving a wedge between academic research and policy relevance. Smith’s “system of natural liberty” demanded the comprehensive foundations he provided, not fragmented silos.

This internal refinement has come at the expense of teaching basic principles effectively. Smith contrasted the lively instruction at Glasgow, where professors’ pay depended partly on student fees, with the uninspired, often absent lectures at Oxford, where compensation was fixed regardless of enrollment. Incentives shape behavior, even among economists. Modern academia rewards narrow research over conveying fundamentals in the classroom or engaging the public, leading to a widening gap between specialized technical research and actual debates that shape policy. Novelty, not timeless wisdom, drives top-journal publications. Delivering a mundane walkthrough of textbooks or PowerPoint decks passes for “teaching” in far too many classrooms.

Graduate programs tend to emphasize exceptions to Smith’s core ideas, however tenuous, over the principles themselves. As Bryan Caplan has noted, graduate students start their programs already steeped in market-failure arguments, and two additional years of mathematical theory presenting “dozens of esoteric ways for markets to fail” will only reinforce this worldview. The approach neglects the principle that when individuals are free to pursue their own betterment, beneficial social coordination and order emerge spontaneously. The system of liberty, regarded as common sense at our nation’s founding, reflects how order arises without central design if government is limited to “peace, low taxes, and a tolerable administration of justice.” Market failures are the exception, not the rule.

Focusing economists’ training primarily on market failure is like training physicists only to probe exceptions to natural laws while ignoring the universe’s consistent regularities. It encourages siloed experts to recommend “minor” interventions, as if executed by a host of benevolent bureaucrats, which aggregate into a system of control entrusted to fallible politicians, not angels. 

Hutchison closed his 1976 remarks with hope that by 2026 economists might reclaim Smith’s broad foundations. Fifty years on, the drift has only deepened, underscoring the urgent need for introspection. If not economists themselves, who else will uphold and popularize genuine economic principles and make the case for laissez-faire in the spirit of Adam Smith?

In this shared 250th anniversary of 1776, economists should reclaim their Smithian inheritance: teach the timeless truths of a system of natural liberty, echoing the Enlightenment ideals that birthed both our nation and modern economics.

For nearly a century, economists struggled with the famous diamond-water paradox. Water, though essential for life, is cheap. Diamonds, on the other hand, are luxuries that command high prices.

The resolution, articulated by Carl Menger, was that value is not inherent in goods themselves but comes from the importance individuals place upon them at the margin. Prices, then, reflect marginal valuation conditioned by scarcity, not total usefulness.

A similar misunderstanding applies to today’s debate over a “living wage.” Advocates are often quite explicit in their demand. The National Employment Law Project, for example, insists that “every job should pay a living wage.” The moral appeal is clear. Economically, however, such an assertion assumes what needs to be proven: that every job creates enough value to garner such a wage. 

Wages Are Prices

Let us begin with a simple point: wages are prices. Just as the price of bread reflects supply and demand, so do wages for labor in particular occupations. They signal how scarce certain skills are and how much value workers add at the margin. 

As Friedrich Hayek explained, the price system is “a mechanism for communicating information,” and wages are a part of that system. They are not arbitrary. They communicate where labor is most urgently needed and where it is less highly valued. 

A Thought Experiment

Imagine someone who chooses to manufacture horse-drawn carriages in the modern United States. Call him James. Outside of niche markets, like Jackson Square in New Orleans, demand for such a good is minimal. James is producing something very few people want, and the economic value he generates is therefore quite low. Accordingly, the wage that his line of work can sustain will also be low.

James, however, is not discouraged. He insists that he deserves a “living wage” simply by virtue of being employed.

The absurdity of the demand should be apparent. It is not a question of the dignity of the work. Let us assume his craftsmanship is top-notch, and he is obviously not engaged in the production of anything morally objectionable. Yet the value James creates is limited relative to other uses of labor and capital, so much so that, economically speaking, he is engaged not in production but in consumption.

Paying him a high wage, then, would require diverting resources away from more valuable activities. In effect, this would mean asking others to subsidize James’s “production” that consumers have already overwhelmingly revealed to be of little value. If James wishes to continue this work for personal satisfaction, he is free to do so. But it does not follow that others are obligated to sustain it.

The Living Wage Problem

The problem here is that the living wage argument implicitly assumes that wages should be determined by the needs of the workers rather than by the value of what they produce. 

As Bernie Sanders has said repeatedly, “a job should lift you out of poverty, not keep you in it.” The sentiment is understandable, but it does not follow that every particular job, in every place and moment, can and should bear a wage set by need rather than productivity, and do so indefinitely. Employment does not exist in the abstract. Jobs are specific — an auto mechanic in Acworth, Georgia in 2026, not simply a “job in the United States.” If local demand for that service is limited, the wage will reflect that reality, and it ought to.

Once wages are detached from productivity, economic coordination begins to break down. If employers are required to pay wages above the value generated by certain jobs, several outcomes tend to follow:

  • Some jobs disappear entirely
  • Businesses substitute capital for labor
  • Firms reduce hiring or restructure production
  • Opportunities for low-skill or inexperienced workers decline

As economist Thomas Sowell bluntly put it, “the real minimum wage is always zero.” When the cost of hiring exceeds the value a worker can produce, employers simply do not hire. This, of course, does not eliminate the need for income, but it does eliminate the opportunity to earn it. 
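Sowell’s point reduces to a simple comparison. The sketch below is a minimal illustration with entirely hypothetical numbers (the function name, wage figures, and overhead parameter are all invented for this example, not drawn from any study):

```python
def will_hire(hourly_value_produced: float, mandated_wage: float,
              overhead_per_hour: float = 0.0) -> bool:
    """An employer hires only if the worker's expected hourly output value
    covers the wage plus any per-hour overhead (payroll taxes, benefits)."""
    return hourly_value_produced >= mandated_wage + overhead_per_hour

# A worker whose labor yields $12/hour of value is employable at a $10 wage...
print(will_hire(12.0, 10.0))   # True: the job exists, the wage is $10
# ...but not once a $15 floor is mandated:
print(will_hire(12.0, 15.0))   # False: the job disappears, the wage is $0
```

The second call is the “real minimum wage is zero” case: the mandate does not raise the worker’s pay, it eliminates the position.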

None of this is to deny that people should wish for wages sufficient to support themselves and their families. In fact, economic progress engineered by capitalism over the last two centuries has made that wish increasingly attainable. That progress, though, followed a clear pattern: higher productivity leads to higher value, which leads to higher wages. 

Policies that try to mandate higher wages in spite of productivity levels undermine the very mechanism generating rising standards of living. The issue lies in demanding that every conceivable job, regardless of its contribution to society, ought to sustain a person and his family. 

Wages Reflect Reality

Wages, like any other price, reflect the economic realities of a particular time and place. If wages appear low, this is not an injustice (assuming they are the result of market, not government, forces); it is a signal that the activity currently generates limited value relative to other possible uses of labor. 

The lesson needed today is the same as the lesson from the diamond-water paradox. Prices do not reflect how important something feels. Instead, they reflect scarcity, marginal value, and human choices. Wages are no exception.

The ACLU is raising concerns about the abuse of automated license plate reader (ALPR) technology in the wake of a disconcerting story out of Kansas. The technology, which has been described as a tool for mass surveillance, was used by police to track a man who had published an opinion piece critical of the police department in a local paper, and who was subsequently suspected of putting up anti-ICE posters around town a few days before the op-ed was published.

Canyen Ashworth published his guest column in the Kansas City Star on September 30 of last year. A resident of Lenexa — a suburb of Kansas City — Ashworth argued that the city and police department were not doing enough to protect the rights of residents when it came to ICE raids and related immigration issues.

Later that day, as KCUR investigative journalist Sam Zeff later discovered, then-police chief Dawn Layman sent the column to a department crime analyst, suggesting she was considering a criminal investigation into Ashworth.

Some time later — exactly when and why remains unclear — Ashworth was also linked to the “Paper Hanger” case. On September 26, an unidentified suspect had put up four anti-ICE posters around town featuring the words “Remember when we killed fascists.” The posters were promptly taken down and a criminal investigation was opened, ostensibly because the glue was damaging city property.

Based on Ashworth’s alleged connection to the “Paper Hanger” case, an allegation that was suspiciously convenient for those who took issue with his column, a BOLO (“be on the lookout”) email was sent to all patrol officers, dispatchers, and commanders on October 21. The email identified Ashworth as a suspect in the “Paper Hanger” case and featured some blurry images of a hooded suspect along with an image of Ashworth’s car.

It turns out that the police department had been using its ALPR technology to track Ashworth’s movements. “He doesn’t get out much; he last hit a week ago today and appeared to come from McKeevers,” wrote the crime analyst who penned the email, referring to a local market.

The analyst went on to say that “This is MYOC,” that is, “make your own case.” There was no arrest warrant for Ashworth, so police could only stop him if they could come up with a reason.

In the end, Ashworth was never stopped or questioned. He only found out that he had been a suspect, and that his car had been tracked, when Zeff told him what he had uncovered.

“The first emotion that comes to mind is jarring for sure,” Ashworth said upon learning what happened. “And then I think after that comes being pissed off.”

After Zeff started contacting experts about his findings, which were published on February 2, Ashworth was hardly the only one who felt this way.

‘A Rare Public Example’ of Abuse

Micah Kubic, the ACLU of Kansas Executive Director, has put into words what many are no doubt thinking about this story. “The idea of putting out, the equivalent of, an all-points bulletin, BOLO, on an individual for putting up posters is both a rejection of the First Amendment, and a really ridiculous misuse of resources,” said Kubic. “The idea that you can essentially just make something up to throw against the wall and see if it sticks to be able to go after someone, is a really chilling and dangerous thing.”

First Amendment attorney Bernie Rhodes expressed particular concern about the former police chief’s abuse of the ALPR system. “She’s using the city’s license plate readers not to combat a wave of armed robberies, but to track down the everyday movements of an everyday citizen who dared to write the Kansas City Star and express their opinion,” he said.

Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy, and Technology Project, echoed these concerns. “This is a rare public example of exactly the kind of abuse that we’ve long warned against when it comes to mass-surveillance systems like license plate readers,” he writes.

He goes on to say that this story is “a particularly clear example of the abusive dynamic that mass-surveillance systems always end up falling into.” The dynamic he describes follows a simple three-step process:

Step 1: Authorities identify a target they dislike but have no evidence against.

Step 2: They aim sophisticated surveillance technologies at the targeted person.

Step 3: They try to catch the target doing something they can be charged with, no matter how petty.

Stanley’s comparison is apt. For Step 1, Ashworth wrote an article that made him a target of the local police department. In Step 2, the police weaponized their license plate reader technology against him, tracking his movements. Ostensibly this was only about the posters and had nothing to do with the article, but it looks awfully suspicious. And even if it was genuinely only about the posters, does anyone seriously believe the criminal investigation was really about property damage from glue? “Posters about lost pets and community events were generally not removed,” Zeff notes. So even the posters narrative seems to follow the three steps; in that case, the ire of the police department was initially raised by the message of the posters rather than that of the article.

Once the target is being spied on, Step 3 is for the police to find an excuse to arrest him. In our story, this is represented by the BOLO email and the “make your own case” rhetoric. That the department has even made an acronym of it, MYOC, is perhaps the most chilling detail, since it suggests the practice is common in the Lenexa Police Department.

No doubt those who have found themselves under arrest by this department would be curious to learn whether their experience was the result of a “make your own case” initiative.

But the broader point is this. Even if we assume the absolute best in this story, even if we assume no foul play, no malicious intent, and no wrongdoing, these events still highlight the immense potential for the abuse of these kinds of surveillance technologies.

At the risk of making myself a target of these three steps, it’s worth reminding everyone that the police are not always saints, and that giving them the power to monitor our daily lives does not necessarily result in the limited, judicious, and well-intended surveillance that is always promised with such sincerity.

Watching the Watchmen in an Age of Mass Surveillance

That those in positions of authority cannot always be trusted to wield their power virtuously is hardly a new idea. As far back as the second century, the Roman poet Juvenal famously asked “Who will watch the watchmen?” But that question becomes more significant in proportion to the power of the watchmen. When modern surveillance technology gives police jaw-dropping powers to monitor our every move, the concern about whether they can be trusted to do the right thing with that power becomes considerably more pressing. This is no longer the second century, nor is it 1920 — the year the ACLU was founded. The world we now inhabit is a world of automated license plate readers, of targeted advertisements for something you merely had a conversation about 12 hours earlier, and, now, of AI. As such, institutional limits on surveillance powers are more important than ever.

The rejoinder will be that these powers help the police to combat crime. By limiting their ability to spy on us, we are limiting their ability to keep us safe. This is an understandable concern, but it overlooks the crucial fact that we need to be kept safe, not only from common criminals, but also from the police themselves. The view that more surveillance power always means more safety is born from the naïve assumption that the police are always interested in protecting the people they watch over, and never in harming them.

Regrettably, this is not the world we live in.

The trade-off therefore needs to be reframed. The choice we are presented with is not really about safety versus privacy. It is about being kept safe from common criminals versus being kept safe from those in authority.

Navigating this trade-off is never easy, but when stories like the Canyen Ashworth case come out, they are a sobering reminder that the need to be protected from the people who are supposed to be our protectors is all too real.

When Polonius tells Laertes in Hamlet, “Neither a borrower nor a lender be,” perhaps Shakespeare was speaking from family experience. In the early 1570s, his father, John Shakespeare, was accused in court several times of lending money at usurious rates. In modern terms, he settled one case and was fined in another. It is unclear whether these cases were connected to the decline of Shakespeare Sr.’s business, but he managed to get into debt himself, echoing Polonius’ warning. Under the laws of the time, usury, the practice of charging interest on debts, was called “a vice most odious and detestable.”

Yet by the time Adam Smith wrote his Inquiry into the Nature and Causes of the Wealth of Nations two hundred years later, credit was an established element of commercial life. Smith devoted an entire chapter to “Of Stock Lent at Interest.” He noted that the borrower could either consume the loan or, more productively, employ it as capital for enterprise. In the intervening two hundred years, credit had become an economic institution.

The gap between these two pillars of British literature was filled by the development of English commerce from its medieval form into something we would recognize today. Part of that development was the realization that time does not always cooperate with our financial undertakings: costs arrive today when income is expected tomorrow. Bridging that gap requires both credit and interest. Commerce worked that out in practice, but explaining why required the development of economics.

Credit did not arise across the Western world because its societies were uniquely greedy or exploitative, nor because bankers somehow imposed a mechanism to extract rent from happily self-sufficient communities. It arose because advanced commercial life requires its existence. That moral hero, the entrepreneur, must often assemble labor and capital before a single unit is sold. Credit bridges the interval.

That is also why credit appears repeatedly even where kings, priests, or populist politicians have tried to suppress it. It appears in many different forms. Sometimes it is a straightforward loan. Sometimes it is trade credit, deferred payment, or discounting. Sometimes it is tailored to the borrower, sometimes it is offered on similar terms to everyone. The underlying function is always the same: providing funds to those who need them, when they need them.

Yet those kings, priests, and populist politicians keep advancing similar objections: that credit is simply greed, or exploitation. Virtually every Western society has had laws against usury on the books, and many still do. What explains how credit continually overcomes this opposition?

The old case against usury was not completely irrational; it was often a moral response to real abuse. Many anti-usury laws grew out of a world where borrowing was not about business investment but relief from distress. A poor man borrowed only because he had suffered a crop failure, a medical emergency, or other personal tragedy. To profit from another man’s desperation seemed predatory. Medieval theologians considered money “barren,” serving only as a medium of exchange. St. Thomas Aquinas argued that charging interest is intrinsically unjust because it demands a double payment: the return of the principal and a price for its use.

This doctrine weakened when commercial societies discovered, first in practice and then in theory (as is so often the case), that money in a market economy is not, in fact, economically barren. Command over money is valuable because it gives access to opportunities, allows one to bear uncertainty, and frees one from waiting. Western society evolved from condemning all interest to distinguishing legitimate interest from exploitative usury, thereby more realistically reflecting time, risk, and opportunity cost.

Yet old beliefs linger. Even Adam Smith thought interest should be capped to benefit the prudent, which led to a correspondence with Jeremy Bentham, who argued that rates should be free to float. Bentham’s argument still has validity today: adults should be free to contract on whatever terms they choose, and attempts to suppress high-rate lending will only block risky but potentially productive enterprise.

The debate between Smith and Bentham represented a turning point. The West in general gradually moved from asking whether any payment for the use of money was illicit to asking instead what counts as extortionate or abusive, thereby separating the existence of credit from the abuse of credit, a distinction that matters. A society can condemn fraud, coercion, and rapacious terms; this does not mean that all interest is predation.

Commercial credit may have triumphed over the usury laws, but a new critique soon emerged. Karl Marx approached credit from another direction, treating it as part of the capitalist system of exploitation. In Das Kapital, he argued that credit allowed the capitalist to spend money he had not yet earned, disconnecting expectation from reality and serving as the means by which the capitalist steals from the worker the value of his labor. It also allowed companies to keep producing goods no one would be interested in purchasing, resulting in overproduction, all resting on a mirage of “fictitious capital” that made the world look wealthier than it was. This, Marx held, was what led to financial crises.

It was Eugen von Böhm-Bawerk, an Austrian economist writing at the turn of the twentieth century, who refuted Marx’s analysis in Capital and Interest and other works. He recognized that human beings have time preference: people, and indeed societies, prefer jam today over jam tomorrow. So, far from stealing from or exploiting the worker, the capitalist is actually paying him a premium, handing over wages now for output that might not be sold for some time. Credit allows the capitalist to do this.

The wages Marx views as low are in fact discounted, because the worker gets $100 today instead of the potential of $110 in a year. The 10 percent discount represents the price of getting money immediately, satisfying the worker’s time-preference. Again, von Böhm-Bawerk shows us that credit allows this to happen.
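The discount logic above is ordinary present-value arithmetic. A minimal sketch, using the article’s hypothetical $100/$110 figures (the function name and parameters are invented for illustration):

```python
def present_value(future_amount: float, annual_rate: float, years: float = 1.0) -> float:
    """Discount a payment expected in the future back to its value today."""
    return future_amount / (1 + annual_rate) ** years

# $110 expected in a year, discounted at 10 percent, is worth about $100 today:
# the wage paid now is the discounted value of output sold later.
wage_today = present_value(110.0, 0.10)
print(round(wage_today, 2))  # ~100.0
```

The worker’s “discounted” wage is thus the price of immediacy: $100 now rather than an uncertain $110 in a year.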

As for the argument that credit facilitates crises, von Böhm-Bawerk’s theory of value reveals that a company’s failure to sell its goods stems not from the existence of credit but from its miscalculation of subjective value. By articulating a theory of subjective value rather than labor value, von Böhm-Bawerk demolishes Marx’s interpretation of credit.

Thus, a world without credit would not be a world without exploitation in Marx’s sense. It would be a poorer world with fewer enterprises, fewer homes, fewer durable goods, and far less social mobility.

Credit is therefore at the center of production rather than at its margins. It should not be viewed as a device to gratify impatient consumers, but as a way of coordinating stages of production that unfold over time. Interest is the price attached to the use of present goods in a world where future goods are discounted and productive processes take time.

Schumpeter added another important insight. In his Theory of Economic Development, credit is how the entrepreneur acquires command over resources needed to carry out new combinations. As the economist David Henderson succinctly puts it in his “ten pillars of economic wisdom,” the only way to create wealth is to move resources from a lower-valued to a higher-valued use. Innovation requires withdrawing labor and materials from established uses and redirecting them toward untried purposes, which cannot usually be financed out of existing cash reserves. The entrepreneur therefore needs access to purchasing power before she realizes success. In Schumpeter’s framework, bank credit is what allows the innovator to bid resources away from old uses and bring something new into existence.

So, credit actually helps reorder the economy for the better, financing the experiment before the market has validated it. Schumpeter therefore treated credit as integral to entrepreneurship, innovation, and economic progress. A society that wants to increase wealth while disdaining credit is like the man who wants to win the lottery but refuses to buy a ticket.

Human beings live through time, which means their wants, incomes, obligations, and plans do not line up neatly. Risk is inescapable, but credit is what makes civilization durable under those conditions. Families can survive shocks, firms can organize production, entrepreneurs can innovate, and savers can grow wealth by providing the capital that helps families, firms, and entrepreneurs.

We can continue to argue about what rules should govern lending, what terms are abusive, and what legal framework best disciplines fraud and excess (although we might do well to lean towards Bentham rather than Smith in this one limited case). Credit exists wherever people need to juggle the cost of effort now with the delayed benefit of later rewards. In other words, it is credit that allows us to build anything more durable than a day’s subsistence, whatever the experience of Shakespeare’s dad.

The recent rescission of the US Environmental Protection Agency (EPA) Greenhouse Gas Endangerment Finding and Motor Vehicle Greenhouse Gas Emission Standards Under the Clean Air Act marks one of the largest deregulation efforts in a generation. Among the 571,672 comments the EPA received on this issue last September, my colleagues at AIER, Drs. Julia Cartwight, Paul Mueller, and Ryan Yonk, and I joined the State Financial Officers Foundation (SFOF) and thirteen state financial officers in submitting a public comment in support of the rescission.

The Endangerment Finding was rescinded in February 2026 by President Trump and EPA Administrator Lee Zeldin. This action stands to help make life more affordable, reduce regulatory uncertainty, and rein in an expansive administrative state. 

What Was the EPA’s Endangerment Finding?  

In Massachusetts v. EPA (2007), the Supreme Court ruled that the EPA was allowed to regulate greenhouse gases because they qualify as air pollutants under the Clean Air Act. Following this ruling and a failed attempt to get a climate bill through Congress, President Obama leaned on executive rulemaking. 

From his exercise of executive authority came the EPA’s Endangerment Finding. The finding declared that six greenhouse gases broadly endangered public health and welfare, thus requiring regulation. The Endangerment Finding was initially the basis for vehicle emission regulation, but it soon spread beyond that, resulting in costly burdens for Americans. 

One hurdle, however, was that the Clean Air Act was designed to regulate industry, not specific gases themselves. As Judge Glock of the Manhattan Institute notes, “The act required federal permits for any source that emitted more than 100 tons per year of an air pollutant. By this measure, some families would need permits.” 

Despite some Supreme Court rulings limiting the EPA, the Endangerment Finding led to regulations that made life less affordable for the average American. Regulations under the Biden Administration EPA alone cost an estimated $1 trillion. Additionally, as we discuss in our comment, these regulations encourage a “ratchet effect,” in which the government (in this case, the executive branch and the EPA in particular) expands in size or scope of authority in response to perceived crises and rarely fully recedes. This, ultimately, decreases accountability. 

In the end, the Endangerment Finding enabled the creation of stringent rules but failed to clearly demonstrate the social benefits of individual policies proportional to their economic costs. The regulations stemming from the finding made life less affordable, but the benefits of said regulations were much more difficult to prove.

The Benefits of Rescission 

Our comment focused on three key areas: the economic benefits of rescission, the dangers of an expansive administrative state, and the effects of the potential rescission on federalism. 

The economic benefits of the rescission stem from the rollback of the burdensome regulations discussed in the previous section. Repealing these regulations could help lower the cost of energy production for both producers and consumers. Regulatory reform would also reduce the policy uncertainty created by vague statutes, lowering costs further for producers and consumers alike. 

Additionally, the rescission helps return rulemaking power to the legislative branch. Returning rulemaking powers to the elected legislative branch can improve transparency and accountability.

Furthermore, the rescission improves the balance between the federal and state governments. While Congress has primacy in climate policy, states have greater autonomy to apply local knowledge to environmental and energy challenges. This is especially important given two Supreme Court rulings: Loper Bright Enterprises v. Raimondo (2024), which compels courts to exercise independent judgment when interpreting ambiguous statutes rather than defaulting to agency readings, and West Virginia v. EPA (2022), which held that agencies must have clear congressional authorization to regulate issues of “economic and political significance,” leaving states to set the enforceable rules governing existing energy sources. 

What Comes Next? 

The rescission can help shift environmental and energy policy away from command-and-control regulations and toward institutional frameworks that rely on price signals, property rights, and competition. Markets function as a discovery process where entrepreneurs can test alternative technologies, production methods, and energy sources under conditions of profit and loss. When prices reflect relative scarcity, producers are driven to economize on fuel, improve efficiency, and innovate cleaner production techniques to reduce costs.  

Additionally, by returning rulemaking to Congress and discretion to the states, the federal government can focus on sustaining “competitive, ‘market preserving federalism’” while states are free to innovate without inhibiting free entry and exit across state lines. Successful institutional arrangements will scale, and failed approaches will exit, as Americans vote with their dollars and their feet. Environmental stewardship emerges through clearly defined property rights, liability rules, and localized governance mechanisms that address identifiable harms.  

By allowing market processes to work, people, not government, can drive lower cost abatement strategies while preserving energy reliability and consumer choice. 

Read the full public comment here.

My colleague Paul Mueller recently published an AIER Paper on Fusionism. He was kind enough to share it with me for review. I agreed with most, and disagreed with some, of Paul’s arguments. This is healthy. You see, Paul was my student at Hillsdale College 15 years ago, when we first discussed the tension between libertarianism and conservatism.

Then, as now, I have major concerns about conservatism. On the one hand, much of what conservatism (at least some brands of conservatism) stands for is essential as a foundation for a free society. On the other hand, much of what conservatism is trying to do runs counter to the free society, as it would make undue impositions on individual liberty.

My purpose here is not to address Dr. Mueller’s paper or to revisit the libertarian-conservative debate. Rather, I will discuss a tension within the classical liberal movement, a tension that is captured in the works of Austrian economist F.A. Hayek. 

As I like to remind readers, Hayek is one of three thinkers, along with Adam Smith and Frédéric Bastiat, who look down on fellows and students in the AIER library.

The tension has to do with the size and scope of a state necessary for the preservation of liberty. In this 250th anniversary year, I would be remiss not to mention that this same tension nourished the debates around the US Constitution. The Federalists thought the young country needed a vigorous — but limited — central government to unify it, protect against enemies foreign and domestic, and preserve liberty. The Anti-Federalists disagreed, foreseeing that any such central government would inevitably impose on the liberties of Americans. 

At least then — unlike the current political fracture in the USA — both sides agreed on the goal: the preservation of liberty. They disagreed on the institutional structure to advance the goal. 

F.A. Hayek

I suspect F.A. Hayek is well known to most readers of The Daily Economy, so I won’t belabor a biography (if you’re interested, I recommend Hayek’s Challenge, by Bruce Caldwell, among others). Hayek was born in 1899 in Vienna, and while serving at the London School of Economics as the Tooke Chair of Economic Science and Statistics, he made a name for himself with his 1944 book, The Road to Serfdom. The book is a warning that the Western democracies were turning to socialism, just as they were defeating national socialism (and about to enter a cold war with international socialism). But it also contains the kernel of a political economy Hayek would develop over his lifetime of thinking, most notably in The Constitution of Liberty (1960), Law, Legislation, and Liberty (1973/1976/1979), and his last book, The Fatal Conceit (1988). Hayek received the Nobel Prize in Economics in 1974 and died in 1992.

Hayek was no slouch in the defense of liberty. The Road to Serfdom is still a clarion call against socialism’s inevitable slide into tyranny. In 1947, he founded the Mont Pelerin Society, an international forum dedicated to advancing the free society. And his entire career was dedicated to preserving rule of law and a “constitution of liberty.” 

And yet, for all that, Hayek was not a small-government libertarian. He saw a place for the state to provide what later scholars dubbed public goods — parks, fire insurance, and limited macroeconomic stimulus. All cautiously, of course, and all with an eye to preserving individual rights and constraining the state.

An Economic Theory of the State

Just as the American Framers agreed on liberty and disagreed on the institutional mechanism to deliver it, so do “sincere friends of freedom” (to use Lord Acton’s phrase) disagree on the size and scope of the state best suited to protect freedom and individual rights.

Anarcho-capitalists believe that the state is immoral — because it is, by its very nature, coercive — but also that it is unnecessary (see Murray Rothbard’s For a New Liberty: The Libertarian Manifesto). Markets will handle allocation of scarce resources among competing wants, incentives for innovation, and security (through private security forces and arbitrators). What markets can’t handle will be left to civil society. Instead of coercing our neighbors through taxation to take care of the poor or protect the environment, we will convince them to participate, through families, clubs, churches, or other voluntary associations.

The minarchists (a large subsection of libertarians) reject anarcho-capitalism as a chimera. Ayn Rand notably argued that anarcho-capitalism would lapse into civil war between competing security agencies (see her essay, “The Nature of Government”). Along with Ludwig von Mises and other minarchists, she argued that a “night watchman” state was necessary for the protection of individual rights — a neutral police force, independent courts, and a military. The rest, however, was to be left to markets and civil society.

Further down the spectrum of liberty, we have the theorists of the minimal but active state (sometimes, if confusingly, known as classical liberals; I prefer to call them “HFB theorists,” after Hayek, Milton Friedman, and James Buchanan, all champions of liberty who saw a more expansive role for the state). According to this camp, minarchy’s protection of individual rights is a necessary, but not sufficient, condition for a thriving and free society. 

The HFB camp believes the state can and should do more to protect liberty, but (1) must limit itself to the necessary, and not lapse into socialism; and (2) must be strictly bound by constitutional constraints. Friedman argued that, because education had network effects (we all benefit from a more educated population), the state should guarantee it for all — but it should not provide it, hence his famous vouchers. Buchanan and Gordon Tullock, in The Calculus of Consent: Logical Foundations of Constitutional Democracy, examined situations in which the state might be necessary. If collective action is cheaper than market action, or is feasible where markets fail due to high organization costs, they argued, the state can provide public goods, like education or environmental protection. But it should do so within strict constitutional constraints (see also Buchanan’s The Limits of Liberty: Between Anarchy and Leviathan) and not by launching federal departments to administer them.

Naturally, the three schools disagree with each other, and some are more persuasive than others. But I agree with Paul Mueller that all three belong in the tent of sincere friends of freedom. I will now use Hayek as an example of the tensions.

Hayek’s Constitutional Theory of the Liberal State

Hayek was a fierce advocate of rule of law, and deeply worried about central planning. Nevertheless, he advocated an active, if constrained, role for the state. In The Constitution of Liberty, he explicitly explained that the rule of law does not imply a complete absence of government intervention in the economy — but, rather, intervention constrained by careful rules. 

Hayek argues that the state can assure a basic minimum income for all; provide catastrophic insurance and disaster relief; offer basic macro stabilization policy (if not outright Keynesianism!); use subsidies (so long as they advance the general welfare, and not individual interests); fight pollution; and, if it proceeds carefully, provide public goods where markets sputter.

He summarized his philosophy in The Constitution of Liberty in 1960:

We have already seen… that there is undeniably a wide field for non-coercive activities of government and that there is a clear need for financing them by taxation… All modern governments have made provision for the indigent, unfortunate, and disabled and have concerned themselves with questions of health and the dissemination of knowledge… common needs that can be satisfied only by collective action and which can be thus provided for without restricting individual liberty. 

…that some of the aims of the welfare state can be realized without detriment to individual liberty, though not necessarily by the methods which seem the most obvious and are therefore most popular; that others can be similarly achieved to a certain extent, though only at a cost much greater than people imagine or would be willing to bear, or only slowly and gradually as wealth increases; and that, finally, there are others—and they are those particularly dear to the hearts of the socialists—that cannot be realized in a society that wants to preserve personal freedom.  

While seeking to provide public goods to support liberty and human flourishing, Hayek was always worried about respecting the rule of law. His solution was a three-part test for state action. In a 1973 lecture to the Institute of Economic Affairs in London, Hayek offered a simple and straightforward articulation of the three conditions under which “government services are entirely compatible with [classical] liberal principles”:  

1. government does not claim a monopoly and new methods of rendering services through the market are not prevented; 

2. the means are raised by taxation on uniform principles and taxation is not used as an instrument for the redistribution of income; and, 

3. the wants satisfied are collective wants of the community as a whole and not merely collective wants of particular groups.   

Misjudging The Welfare State?

While he was an enriching thinker — and he remains an intellectual hero to my coauthor Chris Martin and me — Hayek does seem to allow a bit too much latitude for the state. While some functions may indeed be necessary for human flourishing, it’s hard to see how they will not violate rule of law or nudge us dangerously forward on the road to serfdom. Still, we are hesitant to push too hard against such a hero of liberty.

Murray Rothbard shared no such compunction. In a memo, he commented that “F.A. Hayek’s Constitution of Liberty is, surprisingly and distressingly, an extremely bad, and, I would even say, evil book. Since Hayek is universally regarded, by Right and Left alike, as the leading right-wing intellectual, this will also be an extremely dangerous book.”

In a 1960 review of The Constitution of Liberty, Ludwig von Mises bluntly wrote that “Professor Hayek has misjudged the character of the Welfare State.” Hayek’s concessions would inevitably lead to a “system of all-round planning” — even if they were initially modest and circumscribed. Mises softens his critique, though, when he argues that Hayek’s fundamental misjudgment of the welfare state “does not seriously distract from the character of his great book.” He concluded:

“[Hayek’s] searching analysis of the policies and concerns of the Welfare State show to every thoughtful reader why and how these much praised welfare policies inevitably always fail. These policies never attain those, allegedly beneficial, ends which the government and the self-styled Progressives who advocated them wanted to attain, but, on the contrary, bring about a state of affairs which — from the very point of view of the government and its supporters — is even more unsatisfactory than the previous state of affairs they wanted to ‘improve’.”

Ayn Rand, characteristically blunt, referred to Hayek’s work as “real poison,” because he was willing to balance freedom with various “collectivist” interventions. For Rand, Hayek’s compromises made him a “pernicious enemy” of the freedom movement.

Does Fusionism Have Room For All?

Hayek is as rich as he is puzzling, as delicious as he is infuriating. For my money, he remains the single most important thinker on these questions. This may be because, in the words of my mentor Roger Koppl, Hayek is not a system builder, but an honest muddler.

Hayek explicitly explained, in the postscript to The Constitution of Liberty, “Why I am not a Conservative.” Conservatism, for him, was too static, and too ready to impose its views on society through the state. But he is also clearly not a small-government libertarian.

Unfortunately, Hayek left many puzzles and challenges. Fortunately, his careful thinking helps prepare us to be better advocates of liberty in his absence.


The February 2026 AIER Business Conditions Monthly (BCM) highlights a notable divergence across the economic cycle: forward-looking indicators are softening, contemporaneous measures are deteriorating, and lagging data continue to reflect earlier resilience. Some of this pattern, however, may be influenced by incomplete data and ongoing catch-up from prior reporting gaps.

LEADING INDICATOR (46)

The Leading Indicator registered 46, with five of 12 components improving, one unchanged, and six declining.

Advances were concentrated in a handful of forward-looking and demand-related measures. The 1-Year to 10-Year US Treasury Yield Spread narrowed sharply by 40.4 percent but was scored positively given its inversion. Labor-market forward conditions strengthened as US Initial Jobless Claims SA declined 7.0 percent (a positive after inversion). Adjusted Retail and Food Services Sales Total SA increased 0.7 percent, while the Conference Board US Manufacturers New Orders Nondefense Capital Goods Ex Aircraft rose 0.5 percent. The Conference Board US Leading Index Manufacturers’ New Orders Consumer Goods and Materials edged higher by 0.1 percent.

These gains were outweighed by declines across several key areas. The Conference Board US Leading Index Stock Prices 500 Common Stocks fell 0.6 percent, and the University of Michigan Consumer Expectations Index declined 0.7 percent. The Inventory-to-Sales Ratio Total Business dropped 1.5 percent, and Debit Balances in Customers’ Securities Margin Accounts decreased 2.0 percent. United States Heavy Truck Sales SAAR fell 3.3 percent, while US New Privately Owned Housing Units Started by Structure Total SAAR declined 3.4 percent. US Average Weekly Hours All Employees Manufacturing SA was unchanged.

Taken together, the leading components point to a loss of momentum in forward indicators, with isolated areas of strength unable to offset broader softness in housing, expectations, and financial market signals.

ROUGHLY COINCIDENT INDICATOR (17)

The Roughly Coincident Indicator came in at 17, with one component improving and five declining.

US Industrial Production increased 0.7 percent, representing the sole area of strength. Elsewhere, conditions weakened: Conference Board Coincident Manufacturing and Trade Sales slipped 0.1 percent, while US Employees on Nonfarm Payrolls Total SA was essentially flat, posting a slight decline. US Labor Force Participation Rate edged down 0.2 percent, and Conference Board Coincident Personal Income Less Transfer Payments fell 0.4 percent. Conference Board Consumer Confidence Present Situation SA dropped 2.5 percent.

Overall, the coincident data depict a soft and weakening current environment, where declines in income, participation, and sentiment outweigh modest gains in production.

LAGGING INDICATOR (67)

The Lagging Indicator stood at 67, with four components improving and two declining.

Strength was concentrated in credit and inventory measures. Conference Board US Lagging Commercial and Industrial Loans rose 2.1 percent, while US Commercial Paper Placed Top 30 Day Yield increased 1.1 percent. US Manufacturing and Trade Inventories Total SA advanced 0.4 percent, and Census Bureau US Private Construction Spending Nonresidential SA increased 0.2 percent. Against this, US CPI Urban Consumers Less Food and Energy Year over Year NSA declined 1.9 percent, and the Conference Board US Lagging Average Duration of Unemployment rose 8.4 percent and was scored negatively after inversion.

The lagging components continue to reflect underlying firmness in credit conditions and inventories, though the increase in unemployment duration and easing inflation suggest that slack is beginning to emerge beneath the surface.
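The three indicator readings are consistent with a standard diffusion-index calculation over their components. The sketch below assumes each component is scored as improving, unchanged, or declining, with unchanged items counted at half weight (and inverted series, such as jobless claims, flipped before scoring); this rule is an inference from the published readings, not AIER's documented formula.

```python
# Hedged sketch of a diffusion index: percent of components improving,
# counting unchanged components at half weight. The scoring rule is an
# assumption inferred from the published BCM values, not a stated method.

def diffusion_index(improving: int, unchanged: int, declining: int) -> int:
    """Diffusion index on a 0-100 scale, rounded to the nearest integer."""
    total = improving + unchanged + declining
    return round(100 * (improving + 0.5 * unchanged) / total)

# The February 2026 component counts reproduce the published readings:
print(diffusion_index(5, 1, 6))   # Leading: 46
print(diffusion_index(1, 0, 5))   # Roughly Coincident: 17
print(diffusion_index(4, 0, 2))   # Lagging: 67
```

Readings above 50 indicate that more components are improving than deteriorating, which is why the Lagging Indicator (67) still signals firmness while the Leading (46) and Roughly Coincident (17) readings point to softening.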

The February 2026 BCM readings point to a deterioration in forward and contemporaneous conditions alongside continued firmness in trailing measures. The Leading Indicator (46) signals fading momentum, with gains in select demand indicators overshadowed by declines in housing, expectations, and market-based measures. The Roughly Coincident Indicator (17) underscores a weak present, marked by broad-based softness in income, participation, and sentiment. By contrast, the Lagging Indicator (67) reflects residual strength in credit and inventory accumulation, even as labor market strains begin to build. Compared with January’s configuration (63, 42, 33), the shift is notable: forward and current indicators have weakened materially, while lagging measures have strengthened, consistent with a loss of economic momentum. Taken together, the pattern suggests an economy transitioning from uneven expansion toward a more fragile footing. 

As with the prior release, these figures should be interpreted with caution, as the data continue to fill in following recent gaps and may not yet provide a fully reliable picture of underlying trends.

DISCUSSION, March–April 2026

March’s CPI report reflects a sharp but narrowly driven acceleration in headline inflation, with gasoline prices — surging in the wake of the Iran conflict — accounting for the bulk of the increase. Headline CPI rose 0.87 percent month-over-month, the fastest pace in nearly four years, lifting the year-over-year rate to 3.3 percent from 2.4 percent. In contrast, core CPI remained subdued at 0.20 percent on the month and 2.6 percent year-over-year, as easing in services and stability across several heavily weighted goods categories helped contain broader price pressures. Energy alone contributed roughly 70 basis points to the monthly headline gain, with gasoline prices jumping over 21 percent on a seasonally adjusted basis — the largest increase on record. Outside of energy and a modest pickup in airfares, the transmission of higher commodity prices into consumer prices remains limited for now, reflecting the typical lag in pass-through. Food prices were largely flat, while core goods inflation held at 0.1 percent amid a mix of declines in categories such as used cars, household furnishings, and prescription drugs, offset by strength in technology-related goods tied to memory-chip shortages and selective increases in apparel and recreation items.

Beneath the surface, inflation dynamics remain mixed but relatively contained, with some evidence of modest firming in the breadth of price increases alongside continued softness in key service categories. The share of core components running above the Fed’s two-percent target edged higher, though still below levels seen in prior years, while core services inflation slowed to 0.2 percent despite a slight reacceleration in rents. Discretionary services — including travel, lodging, and recreation — showed signs of weakening, potentially reflecting early consumer sensitivity to higher fuel costs. Looking ahead, near-term inflation is likely to remain elevated as energy prices continue to feed through, with additional pressure expected in airfares and a possible one-time firming in rents. However, a recently announced ceasefire and still-muted core trends suggest that underlying inflation may remain manageable. Against this backdrop, the Federal Reserve is likely to maintain a wait-and-see posture, holding rates steady through much of 2026 as it assesses the balance between transient energy-driven inflation and a still-cooling core, with markets continuing to price limited near-term policy easing.

Complementing this picture, February’s PCE data indicate that underlying inflation pressures were already firming before the Iran conflict, particularly within goods categories tied to global supply chains. Core PCE — the Fed’s preferred gauge — remained elevated near 3 percent year-over-year, with shorter-run annualized measures moving higher, driven in part by durable goods such as vehicles, apparel, and technology-related components linked to memory-chip shortages. At the same time, service-sector inflation showed signs of moderation, with categories such as health care and recreation contributing less to overall price growth. Notably, core PCE appears to have accelerated relative to core CPI, reversing its typical relationship and widening the gap between the two measures to unusually large levels. This divergence reflects differences in composition and weighting — particularly PCE’s greater exposure to financial services, food services, and technology-related goods — while also highlighting how sector-specific shocks can shape broader inflation readings.

Taken together, the data suggest an inflation environment that is neither reaccelerating broadly nor decisively cooling, but instead being shaped by a shifting mix of supply-side pressures and uneven demand. Energy and goods-related shocks are pushing headline measures higher and adding volatility to core readings, while services — still the dominant driver of inflation — are gradually easing but remain firm enough to prevent a rapid return to target. As of early April 2026, U.S. inflation appears increasingly bifurcated: headline measures are being driven by geopolitical and commodity shocks, while underlying inflation is moderating only slowly and unevenly. This leaves the overall trajectory uncertain, with inflation neither clearly reaccelerating nor convincingly returning to the Federal Reserve’s two-percent objective.

Recent labor market data present a picture of resilience in the near term, though much of the apparent strength reflects temporary factors rather than a clear reacceleration in underlying demand. March payrolls rebounded by 178,000 following a weather- and strike-depressed February, with gains concentrated in sectors that had previously been disrupted, including leisure and hospitality, construction, and health care. The resolution of the Kaiser Permanente strike alone mechanically boosted employment, while improved weather conditions supported hiring in cyclical sectors. Private payrolls rose 186,000, led by services, though pockets of weakness persisted in professional and business services, finance, and information — areas likely facing structural adjustments, including ongoing layoffs tied to technological change. Wage growth moderated to 0.2 percent, and a slight decline in hours worked weighed on aggregate income, suggesting that labor-market momentum remains modest beneath the headline rebound. The unemployment rate fell to 4.26 percent, though this was driven in part by a decline in labor-force participation, pointing to a labor market that is stable but not tightening materially.

Broader indicators continue to signal a gradual cooling in labor demand, reinforcing the view that conditions are settling rather than strengthening. Job openings declined to 6.88 million in February, while the hiring rate fell to its lowest level since 2020, reflecting employer caution amid rising costs and growing uncertainty even before the escalation of the Iran conflict. The ratio of vacancies to unemployed workers remains below one, indicating that labor supply is no longer being outpaced by demand, while the quits rate has fallen to pandemic-era lows, suggesting reduced worker confidence in job mobility. Initial jobless claims have edged higher but remain historically contained and geographically limited, underscoring the absence of broad-based layoffs. Taken together, the data suggest a labor market that is holding up in the short run — supported by temporary rebounds and seasonal factors — but gradually losing dynamism. Looking ahead, higher input costs and tighter financial conditions could weigh more heavily on hiring in the second half of the year, leaving policymakers inclined to remain patient as they assess whether current stability gives way to more meaningful softening.

March’s ISM Manufacturing report suggests continued expansion, though the underlying details point to a more nuanced and less robust picture than the headline implies. The PMI rose to 52.7, supported largely by slower supplier deliveries and inventory drawdowns tied to supply disruptions stemming from the Iran conflict, rather than a broad-based strengthening in demand. In contrast, forward-looking components softened: new orders eased to 53.5, export demand declined, and order backlogs grew more slowly, indicating some loss of momentum on the demand side. Production picked up, and inventories were depleted more quickly, consistent with firms working through constrained supply chains rather than responding to accelerating end demand. At the same time, the prices-paid index surged to 78.3, reflecting a sharp rise in input costs, while employment continued to contract modestly. Taken together, the report points to an industrial sector still expanding but increasingly shaped by the tension between rising costs and moderating demand, with supply-side disruptions playing an outsized role in recent strength.

The ISM Services report, by contrast, highlights continued growth but with clearer signs of strain from rising costs and operational pressures. The headline index declined to 54.0 from 56.1, remaining in expansion territory but marking a loss of momentum, even as new orders strengthened further and export demand improved. Beneath the surface, however, firms are facing a sharp increase in input costs, with the prices index jumping to 70.7 — the largest monthly gain in nearly 14 years — and supply chains showing renewed signs of stress. These pressures appear to be weighing on hiring, with the employment component falling into contraction for the first time in several months, and production slowing. Business commentary points to rising fuel costs, logistical disruptions, and broader geopolitical uncertainty as key challenges, particularly in transportation and travel-related industries. Overall, the services sector remains a source of growth, but the combination of firm demand and intensifying cost pressures introduces a more balanced and uncertain outlook, especially as firms adjust hiring and investment decisions in response to the evolving environment.

Recent sentiment data across consumers and small businesses suggest a modest improvement in current conditions but a growing sense of caution about the outlook, particularly as rising energy costs and geopolitical uncertainty begin to weigh on expectations. The Conference Board’s consumer confidence index edged higher in March, driven by a stronger assessment of present conditions and a still-stable view of the labor market, with slightly more consumers reporting jobs as plentiful. However, forward-looking components weakened, including expectations for income, employment, and spending across services categories. At the same time, inflation expectations moved sharply higher, with year-ahead expectations rising above 6 percent, signaling that higher oil prices are beginning to shape household perceptions. The University of Michigan survey paints a somewhat softer picture overall, with headline sentiment declining and both current conditions and expectations deteriorating, particularly after the onset of the Iran conflict. While short-term inflation expectations rose, longer-term expectations remained relatively anchored, suggesting consumers still view the current price pressures as at least partially transitory.

Small-business sentiment also softened meaningfully in March, reflecting rising costs and elevated uncertainty, though the deterioration remains concentrated in expectations rather than current activity. The NFIB Small Business Optimism Index fell below its long-run average for the first time in nearly a year, driven by weaker profit trends, declining expectations for business conditions, and reduced plans for capital spending. At the same time, hiring plans held steady and expected real sales eased only modestly, indicating that firms have yet to materially pull back on operations. The sharp rise in the uncertainty index underscores the role of policy, cost pressures, and geopolitical developments in shaping business outlooks. Taken together, the data suggest a sentiment environment that is holding up at present but becoming increasingly fragile, with rising inflation expectations and uncertainty posing risks to both consumer spending and business investment in the months ahead.

Recent retail and consumption data point to a consumer sector that remains nominally resilient but is showing clearer signs of strain once adjusted for inflation and income dynamics. Headline retail sales rose a strong 1.7 percent in March, boosted in large part by higher gasoline prices, with gains broadly distributed across categories including furniture, general merchandise, and online spending. Even excluding autos and gas, sales increased a solid 0.6 percent, and control-group sales — a key input into GDP — rose 0.7 percent. However, much of this strength appears to reflect price effects and temporary supports such as tax refunds and higher-income spending, rather than a sustained acceleration in real demand. In real terms, control-group sales were likely flat, consistent with a slowdown in first-quarter consumption growth to around 1.0 percent from 1.9 percent previously. Earlier PCE data reinforce this softer underlying picture: real personal spending rose just 0.1 percent in February, with services — typically the more stable component — slowing to its weakest pace in several months, while goods spending was constrained by rising prices in categories such as vehicles and technology-related items.

At the same time, income growth has softened, adding to pressure on household finances. Personal income declined modestly in February, with slower gains in compensation and declines in transfer payments and dividend income weighing on the headline. With spending continuing to outpace income, the saving rate fell to 4.0 percent, while real disposable income growth has slowed to one of its weakest rates in recent years. This combination suggests that consumers are increasingly relying on reduced savings and selective spending adjustments to maintain consumption levels. Evidence of this adjustment is already visible in weaker discretionary services spending and in goods categories where higher prices appear to be dampening volumes. Taken together, the data suggest that while consumer spending has held up in nominal terms, underlying real demand is softening, leaving consumption increasingly vulnerable to further increases in energy prices and broader cost pressures.

Recent data on business investment and production suggest that underlying industrial momentum was solid heading into the Iran conflict, though near-term readings have been distorted by sector-specific swings and early supply disruptions. February’s durable goods report showed a headline decline of 1.4 percent driven largely by a sharp drop in aircraft orders, but underlying demand was firm, with orders excluding transportation rising 0.8 percent and broad-based gains across primary metals, machinery, and motor vehicles. Core capital goods orders and shipments — key indicators of equipment investment — both strengthened, pointing to a healthy pace of business spending prior to the escalation in geopolitical tensions. By contrast, March industrial production fell 0.5 percent, reflecting a combination of payback from strong February revisions, weaker vehicle output, declining utilities production, and emerging supply-chain constraints tied to the conflict. Manufacturing output edged lower despite prior signs of strength in survey data, while capacity utilization slipped modestly. Taken together, the data suggest that the investment cycle entered the current period on relatively firm footing, but faces a more uncertain outlook as defense-related demand and supply disruptions offset potentially softer private-sector activity amid rising costs and heightened uncertainty. 

Pulling back to a wider view, recent macro data suggest the U.S. economy entered 2026 with firmer underlying momentum than headline figures imply, even as the current environment has become more uncertain. Fourth-quarter GDP was revised down to a modest 0.5 percent growth rate, but income-based measures point to stronger activity, with real gross domestic income rising 2.6 percent and corporate profits posting robust gains. Much of this divergence reflects technical distortions tied to last fall’s government shutdown, which depressed measured output while leaving underlying income and demand comparatively intact; real final sales to private domestic purchasers still advanced a solid 1.8 percent. This stronger foundation is broadly consistent with the Federal Reserve’s latest Beige Book, which describes continued, if modest, expansion across most regions through early April, with consumer spending holding up and manufacturing activity generally improving. At the same time, the report highlights a shift in tone, with firms increasingly citing geopolitical tensions and rising costs as sources of uncertainty, leading many to adopt a more cautious, wait-and-see approach to hiring, pricing, and investment decisions.

Against that backdrop, the policy outlook has tilted more cautious and incrementally hawkish, with the Federal Reserve signaling a clear preference to remain on hold amid heightened uncertainty and persistent inflation risks. Minutes from the March FOMC meeting indicate that while officials still see a path to eventual rate cuts, that timing has been pushed further out, with greater concern that inflation may prove more durable — even raising the possibility of additional tightening if price pressures fail to ease. At the same time, policymakers recognize a two-sided risk environment, in which the same oil-driven shock that lifts inflation could ultimately weigh on employment and growth, reinforcing a data-dependent and “nimble” approach. Near-term data are expected to show continued firm activity alongside elevated cost pressures, though the partial easing in oil prices following the early-April ceasefire may help reduce immediate risks to both inflation and growth. Fiscal factors are also providing some near-term support to demand, with tax refunds helping to sustain consumption despite rising energy costs. Overall, the policy stance reflects a balancing act: holding steady in the face of competing risks, while deferring any easing until clearer evidence emerges that inflation is durably moving back toward target. 

Stepping back, the broader picture is one of an economy that remains fundamentally intact and, in several respects, more resilient than headline volatility might suggest. Growth entered 2026 on solid footing, inflation — while temporarily elevated by energy — remains contained beneath the surface, and both labor markets and business activity continue to expand, albeit at a more measured pace. At the same time, the composition of recent data reveals a system under pressure rather than in decline: real consumption is softening, hiring is becoming more selective, and firms are navigating rising input costs and supply disruptions with caution rather than retrenchment. The outlook, therefore, is cautiously constructive — supported by stable income growth, still-positive demand, and the likelihood that current inflation shocks will fade — but increasingly clouded by policy ambiguity, lingering tariff risks, and the unpredictable path of the Iran conflict. These crosscurrents leave the economy in a delicate but not unfavorable position: resilient in the present, but with confidence and momentum vulnerable to further shocks or missteps in policy.

LEADING INDICATORS

ROUGHLY COINCIDENT INDICATORS

LAGGING INDICATORS

CAPITAL MARKETS PERFORMANCE