In January 2026 the AIER Everyday Price Index (EPI) rose 0.33 percent to 298.0, starting the year with its largest increase since June 2025. Seventeen of its 24 constituents rose in price in January, with five declining and two unchanged. Pets and pet products, gardening and lawncare services, and housing fuels and utilities saw the largest monthly price increases, while alcoholic beverages at home, personal care products, and intercity transportation saw the largest declines. 

AIER Everyday Price Index vs. US Consumer Price Index (NSA, 1987 = 100)

(Source: Bloomberg Finance, LP)

Also on February 13, 2026, the US Bureau of Labor Statistics (BLS) released its January 2026 Consumer Price Index (CPI) data. Month over month, headline CPI rose 0.2 percent (against a consensus forecast of 0.3 percent), while core CPI rose 0.3 percent, matching expectations.

January 2026 US CPI headline and core month-over-month (2016 – present)

(Source: Bloomberg Finance, LP)

Consumer prices in January were driven largely by a moderate rise in core inflation, with the index excluding food and energy increasing 0.3 percent for the month. Price gains were broad-based across services and discretionary categories, including sharp increases in airline fares alongside advances in personal care, recreation, medical care, communication, apparel, and new vehicles. These increases were partly offset by declines in used cars and trucks, household furnishings and operations, and motor vehicle insurance, reflecting ongoing normalization in some durable-goods and insurance-related costs. Within medical care, hospital services and physicians’ services moved higher while prescription drug prices were unchanged, contributing to steady upward pressure in healthcare costs.

Food prices rose 0.2 percent in January, led by gains across most grocery categories, including cereals and bakery products, dairy, meats, nonalcoholic beverages, and fruits and vegetables, while the “other food at home” category declined modestly. Prices for meals away from home edged up 0.1 percent, with increases in limited-service meals offset by flat pricing at full-service establishments. Energy prices, by contrast, fell 1.5 percent over the month, driven primarily by a 3.2 percent decline in gasoline prices and a slight drop in electricity, although natural gas prices increased. Overall, the mix of softer energy costs and firmer core categories left headline inflation shaped by continued resilience in services and selective goods inflation even as energy provided a temporary offset.

Over the prior 12 months, the headline Consumer Price Index rose 2.4 percent against an expected 2.5 percent, while core CPI rose 2.5 percent from January 2025 to January 2026, in line with expectations.

January 2026 US CPI headline and core year-over-year (2016 – present)

(Source: Bloomberg Finance, LP)

Over the 12 months ending in January, food prices continued to firm, with grocery costs rising 2.1 percent on the year as most major categories posted gains. Prices for nonalcoholic beverages led the increase, climbing 4.5 percent, while cereals and bakery products advanced 3.1 percent and meats, poultry, fish, and eggs rose 2.2 percent. The “other food at home” category also increased 2.1 percent, and fruits and vegetables posted a more modest 0.8 percent gain, partially offset by a slight 0.3 percent decline in dairy and related products. Dining out remained a notable source of inflation, with the food away from home index rising 4.0 percent over the year, driven by a 4.7 percent increase in full-service meals and a 3.2 percent rise in limited-service meals.

Energy prices were broadly flat over the year, edging down 0.1 percent overall as a sharp 7.5 percent decline in gasoline prices was largely counterbalanced by sizable increases in electricity and natural gas, which rose 6.3 percent and 9.8 percent respectively. Excluding food and energy, core consumer prices increased 2.5 percent over the past year, with shelter costs advancing 3.0 percent and continuing to anchor underlying inflation. Additional upward pressure came from medical care, household furnishings and operations, recreation, and personal care — the latter posting a notable 5.4 percent gain — even as some goods categories such as used vehicles and certain household items showed signs of cooling.

The January report came in milder than many economists expected — especially for a month that typically runs hot as businesses reset prices at the start of the year. Yet beneath the surface, the inflation story remains uneven. Core goods prices were flat overall, masking a split between rising recreation-related items — such as consumer electronics, sporting goods, and toys — and declines in used vehicles, medical commodities, and some household goods. These crosscurrents reflect several forces at work simultaneously: lingering tariff pass-through in certain goods, AI-driven demand for electronic inputs, regulatory changes holding down medical costs, and fading supply disruptions in groceries. Services inflation, however, continues to run warmer, led by airfares, car rentals, and admission prices for sporting events. Shelter inflation moderated, with both rents and owner-equivalent rents slowing, offering a potential sign that one of the largest drivers of recent inflation is gradually cooling. Notably, prescription drug prices were unchanged — unusual for January — partly due to negotiated Medicare pricing that offset typical annual increases.

Taken together, the report suggests inflation pressures are shifting rather than disappearing. Discretionary services tied to travel, recreation, and wealth-effect spending remain firm, even as goods prices soften and everyday essentials such as energy and groceries show signs of relief. Price increases also became more widespread across categories — a common January phenomenon — but the overall pace was far more restrained than in recent years, hinting that underlying disinflation may dominate in coming months if current trends hold. Financial markets interpreted the data as supportive of potential Federal Reserve rate cuts later this year, though bond-market reactions were mixed given persistent strength in services inflation. For households, the takeaway is that while inflation hasn’t vanished, the early-2026 trend looks less like a renewed surge and more like a gradual cooling — with pockets of stubborn price growth that policymakers will continue to watch closely.

Sports betting has become an epidemic, especially among young men. The Guardian recently aggregated some alarming statistics about its prevalence. The story notes:

Somewhere between 60 and 80 percent of high school students reported having gambled in the last year, the National Council on Problem Gambling reported in 2023. A study commissioned by the NCAA found that 58 percent of 18-to-22-year-olds had bet on sports – although it should be said that in most states this is illegal before the age of 21. 

Prediction markets have contributed to the normalization of gambling by blurring the line between investment and gambling. You can now essentially place betting parlays on Robinhood, an app previously dedicated to retail stock trading. 

To see why this uptick qualifies as an epidemic, a little economics shows how sports betting necessarily makes the average participant poorer.

Investing vs Gambling 

To see why, it is helpful to contrast gambling with investing. After all, what makes betting on your favorite team different from buying some index funds for retirement? 

Well, first of all, investing can be a positive-sum game. When you buy stock in a company, the company receives money today, which it can use to grow, and in exchange you get equity in a company that grows in value. It’s a potential win-win. If you buy from a broker, the same logic holds, just with more steps between you and the company.

This ability to have a positive-sum game is why, when individuals diversify into a large number of stocks, their portfolios grow. If someone invests in an index fund like the S&P 500, their money has historically grown at an average of 10 percent annually. This doesn’t require any special insider information or in-depth research. It’s just riding the wave of positive-sum exchanges. 
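To make the compounding concrete, here is a minimal sketch using the roughly 10 percent historical average return cited above (the $1,000 principal is illustrative, and actual returns vary substantially year to year):

```python
# Compound growth of a one-time $1,000 investment, assuming the
# ~10 percent historical average annual return cited above.
def future_value(principal: float, annual_return: float, years: int) -> float:
    """Value of a lump sum after compounding at a fixed annual rate."""
    return principal * (1 + annual_return) ** years

for years in (10, 20, 30):
    print(f"After {years} years: ${future_value(1_000, 0.10, years):,.0f}")
```

At that average rate, the money roughly doubles every seven years, without any insider information or stock-picking, simply by riding the wave of positive-sum exchanges.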

From the perspective of monetary return, sports betting is not positive-sum. If two people bet against each other on the outcome of a game, one person wins and the other loses. This is an example of a zero-sum game. If Jon and I bet $100 on the Bears-Packers game, one person loses $100, and one gains $100. If you add those gains ($100 and -$100), you get zero. 

Betting on a sports betting platform is even worse for participants. Betting platforms need to make money on the bets as well, so, one way or another, they take a cut of that $200 pool. This makes the game negative-sum in monetary terms for participants. If Jon and I use a platform to bet and I win, I get $100 minus whatever the platform takes, say $5, while Jon loses $100. In that case, the net return to the bettors is $95 (my return) minus $100 (Jon’s loss), or negative $5.

If you consider the platform’s $5 gain, it remains zero sum. But after the mandatory house take, the exchange is negative sum for the bettors. This is the first reason why sports betting is financially a bad idea for participants. 
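The payoff arithmetic can be laid out directly; the $100 stake and $5 house take are the illustrative figures from the example above:

```python
# Net returns for a $100 head-to-head bet settled through a platform
# that keeps a cut of the winnings ($5 here, purely illustrative).
stake = 100
house_take = 5

winner_return = stake - house_take   # winner collects $95
loser_return = -stake                # loser is out $100

bettors_total = winner_return + loser_return
print(bettors_total)               # -5: negative-sum for the bettors
print(bettors_total + house_take)  # 0: zero-sum once the house is counted
```

The same structure holds whatever the platform's cut: the bettors as a group always lose exactly what the house keeps.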

Point Shaving and Efficient Markets 

The downside of gambling can be even further exposed by economic reasoning. In particular, we’re going to talk about the efficient market hypothesis (EMH). 

When economists talk about efficiency, people often scoff. Many dislike the results of markets and therefore believe markets can’t be efficient.

Efficiency doesn’t mean we like the results, though. All efficiency means (for the EMH) is that markets incorporate available information when pricing assets. 

For example, let’s say Apple discovers a way to improve the speed of iPhones by 10x, and this information becomes publicly available. The company plans to implement this technology in the next-generation iPhone, which will be released next year. Let’s further say that this improvement will lead to many Android users switching phones when the new phone comes out. These sales will mean higher profits and, therefore, a higher stock price for Apple. 

Question—will Apple stock prices go up as soon as people discover this, or not until after the new version is released? If investors believe the above information is accurate, they will buy stocks immediately in order to gain from the future improvement in profits. Since everyone rushes to buy the stock today, the information about the future is incorporated into the price today, not when the new version is released. 

For an extreme example, imagine a company publicly announced it would declare bankruptcy next week. Do you think stock prices would wait a week to tumble? Of course not. 

Markets reflect all publicly available relevant information, and this includes betting markets. If a major player for a team gets injured in practice, gamblers will bet against the team in question and shift the odds. 

Since the stock market is positive-sum, others having more information than you may not be a problem. If Apple is in your diversified portfolio, you don’t need to scan headlines to learn about the company’s technological innovations before anyone else does. 

Sports betting, though, is negative sum for bettors. If someone has information that you don’t have, they can exploit that asymmetry to earn money off of you.  

On average, a person betting randomly on sports will lose money because the game is negative-sum for bettors. That means that in order to make money consistently, you need access to knowledge or information that others don’t have.

Here’s the thing: the average person cannot have more information than everyone else, by definition. There are people who bet on sports for a living. Does an average Joe have access to more information and prediction tools than industry insiders? It seems very unlikely. Most participants, over the course of their lives, will be net losers in sports betting. 

The best evidence of this can be seen by the onslaught of point-shaving scams being uncovered in college and professional sports. Investigations by federal officers, coach firings, and indictments of players for point shaving saturate recent headlines in the world of college basketball. 

Many people involved in these schemes will get caught, but it seems likely that other insiders either knew about these schemes or learned about them by being close to the industry. When you compete in sports betting, you compete against people with this sort of information.

Put differently, the odds in any sports betting situation are set by information that average people don’t have access to. To win money, you need to beat those odds, which means having better knowledge and information than those who set them. 

The implication here seems straightforward. The average person in sports betting loses money. The statistics match the logic. NBC reported on a study that showed: 

…compared with states that did not implement sports gambling, states that did so saw credit scores drop by a statistically significant, though modest, amount, while bankruptcies increased 28 percent and debt transferred to debt collectors climbed 8 percent. Auto loan delinquencies and use of debt consolidation loans also increased, they found. 

The structure of sports betting laws is beyond the control of the average person as well, but personal behavior is not. The personal implications here are clear. Sports betting is bad for your personal finances, and average Joes won’t win (even if you think you will). You may know a lot about basketball, but do you know as much as the professional bettor whose cousin happens to be a physical therapist or a coach of a team?

Every time you see a point-shaving headline, it should be a strong reminder. In a negative-sum game, the winners are likely those exploiting information you can’t access.

“Life is like a box of chocolates. You never know what you’re gonna get.” 

The line from Forrest Gump is meant to capture uncertainty in love and life, but every Valentine’s Day, it accidentally describes markets just as well. Chocolate prices rise, products take different shapes, and consumers are surprised once again at the checkout line. The usual explanation immediately turns to corporate greed. Yet what Forrest Gump’s chocolate box really reminds us is that uncertainty, timing, and expectations shape outcomes, and that prices exist to navigate uncertainty, not to exploit it.

Xocolātl, the beverage we now call chocolate, originated in tropical Mesoamerica, across what is today Mexico to Costa Rica. Before it became a sweet confection, xocolātl was a bitter mixture of cacao beans, water, and spices, cultivated, traded, and consumed for elite, ceremonial, and everyday uses. Only after 1492, through the “Columbian Exchange,” a term coined by Alfred W. Crosby, did cacao enter the wider Atlantic economy, where ingredients, capital, and know-how recombined across continents. New World cacao met Old World sugar, dairy, and manufacturing, and the modern chocolate industry was born.

Although centuries removed from the Maya and Aztec civilizations, chocolate remains a symbol of affection today. The transatlantic transformation of cacao into chocolate, combined with medieval courtship traditions, helped produce Valentine’s Day as we know it. Last year, among the cards, flowers, and jewelry, Americans bought 75 million pounds of chocolate, roughly the weight of 15,000 elephants. For 2026, the National Retail Federation and Prosper Insights & Analytics project record spending: “Consumer spending on Valentine’s Day is expected to reach a record $29.1 billion…surpassing the previous record of $27.5 billion in 2025.” Record spending, however, is often mistaken for evidence of record prices. When prices rise, many are quick to draw back their bow and let their arrow fly even when the true source of higher costs lies elsewhere. 

Rising prices around holidays are often attributed to a familiar story of corporate tricks, rather than treats, known as “greedflation.” Supermarkets and chocolate companies are accused of exploiting a sentimental holiday, padding margins under the cover of romance. In recent years, this narrative has resurfaced almost reflexively whenever grocery prices rise. However, retailers do not set prices in a vacuum; they respond to constrained supply and higher input costs. To understand why chocolate costs more, we need to look past the supermarket aisle to the governments and growing conditions that shape the cocoa market itself.

The International Cocoa Organization notes that roughly 70 percent of cocoa is produced in Africa, with Côte d’Ivoire and Ghana leading output at about 1,850 and 650 thousand tons, respectively, in 2025. Cocoa is central to both economies, accounting for about 15 percent of Côte d’Ivoire’s GDP and seven percent of Ghana’s GDP. In 2018, the two nations formed the Côte d’Ivoire–Ghana Cocoa Initiative (CIGCI), informally referred to as “COPEC.” Its stated aim is to correct perceived market failures by raising prices: “Without correcting the many market failures, the cocoa economy is destined to become a counter-model of sustainability.”

Switzerland’s national broadcaster, SWI, documents a sharp price movement beginning in early 2018, coinciding with the cartel’s creation, suggesting that coordinated policy had immediate market effects.

According to World Finance, COPEC may also have served domestic political goals, with promises of higher prices timed around election cycles to win farmer support. Regardless of motivation, both countries have announced higher prices for the 2025/26 crop season. Côte d’Ivoire will raise prices by 39 percent, which pales in comparison to Ghana’s 63 percent price increase. 

These administratively set prices add to a system already strained by corruption within Ghana’s Cocoa Board (COCOBOD) and black-market activity in Côte d’Ivoire. Highlighting growing smuggling operations, Ivorian authorities last year seized 110 shipping containers, about 2,000 metric tons, of cocoa beans falsely declared as rubber, worth $19 million. “The tax on this shipment should have been 19.5 percent, including the 14.5 percent tax on cocoa exports and the five percent registration tax. In that case, the Ivorian state would have collected 2.9 million pounds in taxes. Meanwhile, the tax on rubber exports is only 1.5 percent.”
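A back-of-the-envelope sketch shows the size of the smuggling incentive, applying the quoted tax rates to the $19 million shipment value (the quoted 2.9 million tax figure is denominated in a different currency, so the dollar amounts here are only indicative):

```python
# Tax owed on a $19M cocoa shipment declared honestly vs. falsely
# declared as rubber, using the rates quoted in the text.
shipment_value = 19_000_000
cocoa_rate = 0.145 + 0.05   # 14.5% export tax + 5% registration = 19.5%
rubber_rate = 0.015         # rubber export tax: 1.5%

cocoa_tax = shipment_value * cocoa_rate
rubber_tax = shipment_value * rubber_rate
print(f"Declared as cocoa:  ${cocoa_tax:,.0f}")
print(f"Declared as rubber: ${rubber_tax:,.0f}")
print(f"Tax avoided:        ${cocoa_tax - rubber_tax:,.0f}")
```

With an 18-percentage-point gap between the two rates, mislabeling the cargo wipes out more than 90 percent of the tax bill, which is incentive enough to explain the scale of the smuggling.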

Needless to say, Côte d’Ivoire and Ghana have constructed a highly interventionist system around their most important export. Compounding these policy distortions, the 2025/26 crop season is expected to see a 10 percent fall in output due to “shifting weather patterns, ageing tree stocks, disease, and destructive small-scale gold mining.” This shortage has intensified pressure in an already volatile cocoa market. According to FRED, cocoa prices have risen by more than 70 percent in the last five years. 

Last year, North America’s largest chocolate producer, Hershey, announced price increases across household names such as Reese’s, Kit Kat, and Kisses: “It reflects the reality of rising ingredient costs, including the unprecedented cost of cocoa.” In the earnings Q&A call on February 5, 2026, CEO Kirk Tanner stated, “Our actions…are anchored in consumer insights and the brands remain affordable and accessible. Seventy-five percent of our portfolio is still under $4.” Tanner framed their strategy as keeping products as affordable and accessible as possible despite rising cocoa costs.

Given cocoa price volatility, Hershey’s effort to keep chocolate affordable, and supermarket margins of just one to three percent, “greedflation” melts away like a chocolate kiss on Valentine’s Day — leaving scarcity and policy, not corporate greed, as the real culprits. The bitterness in chocolate prices comes from constraints and institutions, not from greed.

The new year brought new developments in the world of financial services: specifically, the role of artificial intelligence (AI). In January, JPMorgan Chase announced it would replace its proxy advisory services with artificial intelligence. Chief Executive Jamie Dimon even went as far as to say that proxy advisors are “incompetent” and “should be gone and dead, done with.” 

For those who have been following issues related to environmental, social, and governance (ESG) and diversity, equity, and inclusion (DEI), this is a major event. The two major proxy advisory firms, Institutional Shareholder Services (ISS) and Glass, Lewis, & Co. (Glass Lewis), have been criticized for using their recommendations on shareholder voting to push politically motivated ESG/DEI crusades (sometimes unbeknownst to the shareholders they represent). This has made the industry the target of a recent executive order aiming to increase federal oversight in the proxy advisory industry. 

Ultimately, though, the proxy advisory industry was born out of regulation. Further government intervention could invite greater cronyism. If the proxy advisory industry wants to win customers back, it needs to focus on fiduciary obligations, not politics. If federal officials want greater transparency and accountability in the proxy advisory market, they should focus on rolling back unnecessary regulations and simplifying any regulations that remain to encourage a competitive proxy market. 

How Did We Get Here? 

A proxy vote is a vote where a shareholder of a publicly traded company authorizes another party to vote their shares at a corporate meeting. Proxy voting involves electing company directors, approving executive compensation, voting on mergers, and considering shareholder proposals. It allows shareholders to participate even if they cannot attend the meeting in person or submit a ballot electronically.

Research on proxy advisory firms notes that institutional investors – those who manage large numbers of shares on behalf of many clients – began paying attention to shareholder voting matters after a “wave of hostile takeover actions” during the 1980s. Around the same time, private retirement funds were legally required to vote their shares based on a “prudent man” standard of care. By the early 2000s, this legal requirement was expanded to include mutual funds and other registered investment companies. 

The proxy advisory industry as we know it today emerged from two main sources. Small and midsize funds sought guidance on shareholder voting practices to meet their legal obligations. Then, in 2003, the SEC introduced a regulation requiring all institutional investors—including mutual funds and index funds—to develop and disclose both their proxy voting policies and their actual votes. These policies and guidelines must be free from conflicts of interest, yet the regulation explicitly allows institutional investors to rely on third-party proxy advisors to meet this requirement. Notably, these third-party firms are not held to the same fiduciary standards as the institutional investors they advise.

Enter Glass Lewis and ISS.

Although there are technically five proxy advisory firms, the two largest (ISS and Glass Lewis) have a roughly 97 percent share of the market for proxy advisory services. These services have a major influence over corporate governance decisions, company-wide equity compensation, and a host of other issues. 

Having such a large market share made them an enticing target for political activists. Before long, activists manipulated proxy guidelines to recommend voting for political crusades such as ESG and DEI. As one of the authors wrote in a recent white paper, these ideas are often incoherent, contradictory, and even run counter to successful business performance and high financial returns. Unbeknownst to many shareholders, who put their voting on autopilot based on proxy recommendations (known as robovoting), their votes pushed political objectives to the detriment of their own financial security. 

Can Proxy Advisory Firms Win Back Trust? 

As Dimon’s comments suggest, the two big proxy advisory firms have both a PR problem and a business problem. Institutional investors are looking for exits or have already taken them. New advisory firms are forming. And bigger clients like JPMorgan believe they can harness AI to bring their proxy work in-house. 

If ISS and Glass Lewis want to win back investor and shareholder trust, the best thing they can do is dump the political crusades. These services came about because there was demand for voting guidelines that complied with an overbearing SEC. Proxy advisory services would do well to demonstrate that they follow a prudent man standard of care and the sole interest rule: that they make decisions based solely on the financial well-being of their clients.

By voluntarily committing to these standards and delivering recommendations that benefit clients, they can refute claims of incompetence and show they can be less biased than an AI program.

Markets Ensure Accountability & Transparency

Now, the White House wants to intervene again in response to the problems created by regulations and interventions. We’ve seen this pattern before: politicians see a problem and intervene. The intervention then leads to new, unforeseen problems, prompting a renewed urge for government intervention. Unfortunately, this approach to “fixing” problems leaves people worse off, creates unintended consequences, and gives greater power to government officials. 

If policymakers are concerned about proxy advisors and political crusades, they should focus on deregulation. Instead of adding an additional layer of regulatory complexity, federal policymakers can improve accountability for proxy advisory services by promoting market competition and removing government regulations. 

Currently, proxy advisory services can advertise their business as a means of helping funds comply with onerous regulations rather than increase the value of their shares. If the SEC relinquishes requirements to publish voting guidelines and shareholder votes, proxy advisory services will have to entice clients by showing the value they add to a potential client’s business. If they fail to do so, potential clients will happily pass them over for other service providers, bring shareholder voting guidelines in-house (as many public pension systems have done), or rely on emerging technology.  

There is no doubt that the proxy advisory industry, once firmly planted in American finance, is now facing regulatory threats and existential crises from AI. If these businesses hope to survive, they would do well to focus on serving customers instead of political ideologies.

In 1988, when Robert Lawson was a first-year economics graduate student at Florida State University, he was surprised one day to look up and see Dr. James D. Gwartney standing in front of him. He had come down from a different floor of the Bellamy Building to find Lawson. That was unusual, because grad students were normally summoned by tenured professors, not sought out by them.

But in this case, Gwartney had an assignment that was considerably more interesting than grading papers or returning a library book. He had received a letter inviting him into a group attempting to construct an index to measure economic freedom. Gwartney’s first reaction was that it was a “harebrained” idea. How could you quantify such a thing? Then he checked the letter’s sender: Milton Friedman. 

Gwartney decided this might be a rabbit hole worth going down. He offered Lawson the chance to go with him.

The Economic Freedom of the World Index

In 1996, the Economic Freedom of the World (EFW) Index debuted. The model aggregated dozens of variables into a single figure for each nation, between 0 (the least economic freedom) and 10 (the most economic freedom). The report officially launching the index was co-authored by Gwartney, Lawson (who had finished his PhD in 1992), and Walter Block (then of Holy Cross). Friedman wrote the foreword.

Since that time, the EFW Index has offered researchers the only objective, mathematically transparent measure of economic freedom on a country-by-country basis (a competing index from The Heritage Foundation includes a subjective component). It incorporates variables from five areas (size of government, legal system and property rights, sound money, freedom to trade internationally, and regulation).

As of 2022, the index had been cited in over 1,300 peer-reviewed journal articles. An annual report now includes readings for 165 nations, with many going back to 1970. And the data are filled with stories.

Chile

In 1970, for instance, Chile’s EFW Index was in the bottom quartile globally at 4.69. This was the year socialist Salvador Allende won the presidency with only 36 percent of the popular vote (no candidate having won a majority, the legislature chose him). A slew of socialist reforms followed. Banks were nationalized, price controls were instituted, and money was printed like there was no tomorrow. Predictably, private investment plummeted and inflation spiked as the nation plunged into a recession.

A military coup overthrew Allende in 1973, with an alleged but uncertain level of help from the Nixon Administration and in particular Secretary of State Henry Kissinger. The new Chilean leader, Augusto Pinochet, was no socialist. But he did wield power like one—through brutal repression. And while his advisors included free-market economists such as Hernán Büchi, the regime’s policies were at best a burlesque of economic freedom.

Consequently, in 1975 Chile’s EFW Index reached an all-time low of 3.82. But after Pinochet was defeated in a 1988 plebiscite, the nation began to liberalize its society and its economy. In 1990, it moved into the top quartile of EFW rankings for the first time, with a reading of 6.89. While the nation’s economic and political path since has not always been smooth, Chile has stayed in the top quartile every year. What does such economic freedom mean on the ground? 

According to the current CIA World Factbook, since the 1980s Chile’s poverty rate has fallen by more than half.

Zimbabwe 

Zimbabwe is another story. It began 1970 in a slightly better position than Chile, with an EFW reading of 4.96. It was known as Rhodesia then, a new republic trying to transition from British rule. The decade of the 1970s was one of political instability as a government led by Prime Minister Ian Smith contended with both Marxist and Maoist communist groups for the country’s future. The Maoist Zimbabwe African National Union (ZANU) prevailed, changing the nation’s name to Zimbabwe in 1980. ZANU has been in control of Zimbabwe ever since, with Robert Mugabe serving as prime minister or president from 1980-2017.

While ZANU has not remained strictly loyal to the Maoist model of communism, and has attempted some pro-business policies, government intrusion in the economy remains high. Property rights are not well enforced. Corruption is systemic and regulations stifle both new business formation and foreign investment. Consequently, since 2000 Zimbabwe has remained in the bottom quartile of EFW Index scores, with a 2023 reading of 3.91, a 21 percent decline from 1970. 
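The 21 percent figure follows directly from the two EFW readings cited here; a quick check:

```python
# Percent change in Zimbabwe's EFW score, 1970 (4.96) to 2023 (3.91).
score_1970 = 4.96
score_2023 = 3.91
pct_change = (score_2023 - score_1970) / score_1970 * 100
print(f"{pct_change:.1f}%")  # about -21 percent
```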

These numbers have tragic implications, especially for the least privileged. In 2023, Zimbabwe’s poverty rate was over 70 percent and an estimated half the population lived on less than $1.90 per day.

Apart from humanitarian concern, should we worry about these things in the US? Economic freedom here is too deeply rooted ever to be uprooted, right?

If the EFW Index teaches us anything, it’s that economic freedom, like freedom in general, is inherently fragile. No one understands that better than Lawson. 

Today he directs the Bridwell Institute for Economic Freedom at Southern Methodist University and continues to manage the EFW Index as a senior fellow of the Fraser Institute in Canada, which sponsors the index. In 2024, he wrote a remembrance of James Gwartney in The Daily Economy.

After decades of involvement with the EFW Index, Lawson remains optimistic about the prospects of global economic freedom, but guardedly so.

“The general trend is still toward freedom,” he says, “but since 2000 it’s less steep.”

If history is any guide, increasing the slope would have an amazing impact on human flourishing worldwide. If national leaders worried about their EFW Index scores the way college football teams worry about their playoff rankings, we might see more stories like Chile’s, including in places like Zimbabwe.

Social Security is drifting toward a cliff, and Congress keeps pretending the shortfall will fix itself. It won’t.

Absent reform, benefits will be cut across the board by roughly 23 percent within six years. That outcome would harm retirees who depend on Social Security the most — while barely affecting the living standards of those who do not need financial support in old age. 

There is a better option: reduce distributions to the wealthiest retirees, preserving them for those most dependent on benefits. 

This should not be a radical idea. Government income transfers should be targeted to those who need financial support — not used to subsidize consumption among well-off seniors at the expense of younger working Americans. This approach is grounded in what Social Security was meant to do in the first place: “give some measure of protection to the average citizen and to his family against…poverty-ridden old age,” in the words of Franklin D. Roosevelt. 

A report by the Congressional Budget Office, titled “Trends in the Distribution of Family Wealth, 1989 to 2022,” elucidates the role that Social Security plays in total household wealth. By counting not just financial assets and home equity, but also the present value of future Social Security benefits, it becomes clear that Social Security represents a substantial share of total resources for lower-wealth families and only a marginal share for wealthy households.  

For families in the bottom quarter of the wealth distribution, accrued Social Security benefits account for about half of everything they own. For families in the top 10 percent, by contrast, Social Security represents only about eight percent of total assets, dwarfed by their holdings in financial assets, real estate, and business equity (see Figure 1). Yet under current law, wealthy retirees who claim at age 70 can still receive annual Social Security benefits exceeding $62,000 — roughly four times the poverty threshold for seniors.

This is an upside-down safety net. When automatic benefit cuts kick in in 2032, the retirees who rely most on Social Security will be hurt the most, while wealthy households will scarcely notice the change.  

According to the CBO, that uniform 23-percent cut would reduce the total wealth of families in the bottom half of the distribution by more than 10 percent. For the top one percent, the hit would be barely noticeable: about two-tenths of one percent (see Figure 2). 

This outcome is not inevitable; Congress can target benefit reductions where they are most easily absorbed.  

Opponents of top-end benefit reductions argue that Social Security is an earned benefit, not welfare, and that cutting benefits for high earners violates that principle. They are right about one thing: workers pay payroll taxes with the expectation of receiving benefits. But that expectation was never a guarantee of open-ended, inflation-beating returns — especially for retirees who already enjoy substantial private wealth. 

Social Security, if it is to exist at all, should focus on preventing old-age poverty, not provide wealthy retirees with an ever-growing worker-funded annuity layered on top of substantial private savings. When benefits grow faster than inflation and flow disproportionately to those who don’t need them, the program drifts away from its stated purpose and becomes increasingly difficult to justify. 

The solution is not higher payroll taxes. Eliminating the payroll tax cap would push marginal tax rates above 60 percent in some states, reducing work and innovation, while still failing to target benefits where they matter most. Increasing payroll taxes for all workers would deprive younger working families of resources with which to grow their fortunes and build their own futures. 

Nor is the solution more borrowing. Social Security is already projected to add trillions to federal deficits over the next decade. Borrowing to preserve full benefits for wealthy retirees is fiscally reckless and economically unnecessary.

The sensible path forward is targeted benefit restraint. 

That means:

  • Slowing the growth of initial benefits for higher earners by adjusting the benefit formula and indexing those initial benefits to prices rather than wages.
  • Using a more accurate measure of inflation for cost-of-living adjustments for ongoing benefits, and phasing out adjustments entirely for high-income retirees.
  • Adjusting retirement ages to reflect longer life expectancy, with protections for workers who truly cannot work longer — which is the aim of the disability component of Social Security. 
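The compounding logic behind the first bullet — indexing initial benefits to prices rather than wages — can be made concrete with a minimal sketch. The growth rates and starting benefit below are hypothetical round numbers for illustration, not SSA or CBO projections:

```python
# Illustrative only: how price indexing of initial benefits slows growth
# relative to wage indexing. All figures are assumed, not official data.

WAGE_GROWTH = 0.035      # assumed nominal wage growth per year
INFLATION = 0.025        # assumed price inflation per year
YEARS = 30
INITIAL_BENEFIT = 30_000  # hypothetical annual benefit today, in dollars

# Initial benefits compound with the chosen index over time.
wage_indexed = INITIAL_BENEFIT * (1 + WAGE_GROWTH) ** YEARS
price_indexed = INITIAL_BENEFIT * (1 + INFLATION) ** YEARS

print(f"Wage-indexed initial benefit after {YEARS} years:  ${wage_indexed:,.0f}")
print(f"Price-indexed initial benefit after {YEARS} years: ${price_indexed:,.0f}")
print(f"Difference per beneficiary: ${wage_indexed - price_indexed:,.0f}")
```

Under these assumed rates, a one-percentage-point gap between wage and price growth compounds into a difference of more than 25 percent in the initial benefit after three decades, which is why the indexing choice does so much fiscal work.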

In practice, these changes amount to a gradual shift away from an earnings-related benefit and toward a flat, anti-poverty payment. If Social Security is going to persist, its role should be limited to what market earnings and private savings cannot reliably provide. Every step that trims excessive benefits at the top moves the program closer to that defensible boundary. 

Congress should act to prevent across-the-board benefit cuts without further indebting younger generations or extracting more resources from working Americans. Instead, lawmakers should focus reforms where they do the least harm and the most good — by trimming earned benefits at the top to secure endangered benefits for those at the bottom.

It may not be “fair,” but it’s the only plausible path forward. The goal of reform should not be to preserve Social Security in its current form, but to prevent the worst outcomes. Preserving benefits for those who depend on the program, while slowing benefit growth for those who do not, is the only way to reduce Social Security’s role as a reverse transfer from younger workers to wealthy retirees who do not need the support.

The nomination of Kevin Warsh to replace Jerome Powell as Federal Reserve Chair has many people wondering: What makes a good Fed chair? The answer, it turns out, depends on the environment in which the chair will operate. 

The characteristics that matter most for running an independent central bank differ from those for a central bank under pressure from political actors. Understanding this distinction is important for evaluating the president’s nominee.

The Case for Technical Competence

In an environment of genuine central bank independence, technical competence matters most. A qualified chair is a reputable monetary economist with strong academic credentials, someone who commands respect in financial markets and the economics profession.

Independent central banks with technically competent leadership achieve measurably better outcomes. They deliver lower inflation rates and more stable inflation expectations. When markets believe the Fed will respond appropriately to economic data, inflation expectations remain anchored even during temporary price shocks. This anchoring effect makes the Fed’s job easier and prevents above-target inflation from becoming entrenched in wage negotiations and pricing decisions.

When a chair’s analysis carries weight in the economics profession, the Fed’s policy explanations are more persuasive to market participants. This credibility is a form of capital that takes years to accumulate and can be spent during crises, when the Fed needs public trust most.

If the central bank is left to conduct monetary policy as it sees fit, a technically competent Fed chair is crucial.

The Case for Character

If the Fed lacks independence, strength of character and a willingness to resist political pressure become the more important traits. Technical competence is of little use when the central bank cannot do what its members think it should. Indeed, in such a case it might be worth trading some technical competence for a strong spine.

History provides clear examples. Fed Chair Arthur Burns was technically competent. Prior to becoming Fed chair, he was a well-respected economics professor at Columbia University, having earlier taught Milton Friedman at Rutgers. He did pioneering work on business cycles with Wesley Clair Mitchell, work that the National Bureau of Economic Research has carried on. Few economists at the time possessed Burns’s technical expertise. But all that expertise was of little consequence: Burns gave in to President Nixon’s pressure campaign before the 1972 election, lowering interest rates when economic conditions didn’t warrant it. His decision contributed to the high inflation of the 1970s and damaged the Fed’s credibility for years.

Paul Volcker was a sharp economist, to be sure. But he was not as technically competent as Burns. He did not hold a prestigious professorship. He had not done pioneering work in macroeconomics or monetary economics. But he had a strong spine. When President Reagan urged Volcker to commit to not raising interest rates ahead of the 1984 election, Volcker refused. In doing so, he preserved the principle that the Fed chair doesn’t make policy commitments to the White House. His unwillingness to compromise on institutional boundaries helped restore price stability and solidified the Fed’s reputation for independence.

When political pressure threatens independence, the chair’s character matters more.

The Independence Dilemma

When independence is in doubt, credibility and good policy choices may be at odds with each other. Consider a scenario where White House pressure happens to align with the economically correct policy decision. Perhaps the administration wants rate cuts, and economic data genuinely support easing. The Fed then faces a difficult choice.

If the Fed cuts rates, the public may view the decision as capitulation to political demands. If the Fed refuses to cut rates to signal its independence, it makes the wrong economic decision to preserve the appearance of autonomy. Either way, the Fed’s reputation suffers. The public will come to believe the Fed responds to political factors rather than economic data, regardless of which choice the institution makes.

The first-best solution in such a situation is clear: restore independence. An independent Fed can focus on conducting policy well, without risk to its credibility due to perceived political capitulation. But first-best solutions are not always possible.

Recent events provide much support for the view that we need a second-best solution. The president has consistently called for lower interest rates. He has attempted to fire Fed Governor Lisa Cook. He nominated his CEA Chair, Stephen Miran, who is widely believed to be a Trump loyalist, to fill the balance of Adriana Kugler’s term. And, in January, his Department of Justice subpoenaed Chair Powell. The Fed, in other words, is under pressure. 

How Warsh Stacks Up 

With all of this in mind, how should one evaluate Trump’s pick to replace Powell?

Although Kevin Warsh is not a traditional academic economist, he nonetheless possesses a high degree of technical competence. He previously served on the Fed Board from 2006 to 2011. He is currently the Shepard Family Distinguished Visiting Fellow in Economics at Stanford University’s Hoover Institution and lectures at Stanford’s Graduate School of Business. Before joining the Fed, Warsh also served as Vice President and Executive Director of Morgan Stanley & Co. in New York.

Warsh also has a strong spine. While initially on board with the Fed’s large-scale asset purchases as an emergency liquidity measure, he later came to oppose using the balance sheet as a permanent tool. Fed liquidity, he warned, is a “poor substitute” for functioning private markets. This view was decidedly out of fashion at the Fed. And yet, Warsh stuck to his guns. Today’s Fed, under political pressure as it is, would be well served by his strong character—provided that it is used to bolster the Fed’s independence.

How will Warsh use his strong spine? That’s an open question. If he pursues the facts as he sees them, he might deliver a much-needed dose of credibility to a struggling institution. If he does the president’s bidding—or is perceived to be doing the president’s bidding—he will further erode the Fed’s credibility.

A year or so ago, I met my friend’s mother for the first time at a wedding. She told me that she was Mississippi born and raised, but that after her kids were born she and her husband decided to move to North Carolina. Turns out the whole extended family was from Mississippi, still lives there, still loves it there.

“Why did you leave?” I asked.

“Because we had little kids, and the schools were terrible.”

Her answer didn’t surprise me – I’d heard about Mississippi’s bad schools before. But while its schools were terrible enough to induce an out-of-state move when her kids (now in their mid-twenties) were young, that’s no longer the case.

Mississippi has become an educational role model, a shining example of what’s possible inside public schools. It’s a turnaround story no one expected.

Mississippi is, on average, a state that people leave. It has the fourth-lowest in-migration rate in the country (only Louisiana, Michigan, and Ohio have fewer transplants from other states), while 36 percent of its young people move out-of-state. On net, its population is shrinking. Between 2020 and 2024, 16,000 more Mississippi residents died than were born.

Mississippi is a state known for its poverty, its unreliable infrastructure, and its substandard health care system – as well as its poor overall public health. It leads the nation in pregnancy-related deaths and infant mortality. Its capital city, Jackson, has contamination issues with its water supply (with an annual average of 55 breaks per 100 miles of water line, nearly four times the national safety limit of 15). Mississippi consistently comes in as the poorest state in the country, with one in four Mississippi children living below the poverty line.

It’s not a state most Americans look to as a role model.

But over the past fifteen years, this unassuming Deep South state has been quietly pulling off one of the most impressive feats in American public education: while literacy rates around the nation have been falling, Mississippi’s have been steadily rising.

Historically, Mississippi’s school system performed about as well as its health care system and its economy: that is, near the bottom of the national rankings. For years, Mississippi ranked 50th out of 50 in the country for K-12 education. But all that changed in 2013, when Mississippi passed the Literacy-Based Promotion Act and embraced the science of reading, overhauling its K-3 literacy curriculum and its teacher training.

Since 2013, Mississippi’s overall K-12 achievement scores have improved significantly. In 2013, Mississippi ranked 49th out of 50 states on the NAEP (the Nation’s Report Card) for fourth-grade reading. In 2021, it had climbed to 21st – and in 2024, it rose all the way to ninth in the nation.

All of this was achieved while Mississippi faced a slew of challenges: teacher shortages, low teacher pay, and under-resourced special education programs, to name a few – the things critics so often point to as the culprits for poor educational outcomes. And all of this was achieved too in a state where 26-28 percent of its students are living below the poverty line – the children who are historically the most underserved (and therefore the lowest performing) students in the country.

All these challenges make Mississippi’s achievements more impressive, and the conclusion harder to refute: reading science works. A measured, methodical, science-driven approach to teaching literacy results in – you guessed it – unprecedented levels of literacy.

That should not be a headline. And yet it is, printed and reprinted all over the country, colloquially referred to as “the Mississippi Miracle” – because the comeback story is so impressive, so unprecedented, so unexpected.

And yet, the strange thing isn’t that one of the poorest and most under-resourced states in the country implemented this – the strange thing is that it’s so rare as to be noteworthy.

Mississippi’s turnaround story is, as most things in life, a story of cause and effect – and in this case, the causes are quite few: a scientific approach to reading, a teacher education program consistent with that scientific approach, early identification and intensive intervention for students who are struggling, and a commitment to honoring the integrity of grade level standards (if a child isn’t reading at a third grade level, they don’t get advanced to third grade).

The “scientific approach to reading” in question is – no surprise – teaching via phonics, the time-tested approach to literacy that has worked for centuries, but which modern public schools seem strangely allergic to.

The simplest headline summary of the Mississippi Miracle is that Mississippi started teaching its kids to read using phonics – and stopped advancing kids who hadn’t learned the material. Their literacy scores turned around seemingly overnight. But of course, the story is more complicated than that.

Mississippi’s comeback started all the way back in 2000, in the private sector, when corporate executive and philanthropist Jim Barksdale donated $100 million to launch the Barksdale Reading Institute, a nonprofit intended to turn around Mississippi’s poor literacy rates. Barksdale, whose résumé included serving as the COO of FedEx, the CEO of AT&T Wireless, and the CEO of Netscape, was deeply committed to his home state of Mississippi and deeply concerned about the literacy rates in its schools.

He saw the literacy crisis for what it is: the deficit of a fundamental life skill, with lasting implications for the entire life trajectory of children robbed of the chance to learn to read.

As sociology professor Beth Hess wrote to The New York Times after Barksdale’s donation was announced (after praising Barksdale himself): “It is disturbing that the state of Mississippi will be rewarded for its continuing failure to tax its citizens fairly and to allocate enough money to educate students, especially in predominantly black districts. This should have been a public rather than private responsibility.”

Yet as is so often the case, it was private sector efforts that led to change, unfettered by bureaucracy and untethered from the slow-moving weight of the public sector machine.

The Barksdale Reading Institute tackled the reading crisis at every level: teaching reading instruction inside Mississippi’s teachers’ colleges, engaging with parents and early childhood programs (like Head Start), and educating teachers on teaching phonics.

In 2013, Mississippi’s public sector followed suit, implementing two critical steps: passing a law that required all third graders to pass a “reading gate” assessment to advance to fourth grade, and appointing Carey Wright as Mississippi’s superintendent of education, who in the words of journalist Holly Korbey, “reorganized the entire education department to focus on literacy and more rigorous standards.”

Under the stewardship of Wright, Mississippi trained over 19,000 of its teachers in teaching phonics using the science-backed instructional program LETRS. In the early days of the literacy push, the state focused more on teacher training than on curriculum, but in 2016 it expanded its efforts to promote the use of curricula it felt best supported literacy training.

Compared with a full curriculum overhaul, the third-grade reading gate might sound like a small change, but it’s a critically important piece of the puzzle. Across the country, grade advancement is largely treated as a product of age, not of academic ability. Students with failing grades can be held back (and often are), but a passing grade is a low bar: a “D,” often considered passing, usually reflects proficiency of about 60 percent, so a child can miss 40 percent of the third-grade material and still advance to fourth grade.

The third-grade reading assessment ensures that children aren’t advancing to harder material with large gaps in their knowledge, that they’re set up with the skills they need to succeed, rather than being thrown in the deep end to fail. It’s also an important milestone: third-grade reading proficiency is a leading indicator of long-term academic success, with poor third-grade readers far more likely to drop out of high school. And as evidenced by Mississippi’s rising math scores (even though most of its energy is being directed toward literacy), the ability to read correlates with better performance across all subjects.

All this effort, unsurprisingly, led to swift and measurable results. Not only did Mississippi rank ninth in the nation in fourth-grade reading in 2024, but it scores even higher when weighted for demographic factors like poverty.

None of this should be scientifically surprising (because obviously teaching kids to read using the scientifically backed approach was going to work). But it’s politically shocking because, despite ample research, schools across the country resist teaching students to read using phonics, and their literacy rates flounder as a result.

Other states across the South (dubbed the “Southern Surge” states by Karen Vaites) have followed Mississippi’s lead. Louisiana implemented a similar reading program in tandem with Mississippi, beginning in 2012 and seeing similar results. Tennessee implemented approaches borrowed from Mississippi and Louisiana in the 2018-19 school year, and Alabama followed suit in the 2019 legislative session. Each state is seeing success with its amended approach to reading education.

None of these states have ample funding; each is in the bottom half nationally for per-pupil spending. All of these states have large numbers of students below the poverty line. Some have teacher and resource shortages. And yet, by implementing a pure phonics approach to reading instruction, they’re blowing past states that have more funding and more resources but are using a less rigorous approach.

In the words of writer Kelsey Piper, “illiteracy is a policy choice.” We know that teaching reading via phonics works. We know how to do it. And, thanks to Mississippi, we know it can be effective even with a limited budget and limited staff. 

Thanks to Jim Barksdale, we know that private sector pushes toward better policy can be effective. And thanks to states using non-phonics literacy approaches (and whose test scores are falling while Mississippi’s are rising), we know what not to do, too. 

The challenge now is to stop doing what doesn’t work, and start moving toward what does – not just in Mississippi and the Southern Surge states, but all across the country.

For most of us, especially those of us who think about it a lot, the Roman Empire conjures up famous names of such men as Caesar, Augustus, Nero, Marcus Aurelius, and a few others of the imperial elite. We might also think of grand structures like the Colosseum, the Appian Way, and the Pantheon, or massive spectacles from gladiator duels to races at the Circus Maximus. Dozens of books explore the Empire’s wars against Dacia in southeastern Europe, the Iceni in Britannia, Germania in northern Europe, and the Jews in Palestine.  

The point is, we tend to think of the extraordinary, not the ordinary or, to put it another way, the macro, instead of the micro. Why? As Kim Bowes, a professor of classical archaeology at the University of Pennsylvania, explains in Surviving Rome: The Economic Lives of the Ninety Percent, until recently the ordinary lives of ordinary Romans eluded us for lack of evidence. Only in the last three or four decades, thanks to an explosion of archaeological digs often triggered by construction projects across Europe, have we been showered with new knowledge about the lives of what Bowes labels “the 90 percent.”  

“It’s a delicious irony,” she writes, “that more information about the rural Roman 90 percent has emerged from the construction of Euro Disney [about 15 miles east of Paris] than from the well-intentioned excavations designed to find them.”

Perhaps we assumed that “everyday working people” in ancient Rome didn’t write much about themselves. Certainly, the well-known chroniclers of the day — Sallust, Livy, and Tacitus — didn’t focus on them; they mostly wrote instead about the big names who wielded political power. But thanks to discoveries of the past four decades — including graffiti, writings on broken pottery and wooden tablets, coins, documents, and farm implements, along with scientific analysis of soil samples and ancient ruins — we’ve learned more about the lives of ordinary people in the Empire than historians ever knew before. 

This “shower of information about Roman farms and fields, crops and herds, and the geology and soil science,” Bowes argues, is transforming our understanding of life at the time. Her book is the first notable effort to tell the world what these recent findings reveal. 

Let’s remember that the history of ancient Rome did not begin with the Empire. For 500 years before its first emperor, it was a remarkable res publica (a republic) known for the rule of law, substantial liberty, and the dispersion of power. When that crumbled into imperial autocracy late in the first century BCE, what we know as “the Empire” took root and lasted another 500 years. Weakened internally by its own welfare-warfare state, the Western Roman Empire centered in Rome fell to barbarians in 476 CE. Bowes’ attention is drawn exclusively to that second half-millennium.  

The Empire evolved into a very different place from the old Republic. By the dawn of the second century CE, it would have been unrecognizable to Roman citizens of the second century BCE. Loyalty to the state and one-man rule had largely superseded the old republican virtues. Many emperors were ghastly megalomaniacs to whom earlier Romans would never have groveled.

Despite the general decline in morals and governance that characterized much of the imperial era, ordinary Romans fared better than you might surmise, at least until the decline overwhelmed them in the late fifth century. Bowes attributes this to “cagey managers of small resources” who lived in “precarity” but “doubled down on opportunities.” The picture she paints with recent evidence is one of hard-working, resourceful farmers, tradesmen, and shopkeepers making the very best of a tough situation and, for the most part, doing remarkably well at it. 

Ongoing excavations at Pompeii, destroyed by a volcanic eruption in 79 CE, have yielded fascinating details of commerce and coinage in the city: 

Bar and shop owners did more of their business in bronze and less with the prestige metals. Resellers of bulk oil and wine, and above all artisans producing for larger-scale markets, used gold and particularly silver. 

Bowes reveals that much of the Roman world experienced a “consumer revolution” as the Empire stretched from the border with Scotland to the Levant and across North Africa. Roman roads and trade facilitated it, even as government in Rome grew more tyrannical. Consider this finding:

New data from archeology, and newly reconsidered texts like the Pompeii graffito, find working people, even some of the poorest, consuming far in excess of our previous expectations. From small-time traders to enslaved servants, farmers to craftsmen, Romans ate more and different foods, purchased rather than made many of the items they used…Their levels of household consumption were thus historically quite high…

The immense quantity of data gleaned from the recent discoveries shows up in numerous tables, charts, graphs, and illustrations in Bowes’ book. From those entries, we learn of the accounts of a Roman beer-buyer; the percentage of farms with lamps, candlesticks, and window glass; which cereal crops were grown on small farms; the real incomes of artisans and shopkeepers; the prevalence of metabolic disease among children; and so much more.  

For comedians who often poke fun today at British teeth, there’s this tidbit: Data suggest that the Roman conquest of Britain brought dramatic declines in dental health and that “British urbanites had the same or perhaps even worse dental health as the mostly urban Italian sample.” 

Nonetheless, new evidence suggests that “the majority of Romans were consuming a relatively robust caloric package.” Bowes tells us, 

This meant a lot more energy to do work, and thus a lot more work could be done. The Coliseum was not built on 1,900 calories per day…The Coliseum was not built by workers scraping their porridge out of a single pot. 

So, we now know that ordinary Romans during the Empire likely lived better than historians previously believed. They exhibited “relentless persistence and shrewdness,” “perseverance and ingenuity,” a degree of “grit and hustle” we can appreciate more than ever. For several hundred years, their accomplishment “was their ability to wrest a living from a hard and complex world.”  

We also know that it didn’t last. As the Empire disintegrated in the fifth century, it became ever more difficult for many, and impossible for a great number, to eke out a living. The “Dark Ages” that commenced with the fall of Rome saw economic and cultural decline and a massive depopulation. Life spans shortened, mortality rose, and standards of living plummeted. At its height, the city of Rome itself was home to a million people; a few centuries later, it plunged to a nadir of barely 30,000.  

Though Bowes falls short of saying so herself, I think the moral of the story is this: A resourceful people can endure a great deal before they throw in the towel, but a thriving civilization depends on what the Roman Empire ultimately forfeited: peace, freedom, property rights, and the rule of law. 

One of the most robust findings in economics is that, with few exceptions, people respond to incentives, rather than intentions or moral principles. Individuals operate under constraints of time, information, and risk, and as such, they will predictably and understandably adjust their behavior to whatever metrics ensure success. To do otherwise is irrational. When performance is evaluated and rewarded using metrics like quotas, behavior shifts toward satisfying those quotas to secure the benefits thereof. This happens in firms, schools, hospitals, police departments, and regulatory agencies, even when everyone understands, at least in the abstract, that the metric is distinct from the goals to be achieved.

Immigration enforcement provides a vivid case study of this general institutional failure mode. Under recent policy changes, US Immigration and Customs Enforcement has operated under explicit arrest targets in the form of daily and annual numerical goals meant to demonstrate enforcement intensity and resolve. The political rationale for these targets is straightforward: to signal to voters and political supporters that the current administration is serious about protecting the border and clamping down on illegal immigration.

But economics teaches that what gets measured gets optimized, and gamed, for reasons that mostly come down to incentives. In the case of immigration enforcement, when success is defined in numerical terms, agents will pursue the cheapest path to those numbers rather than pursuing individuals and groups who are harder to find and detain. That is rational given the incentives the administration has created, which reward meeting aggressive arrest quotas. Whenever institutions or individuals face quotas, they are likely to focus on the low-hanging fruit: time spent producing an easy unit of output is time not spent pursuing a hard one. Effort devoted to high-risk targets, like violent criminals and well-entrenched gangs, threatens performance metrics in ways that low-risk targets do not. When failure to meet quotas carries professional consequences, agents will avoid activities that jeopardize the count, even if those activities are more closely aligned with the stated mission.

The logic is straightforward. Violent criminals, gang leaders, and professional smugglers are difficult to locate and expensive to apprehend, often relying on networks of other people to help them evade detection. Pursuing such criminal organizations requires investigations, coordination across jurisdictions, surveillance, and uncertain outcomes, making it easy for agents to come up empty-handed. By contrast, unauthorized immigrants who are otherwise law-abiding are comparatively easy to find. They have fixed residences, work regular jobs, and their children often attend the local school. Many are already interacting with the state through legal channels, including standard immigration check-ins.
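The logic can be sketched as a toy optimization. In this purely illustrative model (every number is hypothetical, not drawn from enforcement data), an agent with a fixed weekly time budget divides effort between "easy" embedded targets and "hard" criminal targets. Maximizing the arrest count pushes all effort toward the easy type, and only that allocation meets the quota:

```python
# Toy model (illustrative only): an agent with a fixed time budget
# chooses how to split effort between easy and hard targets.
HOURS = 40       # weekly enforcement hours (hypothetical)
COST_EASY = 4    # hours per arrest of an embedded, easy-to-find worker
COST_HARD = 40   # hours per arrest of a hard-to-locate violent offender
QUOTA = 8        # weekly arrest quota (hypothetical)

def arrests(hours_on_easy):
    """Total arrests produced by a given split of the time budget."""
    hours_on_hard = HOURS - hours_on_easy
    return hours_on_easy // COST_EASY + hours_on_hard // COST_HARD

# Search every integer split for the one that maximizes the count.
best_split = max(range(HOURS + 1), key=arrests)
print(best_split, arrests(best_split))   # all 40 hours go to easy targets
print(arrests(best_split) >= QUOTA)      # only this extreme split meets the quota
```

The point of the sketch is not the specific numbers but the structure: so long as hard targets cost more hours per counted arrest, an agent judged solely on the count rationally allocates nothing to them.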

When arrest quotas rise, then, it is no surprise that arrests accelerate disproportionately among those who are easiest to find and arrest rather than those who pose the greatest threat. Recent data confirm this pattern: enforcement activity has surged, but the majority of arrests involve individuals without prior criminal convictions, a distribution consistent with quota-driven optimization rather than threat-based prioritization. Given the career and political incentives behind meeting those quotas, this is exactly what we should expect; it would be surprising if agents behaved otherwise.

There is a deeper problem here, though, that Hayek can help us diagnose. Quotas assume that central authorities know in advance how enforcement effort should be allocated across a vast and heterogeneous landscape. They assume that arrests are sufficiently homogeneous that merely counting them captures what matters. They assume that the marginal value of the next arrest is roughly constant across contexts. And they often make these assumptions without the local knowledge needed to justify them.

Here the analogy to central planning becomes illuminating. Central planners, like those in Cuba or the former Soviet Union, fail because they lack access to the dispersed, tacit, and constantly changing knowledge required to allocate resources efficiently. As Hayek argued, markets work not because anyone knows the right answer in advance, but because competition allows agents to discover it through decentralized experimentation and feedback, generating information that would otherwise be unavailable. Enforcement environments share this complexity because, among other reasons, threats vary by region, network, industry, and time. A centralized quota cannot incorporate this information, partly because it treats arrests as interchangeable units, much as central plans treat tons of steel or bushels of grain as interchangeable.

This helps explain why quota-driven enforcement is insensitive to conditions on the ground. It cannot adapt to local threat profiles because it does not reward adaptation. It cannot prioritize effectively because prioritization is costly and quotas reward speed. And it cannot learn from failure because, in most cases, it lacks the local knowledge needed to adjust. Politicians can, of course, pivot when citizens and voters push back, but that feedback process is necessarily cruder and slower than the one provided by markets and prices.

Worse still, enforcement that deliberately and disproportionately targets working, embedded individuals produces sudden and uneven labor supply shocks. Industries that rely heavily on immigrant labor, like construction and agriculture, experience disruptions that cascade through prices, output, and complementary employment. These are downstream consequences of enforcement choices shaped by quotas. When enforcement prioritizes ease of arrest over social cost, it predictably targets workers rather than criminals, disrupting productive relationships that markets had already coordinated. The result resembles what happens when planners disrupt supply chains without understanding their internal complementarities.

A common defense of quotas appeals to accountability: without numerical targets, agencies may underperform, selectively enforce, or drift away from their mandates. But the existence of a real problem, namely accountability, is hardly a defense of a flawed solution, quotas that measure a single dimension while lacking the necessary local knowledge.

The central lesson concerns institutional design and the incentive structures under which immigration agents operate. When complex, knowledge-intensive activities are governed by centralized numerical targets, agents will rationally pursue those targets in ways that undermine the broader purpose of the institution. Perverse incentives and poor institutional design are not the only explanatory factors here (personal choice and moral character matter, too), but they are a big part of the explanatory pie.