The new year brought new developments in the world of financial services, specifically involving the role of artificial intelligence (AI). In January, JPMorgan Chase announced it would replace its proxy advisory services with AI. Chief Executive Jamie Dimon even went so far as to say that proxy advisors are “incompetent” and “should be gone and dead, done with.”

For those who have been following issues related to environmental, social, and governance (ESG) and diversity, equity, and inclusion (DEI), this is a major event. The two major proxy advisory firms, Institutional Shareholder Services (ISS) and Glass, Lewis, & Co. (Glass Lewis), have been criticized for using their recommendations on shareholder voting to push politically motivated ESG/DEI crusades (sometimes unbeknownst to the shareholders they represent). These practices have made the industry the target of a recent executive order aiming to increase federal oversight of proxy advisors.

Ultimately, though, the proxy advisory industry was born out of regulation. Further government intervention could invite greater cronyism. If the proxy advisory industry wants to win customers back, it needs to focus on fiduciary obligations, not politics. If federal officials want greater transparency and accountability in the proxy advisory market, they should focus on rolling back unnecessary regulations and simplifying any regulations that remain to encourage a competitive proxy market. 

How Did We Get Here? 

A proxy vote is one in which a shareholder of a publicly traded company authorizes another party to vote their shares at a corporate meeting. Proxy voting covers electing company directors, approving executive compensation, voting on mergers, and considering shareholder proposals. It allows shareholders to participate even if they cannot attend the meeting in person or submit a ballot electronically.

Research on proxy advisory firms notes that institutional investors – those who manage large numbers of shares on behalf of many clients – began paying attention to shareholder voting matters after a “wave of hostile takeover actions” during the 1980s. Around the same time, private retirement funds were legally required to vote their shares based on a “prudent man” standard of care. By the early 2000s, this legal requirement was expanded to include mutual funds and other registered investment companies. 

The proxy advisory industry as we know it today emerged from two main sources. Small and midsize funds sought guidance on shareholder voting practices to meet their legal obligations. Then, in 2003, the SEC introduced a regulation requiring all institutional investors—including mutual funds and index funds—to develop and disclose both their proxy voting policies and their actual votes. These policies and guidelines must be free from conflicts of interest, yet the regulation explicitly allows institutional investors to rely on third-party proxy advisors to meet this requirement. Notably, these third-party firms are not held to the same fiduciary standards as the institutional investors they advise.

Enter Glass Lewis and ISS.

Although there are technically five proxy advisory firms, the two largest (ISS and Glass Lewis) have a roughly 97 percent share of the market for proxy advisory services. These services have a major influence over corporate governance decisions, company-wide equity compensation, and a host of other issues. 

Having such a large market share made them an enticing target for political activists. Before long, activists manipulated proxy guidelines to recommend voting for political crusades such as ESG and DEI. As one of the authors wrote in a recent white paper, these ideas are often incoherent, contradictory, and even run counter to successful business performance and high financial returns. Unbeknownst to many shareholders, who put their voting on autopilot based on proxy recommendations (known as robovoting), their votes pushed political objectives to the detriment of their own financial security. 

Can Proxy Advisory Firms Win Back Trust? 

As Dimon’s comments suggest, the two big proxy advisory firms have a PR and a business problem. Institutional investors are looking for exits or have already taken them. New advisory firms are forming. And bigger clients like JPMorgan believe they can harness AI to bring their proxy work in-house.

If ISS and Glass Lewis want to win back investor and shareholder trust, the best thing they can do is dump the political crusades. These services came about because there was demand for voting guidelines that complied with an overbearing SEC. Proxy advisory firms would do well to demonstrate that they follow a prudent man standard of care and the sole interest rule: making recommendations based solely on the financial well-being of their clients.

By voluntarily committing to these standards and delivering recommendations that benefit clients, they can refute claims of incompetence and prove they may be less biased than an AI program.

Markets Ensure Accountability & Transparency

Now, the White House wants to intervene again in response to the problems created by regulations and interventions. We’ve seen this pattern before: politicians see a problem, so they intervene. Then the intervention leads to new, unforeseen problems, prompting a renewed urge for government intervention. Unfortunately, this approach to “fixing” problems leaves people worse off, creates unintended consequences, and gives greater power to government officials.

If policymakers are concerned about proxy advisors and political crusades, they should focus on deregulation. Instead of adding an additional layer of regulatory complexity, federal policymakers would improve accountability for proxy advisory services by promoting market competition and removing unnecessary government regulations.

Currently, proxy advisory services can advertise their business as a means of helping funds comply with onerous regulations rather than as a means of increasing the value of their shares. If the SEC rescinds the requirements to publish voting guidelines and shareholder votes, proxy advisory services will have to entice clients by showing the value they add to a potential client’s business. If they fail to do so, potential clients will happily pass them over for other service providers, bring shareholder voting guidelines in-house (as many public pension systems have done), or rely on emerging technology.

There is no doubt that the proxy advisory industry, once firmly planted in American finance, is now facing regulatory threats and existential crises from AI. If these businesses hope to survive, they would do well to focus on serving customers instead of political ideologies.

In 1988, when Robert Lawson was a first-year economics graduate student at Florida State University, he was surprised one day to look up and see Dr. James D. Gwartney standing in front of him. He had come down from a different floor of the Bellamy Building to find Lawson. That was unusual, because grad students were normally summoned by tenured professors, not sought out by them.

But in this case, Gwartney had an assignment that was considerably more interesting than grading papers or returning a library book. He had received a letter inviting him into a group attempting to construct an index to measure economic freedom. Gwartney’s first reaction was that it was a “harebrained” idea. How could you quantify such a thing? Then he checked the letter’s sender: Milton Friedman. 

Gwartney decided this might be a rabbit hole worth going down. He offered Lawson the chance to go with him.

The Economic Freedom of the World Index

In 1996, the Economic Freedom of the World (EFW) Index debuted. The model aggregated dozens of variables into a single figure for each nation, between 0 (the least economic freedom) and 10 (the most economic freedom). The report officially launching the index was co-authored by Gwartney, Lawson (who had finished his PhD in 1992), and Walter Block (then of Holy Cross). Friedman wrote the foreword.

Since that time, the EFW Index has offered researchers the only objective, mathematically transparent measure of economic freedom on a country-by-country basis (a competing index from The Heritage Foundation includes a subjective component). It incorporates variables from five areas (size of government, legal system and property rights, sound money, freedom to trade internationally, and regulation).
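To give a feel for what “mathematically transparent” means in practice, here is a minimal sketch of the general aggregation idea: component variables are scored on a 0-10 scale, averaged within each of the five areas, and the area scores are then averaged into a single country rating. The component names and values below are hypothetical, and the published index applies its own ratings and methodology, so treat this only as an illustration.

```python
# Toy illustration of how an economic-freedom index can roll component
# scores (each on a 0-10 scale) up into area scores and a single rating.
# The components and values are hypothetical, not actual EFW data.

components = {
    "size_of_government":    {"gov_consumption": 6.5, "transfers_subsidies": 7.0},
    "legal_system_property": {"judicial_independence": 8.0, "contract_enforcement": 7.5},
    "sound_money":           {"inflation": 9.0, "money_growth": 8.5},
    "freedom_to_trade":      {"tariffs": 7.0, "capital_controls": 6.0},
    "regulation":            {"credit_market": 8.0, "labor_market": 6.5},
}

def area_score(vars_: dict[str, float]) -> float:
    """Average the component scores within one area."""
    return sum(vars_.values()) / len(vars_)

area_scores = {area: area_score(vars_) for area, vars_ in components.items()}
summary_rating = sum(area_scores.values()) / len(area_scores)

for area, score in area_scores.items():
    print(f"{area:<25} {score:4.2f}")
print(f"{'summary rating':<25} {summary_rating:4.2f}")  # single 0-10 figure
```

The published index draws on many more components per area and its own documented rating rules, but its transparency comes from exactly this kind of simple, reproducible arithmetic.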

As of 2022, the index had been cited in over 1,300 peer-reviewed journal articles. An annual report now includes readings for 165 nations, with many going back to 1970. And the data are filled with stories.

Chile

In 1970, for instance, Chile’s EFW Index was in the bottom quartile globally at 4.69. This was the year socialist Salvador Allende won the presidency with only 36 percent of the popular vote (no candidate having won a majority, the legislature chose him). A slew of socialist reforms followed. Banks were nationalized, price controls were instituted, and money was printed like there was no tomorrow. Predictably, private investment plummeted and inflation spiked as the nation plunged into a recession.

A military coup overthrew Allende in 1973, with an alleged but uncertain level of help from the Nixon Administration and in particular Secretary of State Henry Kissinger. The new Chilean leader, Augusto Pinochet, was no socialist. But he did wield power like one—through brutal repression. And while his advisors included free-market economists such as Hernán Büchi, the regime’s policies were at best a burlesque of economic freedom.

Consequently, in 1975 Chile’s EFW Index reached an all-time low of 3.82. But after Pinochet was defeated in a 1988 plebiscite, the nation began to liberalize its society and its economy. In 1990, it moved into the top quartile of EFW rankings for the first time, with a reading of 6.89. While the nation’s economic and political path since has not always been smooth, Chile has stayed in the top quartile every year. What does such economic freedom mean on the ground? 

According to the current CIA World Factbook, since the 1980s Chile’s poverty rate has fallen by more than half.

Zimbabwe 

Zimbabwe is another story. It began 1970 in a slightly better position than Chile, with an EFW reading of 4.96. It was known as Rhodesia then, a new republic trying to transition from British rule. The decade of the 1970s was one of political instability as a government led by Prime Minister Ian Smith contended with both Marxist and Maoist communist groups for the country’s future. The Maoist Zimbabwe African National Union (ZANU) prevailed, changing the nation’s name to Zimbabwe in 1980. ZANU has been in control of Zimbabwe ever since, with Robert Mugabe serving as prime minister or president from 1980 to 2017.

While ZANU has not remained strictly loyal to the Maoist model of communism, and has attempted some pro-business policies, government intrusion in the economy remains high. Property rights are not well enforced. Corruption is systemic and regulations stifle both new business formation and foreign investment. Consequently, since 2000 Zimbabwe has remained in the bottom quartile of EFW Index scores, with a 2023 reading of 3.91, a 21 percent decline from 1970. 

These numbers have tragic implications, especially for the least privileged. In 2023, Zimbabwe’s poverty rate was over 70 percent and an estimated half the population lived on less than $1.90 per day.

Apart from humanitarian concern, should we worry about these things in the US? Economic freedom here is too deep to ever uproot, right?

If the EFW Index teaches us anything, it’s that economic freedom, like freedom in general, is inherently fragile. No one understands that better than Lawson. 

Today he directs the Bridwell Institute for Economic Freedom at Southern Methodist University and continues to manage the EFW Index as a senior fellow of the Fraser Institute in Canada, which sponsors the index. In 2024, he wrote a remembrance of James Gwartney in The Daily Economy.

After decades of involvement with the EFW Index, Lawson remains optimistic about the prospects of global economic freedom, but guardedly so.

“The general trend is still toward freedom,” he says, “but since 2000 it’s less steep.”

If history is any guide, increasing the slope would have an amazing impact on human flourishing worldwide. If national leaders worried about their EFW Index scores the way college football teams worry about their playoff rankings, we might see more stories like Chile, including in places like Zimbabwe.

Social Security is drifting toward a cliff, and Congress keeps pretending the shortfall will fix itself. It won’t.

Absent reform, benefits will be cut across the board by roughly 23 percent within six years. That outcome would harm retirees who depend on Social Security the most — while barely affecting the living standards of those who do not need financial support in old age. 

There is a better option: reduce distributions to the wealthiest retirees, preserving them for those most dependent on benefits. 

This should not be a radical idea. Government income transfers should be targeted to those who need financial support — not used to subsidize consumption among well-off seniors at the expense of younger working Americans. This approach is grounded in what Social Security was meant to do in the first place: “give some measure of protection to the average citizen and to his family against…poverty-ridden old age,” in the words of Franklin D. Roosevelt. 

A report by the Congressional Budget Office, titled “Trends in the Distribution of Family Wealth, 1989 to 2022,” elucidates the role that Social Security plays in total household wealth. By counting not just financial assets and home equity, but also the present value of future Social Security benefits, it becomes clear that Social Security represents a substantial share of total resources for lower-wealth families and only a marginal share for wealthy households.  

For families in the bottom quarter of the wealth distribution, accrued Social Security benefits account for about half of everything they own. For families in the top 10 percent, Social Security represents only about eight percent of total assets, dwarfed by holdings in financial assets, real estate, and business equity (see Figure 1). Yet under current law, wealthy retirees who claim at age 70 can still receive annual Social Security benefits exceeding $62,000 — roughly four times the poverty threshold for seniors.

This is an upside-down safety net. When automatic benefit cuts take effect in 2032, the retirees who rely most on Social Security will be hurt the most, while wealthy households will scarcely notice the change.

According to the CBO, that uniform 23-percent cut would reduce the total wealth of families in the bottom half of the distribution by more than 10 percent. For the top one percent, the hit would be barely noticeable: about two-tenths of one percent (see Figure 2). 

This outcome is not inevitable; Congress can target benefit reductions where they are most easily absorbed.  

Opponents of top-end benefit reductions argue that Social Security is an earned benefit, not welfare, and that cutting benefits for high earners violates that principle. They are right about one thing: workers pay payroll taxes with the expectation of receiving benefits. But that expectation was never a guarantee of open-ended, inflation-beating returns — especially for retirees who already enjoy substantial private wealth. 

Social Security, if it is to exist at all, should focus on preventing old-age poverty, not on providing wealthy retirees with an ever-growing worker-funded annuity layered on top of substantial private savings. When benefits grow faster than inflation and flow disproportionately to those who don’t need them, the program drifts away from its stated purpose and becomes increasingly difficult to justify.

The solution is not higher payroll taxes. Eliminating the payroll tax cap would push marginal tax rates above 60 percent in some states, reducing work and innovation, while still failing to target benefits where they matter most. Increasing payroll taxes for all workers would deprive younger working families of resources with which to grow their fortunes and build their own futures. 

Nor is the solution more borrowing. Social Security is already projected to add trillions to federal deficits over the next decade. Borrowing to preserve full benefits for wealthy retirees is fiscally reckless and economically unnecessary.

The sensible path forward is targeted benefit restraint. 

That means:

  • Slowing the growth of initial benefits for higher earners by adjusting the benefit formula and indexing those initial benefits to prices rather than wages (a stylized comparison of the two indexing rules follows this list).
  • Using a more accurate measure of inflation for cost-of-living adjustments for ongoing benefits, and phasing out adjustments entirely for high-income retirees.
  • Adjusting retirement ages to reflect longer life expectancy, with protections for workers who truly cannot work longer — which is the aim of the disability component of Social Security. 
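Here is the stylized comparison promised above: a minimal, hypothetical sketch (not the actual Social Security benefit formula) that assumes a $2,000 starting monthly benefit, 3.5 percent annual wage growth, and 2 percent annual price inflation, and asks how the initial benefit of a worker retiring 30 years from now would differ under wage indexing versus price indexing.

```python
# Stylized comparison of wage indexing vs price indexing of initial benefits.
# All numbers are hypothetical assumptions, not actual Social Security parameters.

TODAYS_INITIAL_BENEFIT = 2000.0  # assumed monthly benefit for a new retiree today
WAGE_GROWTH = 0.035              # assumed average annual wage growth
PRICE_INFLATION = 0.02           # assumed average annual price inflation
YEARS = 30                       # horizon until a future worker retires

# Under current law the formula effectively scales with wages; under price
# indexing it would scale with prices instead.
wage_indexed = TODAYS_INITIAL_BENEFIT * (1 + WAGE_GROWTH) ** YEARS
price_indexed = TODAYS_INITIAL_BENEFIT * (1 + PRICE_INFLATION) ** YEARS

print(f"Initial benefit in {YEARS} years, wage-indexed:  ${wage_indexed:,.0f}/month")
print(f"Initial benefit in {YEARS} years, price-indexed: ${price_indexed:,.0f}/month")
print(f"Benefit growth avoided by price indexing: {wage_indexed / price_indexed - 1:.0%}")
```

Because a price-indexed benefit still keeps pace with inflation, purchasing power at retirement is preserved; what slows is the growth of real benefit levels for future (and especially higher-earning) retirees, which is exactly the restraint the first item proposes.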

In practice, these changes amount to a gradual shift away from an earnings-related benefit and toward a flat, anti-poverty payment. If Social Security is going to persist, its role should be limited to what market earnings and private savings cannot reliably provide. Every step that trims excessive benefits at the top moves the program closer to that defensible boundary. 

Congress should act to prevent across-the-board benefit cuts without further indebting younger generations or extracting more resources from working Americans. Instead, lawmakers should focus reforms where they do the least harm and the most good — by trimming earned benefits at the top to secure endangered benefits for those at the bottom.

It may not be “fair,” but it’s the only plausible path forward. The goal of reform should not be to preserve Social Security in its current form, but to prevent the worst outcomes. Preserving benefits for those who depend on the program, while slowing benefit growth for those who do not, is the only way to reduce Social Security’s role as a reverse transfer from younger workers to wealthy retirees who do not need the support.

The nomination of Kevin Warsh to replace Jerome Powell as Federal Reserve Chair has many people wondering: What makes a good Fed chair? The answer, it turns out, depends on the environment in which the chair will operate. 

The characteristics that matter most for running an independent central bank differ from those for a central bank under pressure from political actors. Understanding this distinction is important for evaluating the president’s nominee.

The Case for Technical Competence

In an environment of genuine central bank independence, technical competence matters most. A qualified chair is a reputable monetary economist with strong academic credentials, someone who commands respect in financial markets and the economics profession.

Independent central banks with technically competent leadership achieve measurably better outcomes. They deliver lower inflation rates and more stable inflation expectations. When markets believe the Fed will respond appropriately to economic data, inflation expectations remain anchored even during temporary price shocks. This anchoring effect makes the Fed’s job easier and prevents above-target inflation from becoming entrenched in wage negotiations and pricing decisions.

When a chair’s analysis carries weight in the economics profession, the chair’s policy explanations are more persuasive to market participants. This credibility is a form of capital that takes years to accumulate and can be spent during crises when the Fed needs public trust most.

If the central bank is left to conduct monetary policy as it sees fit, a technically competent Fed chair is crucial.

The Case for Character

If the Fed lacks independence, strength of character and a willingness to resist political pressure are the more important traits. Technical competence is of little use when the central bank is not able to do what its members think it should. Indeed, it might be worth trading some technical competence for a strong spine in such a case.

History provides clear examples. Fed Chair Arthur Burns was technically competent. Prior to becoming Fed chair, he was a well-respected economics professor at Columbia University, where he taught Milton Friedman. He did pioneering work on business cycles with Wesley Clair Mitchell, which has been carried on by the National Bureau of Economic Research. Few economists possessed the technical expertise of Burns at the time. But all that expertise was of little consequence: Burns gave in to President Nixon’s pressure campaign before the 1972 election, lowering interest rates when economic conditions didn’t warrant it. His decision contributed to the high inflation of the 1970s and damaged the Fed’s credibility for years.

Paul Volcker was a sharp economist, to be sure. But he was not as technically competent as Burns. He did not hold a prestigious professorship. He had not done pioneering work in macroeconomics or monetary economics. But he had a strong spine. When President Reagan urged Volcker to commit to not raising interest rates ahead of the 1984 election, Volcker refused. In doing so, he preserved the principle that the Fed chair doesn’t make policy commitments to the White House. His unwillingness to compromise on institutional boundaries helped restore price stability and solidified the Fed’s reputation for independence.

When political pressure threatens independence, the chair’s character matters more.

The Independence Dilemma

When independence is in doubt, credibility and good policy choices may be at odds with each other. Consider a scenario where White House pressure happens to align with the economically correct policy decision. Perhaps the administration wants rate cuts, and economic data genuinely support easing. The Fed then faces a difficult choice.

If the Fed cuts rates, the public may view the decision as capitulation to political demands. If the Fed refuses to cut rates to signal its independence, it makes the wrong economic decision to preserve the appearance of autonomy. Either way, the Fed’s reputation suffers. The public will come to believe the Fed responds to political factors rather than economic data, regardless of which choice the institution makes.

The first-best solution in such a situation is clear: restore independence. An independent Fed can focus on conducting policy well, without risk to its credibility due to perceived political capitulation. But first-best solutions are not always possible.

Recent events provide much support for the view that we need a second-best solution. The president has consistently called for lower interest rates. He has attempted to fire Fed Governor Lisa Cook. He nominated his CEA Chair, Stephen Miran, who is widely believed to be a Trump loyalist, to fill the balance of Adriana Kugler’s term. And, in January, his Department of Justice subpoenaed Chair Powell. The Fed, in other words, is under pressure. 

How Warsh Stacks Up 

With all of this in mind, how should one evaluate Trump’s pick to replace Powell?

Although Kevin Warsh is not a traditional academic economist, he nonetheless possesses a high degree of technical competence. He previously served on the Fed Board from 2006 to 2011. He is currently the Shepard Family Distinguished Visiting Fellow in Economics at Stanford University’s Hoover Institution and lectures at Stanford’s Graduate School of Business. Before joining the Fed, Warsh also served as Vice President and Executive Director of Morgan Stanley & Co. in New York.

Warsh also has a strong spine. While initially on board with the Fed’s large-scale asset purchases as an emergency liquidity tool, he later came to oppose using the balance sheet as a permanent tool. Fed liquidity, he warned, is a “poor substitute” for functioning private markets. This view was decidedly out-of-fashion at the Fed. And yet, Warsh stuck to his guns. Today’s Fed, under political pressure as it is, would be well served by his strong character—provided that it is used to bolster the Fed’s independence.

How will Warsh use his strong spine? That’s an open question. If he pursues the facts as he sees them, he might deliver a much-needed dose of credibility to a struggling institution. If he does the president’s bidding—or is perceived to be doing the president’s bidding—he will further erode the Fed’s credibility.

A year or so ago, I met my friend’s mother for the first time at a wedding. She told me that she was Mississippi born and raised, but that after her kids were born she and her husband decided to move to North Carolina. Turns out the whole extended family was from Mississippi, still lives there, still loves it there.

“Why did you leave?” I asked.

“Because we had little kids, and the schools were terrible.”

Her answer didn’t surprise me – I’d heard about Mississippi’s bad schools before. But while its schools were terrible enough to induce a cross-country move when her kids (now in their mid-twenties) were young, that’s no longer the case. 

Mississippi has become an educational role model, a shining example of what’s possible inside public schools. It’s a turnaround story no one expected.

Mississippi is, on average, a state that people leave. It has the fourth-lowest in-migration rate in the country (only Louisiana, Michigan, and Ohio have fewer transplants from other states), while 36 percent of its young people move out-of-state. On net, its population is shrinking. Between 2020 and 2024, 16,000 more Mississippi residents died than were born.

Mississippi is a state known for its poverty, its unreliable infrastructure, and its substandard health care system – as well as its poor overall public health. It leads the nation in pregnancy-related deaths and high infant mortality rates. Its capital city, Jackson, has contamination issues with its water supply (with an annual average of 55 breaks per 100 miles of water line, nearly four times the national safety limit of 15). Mississippi consistently comes in as the poorest state in the country, with one in four Mississippi children living below the poverty line.

It’s not a state most Americans look to as a role model.

But over the past fifteen years, this unassuming Deep South state has been quietly pulling off one of the most impressive feats in American public education: while literacy rates around the nation have been falling, Mississippi’s have been steadily rising.

Historically, Mississippi’s school system performed about as well as its health care system and its economy: that is, near the bottom in the national rankings. For years, Mississippi ranked 50 out of 50 in the country for K-12 education. But all that changed in 2013, when Mississippi passed the Literacy-Based Promotion Act and embraced the science of reading, overhauling its K-3 literacy curriculum and its teacher training.

Since 2013, Mississippi’s overall K-12 achievement scores have improved significantly. In 2013, Mississippi came in 49 out of 50 states on the NAEP (Nation’s Report Card) for fourth grade reading. In 2021, that number jumped to 21 – and in 2024, it rose all the way to ninth in the nation.

All of this was achieved while Mississippi faced a slew of challenges: teacher shortages, low teacher pay, and under-resourced special education programs, to name a few – the things critics so often point to as the culprits for poor educational outcomes. And all of this was achieved too in a state where 26-28 percent of its students are living below the poverty line – the children who are historically the most underserved (and therefore the lowest performing) students in the country.

All these challenges make Mississippi’s achievements more impressive, and the conclusion more irrefutable: reading science works. A measured, methodical, science-driven approach to teaching literacy results in – you guessed it – unprecedented levels of literacy.

That should not be a headline. And yet it is, printed and reprinted all over the country, colloquially referred to as “the Mississippi Miracle” – because the comeback story is so impressive, so unprecedented, so unexpected.

And yet, the strange thing isn’t that one of the poorest and most under-resourced states in the country implemented this – the strange thing is that it’s so rare as to be noteworthy.

Mississippi’s turnaround story is, as most things in life, a story of cause and effect – and in this case, the causes are quite few: a scientific approach to reading, a teacher education program consistent with that scientific approach, early identification and intensive intervention for students who are struggling, and a commitment to honoring the integrity of grade level standards (if a child isn’t reading at a third grade level, they don’t get advanced to third grade).

The “scientific approach to reading” in question is – no surprise – teaching via phonics, the time-tested approach to literacy that has worked for centuries, but which modern public schools seem strangely allergic to.

The simplest headline summary of the Mississippi Miracle is that Mississippi started teaching its kids to read using phonics – and stopped advancing kids who hadn’t learned the material. Their literacy scores turned around seemingly overnight. But of course, the story is more complicated than that.

Mississippi’s comeback started all the way back in 2000, in the private sector, when corporate executive and philanthropist Jim Barksdale donated $100 million to launch the Barksdale Reading Institute, a nonprofit intended to turn around Mississippi’s poor literacy rates. Barksdale, whose résumé included serving as the COO of FedEx, the CEO of AT&T Wireless, and the CEO of Netscape, was deeply committed to his home state of Mississippi and deeply concerned about the literacy rates in its schools.

He saw the literacy crisis for what it is: the deficit of a fundamental life skill, with lasting implications for the entire life trajectory of children robbed of the chance to learn to read.

As sociology professor Beth Hess wrote to The New York Times after Barksdale’s donation was announced (after praising Barksdale himself): “It is disturbing that the state of Mississippi will be rewarded for its continuing failure to tax its citizens fairly and to allocate enough money to educate students, especially in predominantly black districts. This should have been a public rather than private responsibility.”

Yet as is so often the case, it was private sector efforts that led to change, unfettered by bureaucracy and untethered from the slow-moving weight of the public sector machine.

The Barksdale Reading Institute tackled the reading crisis at every level: improving how reading instruction was taught inside Mississippi’s teachers’ colleges, engaging with parents and early childhood programs (like Head Start), and training classroom teachers to teach phonics.

In 2013, Mississippi’s public sector followed suit, implementing two critical steps: passing a law that required all third graders to pass a “reading gate” assessment to advance to fourth grade, and appointing Carey Wright as Mississippi’s superintendent of education, who in the words of journalist Holly Korbey, “reorganized the entire education department to focus on literacy and more rigorous standards.”

Under the stewardship of Wright, Mississippi trained over 19,000 of its teachers in teaching phonics using the science-backed instructional program LETRS. In the early days of the literacy push, the state focused more on teacher training than on curriculum, but in 2016 it expanded its efforts to promote the use of curricula it felt best supported literacy training.

Compared with a full curriculum overhaul, the third-grade reading gate might sound like a small change, but it’s a critically important piece of the puzzle. Across the country, grade advancement is largely treated as a product of age, not of academic ability. Students with “failing grades” can be held back (and often are), but a passing grade is a low bar: a “D,” often considered a passing grade, usually means proficiency of 60 percent, meaning a child can miss 40 percent of the third-grade material and still advance to fourth grade.

The third-grade reading assessment ensures that children aren’t advancing to harder material with large gaps in their knowledge, that they’re set up with the skills they need to succeed, rather than being thrown in the deep end to fail. It’s also an important milestone: third-grade reading proficiency is a leading indicator of long-term academic success, with poor third-grade readers far more likely to drop out of high school. And as evidenced by Mississippi’s rising math scores (even though most of its energy is being directed toward literacy), the ability to read correlates with better performance across all subjects.

All this effort, unsurprisingly, led to swift and measurable results. Not only did Mississippi come in ninth in the nation in fourth-grade reading in 2024, but it scores even higher when weighted for demographic factors like poverty.

None of this should be scientifically surprising (because obviously teaching kids to read using the scientifically backed approach was going to work). But it’s politically shocking because, despite ample research, schools across the country resist teaching students to read using phonics, and their literacy rates flounder as a result.

Other states across the South (a group writer Karen Vaites has dubbed the “Southern Surge” states) have followed Mississippi’s lead. Louisiana implemented a similar reading program in tandem with Mississippi, beginning in 2012 and seeing similar results. Tennessee implemented approaches borrowing from Mississippi and Louisiana in the 2018-19 school year, and Alabama followed suit in the 2019 legislative session. Each state is seeing success with its amended approach to reading education.

None of these states have ample funding; each is in the bottom half nationally for per-pupil spending. All of these states have large numbers of students below the poverty line. Some have teacher and resource shortages. And yet, by implementing a pure phonics approach to reading instruction, they’re blowing past states that have more funding and more resources but are using a less rigorous approach.

In the words of writer Kelsey Piper, “illiteracy is a policy choice.” We know that teaching reading via phonics works. We know how to do it. And, thanks to Mississippi, we know it can be effective even with a limited budget and limited staff. 

Thanks to Jim Barksdale, we know that private sector pushes toward better policy can be effective. And thanks to states using non-phonics literacy approaches (and whose test scores are falling while Mississippi’s are rising), we know what not to do, too. 

The challenge now is to stop doing what doesn’t work, and start moving toward what does – not just in Mississippi and the Southern Surge states, but all across the country.

For most of us, especially those of us who think about it a lot, the Roman Empire conjures up famous names of such men as Caesar, Augustus, Nero, Marcus Aurelius, and a few others of the imperial elite. We might also think of grand structures like the Colosseum, the Appian Way, and the Pantheon, or massive spectacles from gladiator duels to races at the Circus Maximus. Dozens of books explore the Empire’s wars against Dacia in southeastern Europe, the Iceni in Britannia, Germania in northern Europe, and the Jews in Palestine.  

The point is, we tend to think of the extraordinary rather than the ordinary, or, to put it another way, the macro instead of the micro. Why? As Kim Bowes, a professor of classical archaeology at the University of Pennsylvania, explains in Surviving Rome: The Economic Lives of the Ninety Percent, until recently the ordinary lives of ordinary Romans eluded us for lack of evidence. Only in the last three or four decades, thanks to an explosion of archaeological digs often triggered by construction projects across Europe, have we been showered with new knowledge about the lives of what Bowes labels “the 90 percent.”

“It’s a delicious irony,” she writes, “that more information about the rural Roman 90 percent has emerged from the construction of Euro Disney [about 15 miles east of Paris] than from the well-intentioned excavations designed to find them.”

Perhaps we assumed that “everyday working people” in ancient Rome didn’t write much about themselves. Certainly, the well-known chroniclers of the day — Sallust, Livy, and Tacitus — didn’t focus on them; they mostly wrote about the big names who wielded political power. But thanks to discoveries of the past four decades — including graffiti, writings on broken pottery and wooden tablets, coins, documents, farm implements, and scientific analysis of soil samples and ancient ruins — we’ve learned more about the lives of ordinary people in the Empire than historians ever knew before.

This “shower of information about Roman farms and fields, crops and herds, and the geology and soil science,” Bowes argues, is transforming our understanding of life at the time. Her book is the first notable effort to tell the world what these recent findings reveal. 

Let’s remember that the history of ancient Rome did not begin with the Empire. For 500 years before its first emperor, it was a remarkable res publica (a republic) known for the rule of law, substantial liberty, and the dispersion of power. When that crumbled into imperial autocracy late in the first century BCE, what we know as “the Empire” took root and lasted another 500 years. Weakened internally by its own welfare-warfare state, the Western Roman Empire centered in Rome fell to barbarians in 476 CE. Bowes’ attention is drawn exclusively to that second half-millennium.

The Empire evolved into a very different place from the old Republic. By the dawn of the second century CE, it would have been unrecognizable to Roman citizens of the second century BCE. Loyalty to the state and one-man rule had largely superseded the old republican virtues. Many emperors were ghastly megalomaniacs to whom earlier Romans would never have groveled.

Despite the general decline in morals and governance that characterized much of the imperial era, ordinary Romans fared better than you might surmise, at least until the decline overwhelmed them in the late fifth century. Bowes attributes this to “cagey managers of small resources” who lived in “precarity” but “doubled down on opportunities.” The picture she paints with recent evidence is one of hard-working, resourceful farmers, tradesmen, and shopkeepers making the very best of a tough situation and, for the most part, doing remarkably well at it. 

Ongoing excavations at Pompeii, destroyed by a volcanic eruption in 79 CE, have yielded fascinating details of commerce and coinage in the city: 

Bar and shop owners did more of their business in bronze and less with the prestige metals. Resellers of bulk oil and wine, and above all artisans producing for larger-scale markets, used gold and particularly silver. 

Bowes reveals that much of the Roman world experienced a “consumer revolution” as the Empire stretched from the border with Scotland to the Levant and across north Africa. Roman roads and trade facilitated it, even as government in Rome grew more tyrannical. Consider this finding:

New data from archeology, and newly reconsidered texts like the Pompeii graffito, find working people, even some of the poorest, consuming far in excess of our previous expectations. From small-time traders to enslaved servants, farmers to craftsmen, Romans ate more and different foods, purchased rather than made many of the items they used…Their levels of household consumption were thus historically quite high…

The immense quantity of data gleaned from the recent discoveries shows up in numerous tables, charts, graphs and illustrations in Bowes’ book. From those entries, we learn of the accounts of a Roman beer-buyer; the percentage of farms with lamps, candlesticks and window glass; which cereal crops were grown on small farms; the real incomes of artisans and shopkeepers; the prevalence of metabolic disease among children, and so much more.  

For comedians who often poke fun today at British teeth, there’s this tidbit: Data suggest that the Roman conquest of Britain brought dramatic declines in dental health and that “British urbanites had the same or perhaps even worse dental health as the mostly urban Italian sample.” 

Nonetheless, new evidence suggests that “the majority of Romans were consuming a relatively robust caloric package.” Bowes tells us, 

This meant a lot more energy to do work, and thus a lot more work could be done. The Coliseum was not built on 1,900 calories per day…The Coliseum was not built by workers scraping their porridge out of a single pot. 

So, we now know that ordinary Romans during the Empire likely lived better than historians previously believed. They exhibited “relentless persistence and shrewdness,” “perseverance and ingenuity,” a degree of “grit and hustle” we can appreciate more than ever. For several hundred years, their accomplishment “was their ability to wrest a living from a hard and complex world.”  

We also know that it didn’t last. As the Empire disintegrated in the fifth century, it became ever more difficult for many, and impossible for a great number, to eke out a living. The “Dark Ages” that commenced with the fall of Rome saw economic and cultural decline and a massive depopulation. Life spans shortened, mortality rose, and standards of living plummeted. At its height, the city of Rome itself was home to a million people; a few centuries later, it plunged to a nadir of barely 30,000.  

Though Bowes falls short of saying so herself, I think the moral of the story is this: A resourceful people can endure a great deal before they throw in the towel, but a thriving civilization depends on what the Roman Empire ultimately forfeited: peace, freedom, property rights, and the rule of law. 

One of the most robust findings in economics is that, with few exceptions, people respond to incentives, rather than intentions or moral principles. Individuals operate under constraints of time, information, and risk, and as such, they will predictably and understandably adjust their behavior to whatever metrics ensure success. To do otherwise is irrational. When performance is evaluated and rewarded using metrics like quotas, behavior shifts toward satisfying those quotas to secure the benefits thereof. This happens in firms, schools, hospitals, police departments, and regulatory agencies, even when everyone understands, at least in the abstract, that the metric is distinct from the goals to be achieved.

Immigration enforcement provides a vivid case study of this general institutional failure mode. Under recent policy changes, US Immigration and Customs Enforcement has operated under explicit arrest targets in the form of daily and annual numerical goals meant to demonstrate enforcement intensity and resolve. The political rationale for these targets is straightforward. It is to signal to voters and political supporters that the current administration is serious about protecting the border and clamping down on illegal immigration.

But economics teaches that what gets measured gets optimized and gamed, mostly for reasons having to do with incentives. In the case of immigration enforcement, when success is defined in numerical terms, agents will pursue the cheapest path to those numbers rather than pursuing individuals and groups that are harder to find and detain. That is rational given the incentives created by the Administration, namely the rewarding of aggressive arrest quotas. Whenever institutions or individuals face quotas, they are likely to focus on the low-hanging fruit. Every hour spent pursuing a hard unit of output is an hour not spent racking up easy ones. Effort that is devoted to high-risk targets, like violent criminals and well-entrenched gangs, threatens performance metrics in ways that low-risk targets do not. When failure to meet quotas carries professional consequences, agents will avoid activities that jeopardize the count, even if those activities are more closely aligned with the stated mission.

The logic is straightforward. Violent criminals, gang leaders, and professional smugglers are difficult to locate and expensive to apprehend, often relying on networks of other people to help them evade detection. Pursuing such criminal organizations requires investigations, coordination across jurisdictions, surveillance, and uncertain outcomes, making it easy for agents to come up empty-handed. By contrast, unauthorized immigrants who are otherwise law-abiding are comparatively easy to find. They have fixed residences, work regular jobs, and their children often attend the local school. Many are already interacting with the state through legal channels, including standard immigration check-ins.
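A toy model makes the arithmetic concrete. The numbers below are purely hypothetical (an assumed monthly time budget and assumed hours per case, not actual ICE figures), but they show why an agent judged only on a count will rationally fill it with easy cases:

```python
# Hypothetical illustration of quota-driven target selection.
# None of these figures come from ICE data; they are assumptions for the sketch.

HOURS_PER_MONTH = 160   # assumed time budget for one agent
HOURS_EASY_CASE = 8     # assumed cost of arresting an embedded, low-risk person
HOURS_HARD_CASE = 80    # assumed cost of a high-risk investigative case

def arrests(share_of_time_on_hard_cases: float) -> float:
    """Total arrests counted toward a quota for a given allocation of effort."""
    hard_hours = HOURS_PER_MONTH * share_of_time_on_hard_cases
    easy_hours = HOURS_PER_MONTH - hard_hours
    return hard_hours / HOURS_HARD_CASE + easy_hours / HOURS_EASY_CASE

for share in (0.0, 0.5, 1.0):
    print(f"{share:.0%} of time on hard cases -> {arrests(share):.1f} arrests/month")
# Because the quota counts every arrest the same, the count is maximized
# by spending no time at all on the hard, high-value cases.
```

Change the assumed hours however you like; as long as hard cases cost more time than easy ones and the quota counts them equally, the easy cases win.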

When arrest quotas rise, then, it’s no surprise that arrests have accelerated disproportionately among those who are easiest to find and arrest rather than those who pose the greatest threat. Recent data confirm this pattern. Enforcement activity has surged, but the majority of arrests involve individuals without prior criminal convictions, a distribution consistent with quota-driven optimization rather than threat-based prioritization. Given the career and political incentives behind meeting those quotas, this is exactly what we should expect; it would be surprising if agents behaved otherwise.

There is a deeper problem here, though, that Hayek can help us diagnose. Quotas assume that central authorities know in advance how enforcement effort should be allocated across a vast and heterogeneous landscape. They assume that arrests are sufficiently homogeneous, such that merely counting them captures what matters. They assume that the marginal value of the next arrest is roughly constant across contexts. And they often make these assumptions without the local knowledge needed to justify them.

Here the analogy to central planning becomes illuminating. Central planners, like those in Cuba or the former Soviet Union, fail because they lack access to the dispersed, tacit, and constantly changing knowledge required to allocate resources efficiently. As Hayek argued, markets work not because anyone knows the right answer in advance, but because competition allows agents to discover it through decentralized experimentation and feedback, drawing on information that would otherwise be unavailable. Enforcement environments share this complexity because, among other reasons, threats vary by region, network, industry, and time. A centralized quota cannot incorporate this information, partly because it treats arrests as interchangeable units in the same way that central plans treat tons of steel or bushels of grain as interchangeable.

This helps explain why quota-driven enforcement is insensitive to conditions on the ground. It cannot adapt to local threat profiles because it does not reward adaptation. It cannot prioritize effectively because prioritization is costly and quotas reward speed, and it cannot learn from failure because in most cases it lacks the local knowledge needed for the adjustment. Of course, politicians can pivot when citizens and voters push back, but it is necessarily a less detailed and efficient process than, for example, markets and prices. 

Worse still, enforcement that deliberately and disproportionately targets working, embedded individuals produces sudden and uneven labor supply shocks. Industries that rely heavily on immigrant labor, like construction and agriculture, experience disruptions that cascade via prices, output, and complementary employment. These are downstream consequences of enforcement choices shaped by quotas. When enforcement prioritizes ease of arrest over social cost, it predictably targets workers rather than criminals, disrupting productive relationships that markets had already coordinated. The result resembles what happens when planners disrupt supply chains without understanding their internal complementarities.

A common defense of quotas appeals to accountability. Without numerical targets, agencies may underperform, selectively enforce, or drift away from their mandates. That said, the existence of a real problem, namely accountability, is hardly a defense of a flawed solution based on quotas that measure a single dimension without the necessary local knowledge.

The central lesson is rooted in the institutional design and incentive structures under which these immigration agents operate. When complex, knowledge-intensive activities are governed by centralized numerical targets, agents will rationally pursue those targets in ways that undermine the broader purpose of the institutional effort. Perverse incentives and poor institutional design are not the only explanatory factors here — personal choice and moral character matter, too — but they are a big part of the explanatory pie.

Substantive change has occurred in the subjects examined in my second book, Gold and Liberty (AIER, 1995), since it was published three decades ago. That change has been mostly negative, unfortunately, especially during the first quarter of this century. As economic liberty has decreased, the dollar price of gold has increased, a historical pattern that is by no means random.

The theme of Gold and Liberty is straightforward: the statuses of gold-based money and political-economic liberty are intimately related. When a government is sound, so also is money. One of the book’s premises is that sound money is gold money (or gold-based money) because it’s economically grounded, non-political, and exhibits a fairly steady purchasing power over long periods. A second premise is that while sound government makes sound money possible, sound money alone can’t ensure fiscal-monetary integrity in public affairs.

A sound government is, in this sense, one that respects private property and the sanctity of contract, a state that’s constitutionally limited in its legal, monetary, and fiscal powers. Sound money is a predictable and reliable medium of exchange, serving as a reliable yardstick due precisely to the relative stability of its real value (that is, “the golden constant”). Unrestrained states “redistribute” wealth rather than protect it, tending to spend, tax, borrow, and print money to excess. That erodes an economy’s financial infrastructure.

The dollar-gold price reached yet another all-time high milestone ($4,000/ounce) in October, having surpassed $3,000/ounce last March and $2,000/ounce only thirty months ago. So far this month, it has averaged $4,400/ounce – triple its level in March 2020 (when COVID lockdowns and subsidies began). Gold breached $1,000/ounce sixteen years ago, amid the financial crisis and “Great Recession” of 2009. Only two decades ago, it was $500/ounce.

What Bretton Woods Got Right

Under the relative discipline of the Bretton Woods gold-exchange standard (1948-1971), when the dollar-gold ratio was officially maintained at a steady level ($35/ounce), the Fed’s main job was to keep it there and issue neither too few nor too many dollars. Its job was not to manipulate the economy by gyrating interest rates. The dollar wasn’t a plaything in foreign exchange. Both US inflation and interest rates were relatively low and stable – not so since.

Gold and Liberty illustrates how the commonly cited dollar-price of gold is really the dollar’s value (purchasing power) in terms of real money, such that a rising “gold price” reflects the dollar’s debasement by profligate politicians. When this occurs – due largely to perpetual expansion of the fundamentally unnatural, unaffordable, and unsustainable welfare-warfare state – public finance (spending, taxing, borrowing, money creation) becomes both political and capricious. At the base there’s an erosion of real liberty. From that comes money debasement, the trend since the US left the gold-exchange standard in 1971.

The value of a monopoly-issued (fiat) currency reflects the competence and quality of public governance no less than a stock price reflects the competence and governing quality of a private-sector company. The empirical record makes clear that the US dollar held its real value (in gold ounces) for most of the period from 1790 to 1913, when government spending was minimal and there was no federal income tax or central (government) bank. In contrast, since the US established its money monopolist (the Federal Reserve) in 1913, the dollar, whether measured as a basket of commodities or consumer goods, has lost roughly 99 percent of its real purchasing power, most of that since the abandonment of the gold-linked dollar in 1971.

Debasement doesn’t get much worse than 99 percent – unless the loss occurs quickly and catastrophically, as in a hyperinflation. That’s not impossible in America’s future.

Having re-read Gold and Liberty recently, I feel both pride and chagrin. I’m proud that it refutes many monetary myths, gets the analysis basically right, and is prescient. It’s got solid data, history, economics, and investment advice. But its main, most helpful purpose is to make clear that money, banking, and the economic activity they support remain sound only in a capitalistic setting. That is, when government sticks to protecting rights to life, liberty, and property, by providing the three necessary functions of police, courts, and national defense.

Measures of Gold and Freedom

Why was this not the path taken this century? Why has government been expanded so much that it now routinely violates rights and spreads chronic fiscal-monetary uncertainty? Where’s the case, made so well in the second half of the twentieth century, for a more classically liberal political economy? In short, where have all the pro-capitalists gone? They were once dominant – and influential. Given the foundation laid by “Reaganomics” (1980s), the end of the USSR and the Cold War (1990s), plus US budget surpluses four years running (1998-2001), the first quarter of the twenty-first century could have entailed a still-purer capitalist renaissance. Instead, vacuous voters and pandering politicians from everywhere along the ideological spectrum have preferred more welfare, more warfare, and more lawfare. Many youths in recent decades tell pollsters they prefer socialism to capitalism (whether from ignorance or malevolence isn’t clear). New York City now has an overtly socialist mayor. Compared to 1995, America now has a larger, but still-burgeoning, welfare-warfare state that necessitates massive borrowing and money printing, as tax avoidance and evasion tend to cap the state’s direct “take” (see Hauser’s Law).

Unfortunately, the gains of the last quarter of the last century seem to have been squandered in the first quarter of this century. Monetary myths persist. Some have proliferated and worsened. Statists push a capricious “modern monetary theory” in hopes of more easily funding a burgeoning welfare-warfare state with minimal resort to taxation. Influential Keynesians and policymakers still insist that inflation is caused by “greed” or by an economy that “overheats” and thus warrants a periodic recession. Central banks this century seem more reckless and resistant to rules compared to the 1990s, as they unabashedly fund profligate states by chronic debt monetization, and their supposed “independence” dissipates.

One metric not available for the 1995 book was an index of economic freedom by nation, globally. Such measures were still in the early stages of development at the time. Two main measures have since been constructed: one by the Heritage Foundation in Washington (published annually since 1995) and one by the Fraser Institute in Canada (with readings at five-year intervals extending back to 1970). An account of long-term trends in both gold and liberty would be interesting.

Figure One plots the dollar-gold price and the index of economic freedom (a splice of the Heritage-Fraser measures) since 1975. If the gold-liberty thesis is plausible, we should see an inverse relationship, or negative correlation, between the variables: liberty up, gold down; or liberty down, gold up. That’s just what we observe. Indeed, the inverse relationship is stronger in the latter half of the period (2000-2025) than in the first half (1975-2000). Since 2008, the gold price has ascended while US economic freedom has descended.
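For readers who want to check the claim against their own data, a negative correlation of this kind is easy to compute. The sketch below uses a handful of made-up numbers purely to show the mechanics and the sign convention; the actual inputs would be the gold-price history and the spliced Heritage-Fraser index.

```python
# Minimal Pearson-correlation sketch. The two short series below are
# hypothetical placeholders, not the actual gold or economic-freedom data.

from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

freedom_index = [8.2, 8.1, 7.9, 7.8, 7.6]      # hypothetical US freedom readings
gold_price    = [900, 1200, 1700, 2400, 4000]  # hypothetical year-average gold prices

r = pearson(freedom_index, gold_price)
print(f"correlation: {r:+.2f}")  # a negative value means: liberty down, gold up
```

Run on the actual series, the same calculation would quantify the inverse pattern the figures display.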

Figure Two plots the dollar-gold price and US economic freedom for this century only. The inverse relation between gold and liberty is even more noticeable.

In 2007, only three countries were economically freer than the US; by 2015, 11 nations were (a decline I documented a few years ago in “The Multiyear Decline in US Economic Freedom”). Today, 24 are freer than the US. I wrote then:

“most people, including many professional economists and data analysts (who should know better) seem to cling to the impression that US economic freedom is high and stable, while China has become less free economically. The facts say otherwise, and the facts should shape our perceptions and theories. Human liberty also should matter; much of our lives are spent engaged in market activity, pursuing our livelihoods, not in political activity. Finally, as a rule (which is empirically supported) less economic freedom results in less prosperity. Neither major US political party today seems much bothered by the loss of economic freedom. They don’t talk about it.” I added that “without a reversal in the trend of declining economic freedom in the US, we’ll likely be suffering more from less liberty, less supply growth, and less prosperity.”

Recently, the relative holdings of foreign central banks have been shifting away from large portfolios of US debt securities toward gold. In dollar value, the world’s central banks now hold more gold than US securities. But this is due mostly to gold’s price boom relative to the prices of US Treasury notes and bonds, not to any material rise in the banks’ physical gold holdings. In short, the shift is an effect, not the cause, of gold’s price rise. The latter is due to the Fed’s excessive issuance of dollars, which is due to the US Treasury’s excessive issuance of debt to be monetized, which is due to the US Congress’s excessive spending, which in turn reflects a government no longer limited by a constitution or a gold standard.

Central banks could have sold some gold in recent years to rebalance the composition of their reserve holdings, but they haven’t. Why not? Are they now “gold bugs”? One might hope that their refusal to diversify would make it easier to return to the gold standard, but that seems unlikely given the fiscal-monetary prodigality we’ve witnessed so far this century. Here’s how I explained it in the book, when I was more optimistic about a return to monetary integrity.

If there is ever a return to a gold standard, it will not be accomplished by convening government commissions, which do no real scholarship and are purely bureaucratic undertakings, which perpetuate existing policy. Nor will a gold standard ever be properly managed by central banking, which is inimical to gold. The return to gold will require a sustained intellectual effort from academic economists and monetary reformers who uphold free markets, the gold standard, and free banking. It will require a major shift away from the welfare state that central banks are enlisted to support. Above all, it will require a return to classical liberalism based on a sound philosophic footing of respect for individuals and their right to be as free as possible from coercive government.

Meanwhile, it is encouraging that gold increasingly is in the hands of market participants instead of central banks. Since 1971, investors all over the world have been buying gold in the form of coins, bullion, and gold mining shares, primarily to protect their savings against the ravages of unstable government money. Meanwhile, although central banks and national treasuries continue to sit atop most of the gold they last held as reserves under the Bretton Woods system, they have somewhat reduced their gold holdings via sales and, more significantly, greatly increased their holdings of government debt. Gold now is a far smaller proportion of official reserves than it was in 1971. If these trends persist, the world’s central banks will be known solely as repositories of government debt, not of gold. In 1913, central banks and government agencies held about 30 percent of the world stock of gold. This proportion reached a peak of 62 percent in 1945 before falling back to 30 percent today. Where is this percentage headed? Central banks and governments as a group tend not to accumulate gold anymore and occasionally they sell it. Meanwhile, the world’s gold stock grows 2 percent every year. So the portion of gold held by governments should continue falling, absent a policy shift. With less and less of the total world stock of gold held by central banks and national treasuries, a greater portion is held privately. This was the situation before the rise of central banking. With the legalization of gold ownership and gold clauses, one might envision a return to gold de facto.

Why Gold Has Won

In 2020, recognizing that a return to a gold standard was less likely with every passing year (and crisis), I proposed a gold-based price rule that the Fed could follow (“Real and Pseudo Gold Price Rules,” Cato Journal). It’s a practicable, efficient system, but the Fed doesn’t consider it – and I can guess why. Central banks can’t afford to listen to reason, given their powerful and needy clients: deficit-spending treasuries and legislatures. As I wrote:

Most central banks in contemporary times attempt monetary central planning without a clear or coherent plan, consulting an eclectic array of measures without focus. In effect, they rule without rules. Economists by now are reluctant to recommend rules that central banks are neither motivated nor required to adopt and would drop in haste in the heat of the next crisis. Much monetary policymaking now embodies the subjective preferences of policymakers and their clients: overleveraged states.

It’s as clear now as it was in 1995 that gold is an ideal monetary standard, even though sovereign powers at times (and for the entirety of this century) have refused to recognize or use gold for that crucial purpose. But consider just one important implication, pertaining to investments. Precisely because (and to the extent that) sovereigns refuse to recognize gold so they can remain fiscally unconstrained (profligate), the result is nonetheless bullish for gold. By not making their money “as good as gold” and by precluding a return to a gold standard, sovereigns make possible returns on gold that are very good indeed – often superior to those on the most popular alternative: equities. Table One illustrates how fiscal profligacy and monetary excess have favored returns on gold relative to those on the S&P 500. Not shown is that gold has outperformed the S&P 500 in nearly two-thirds of the years this century, by an average of +17 percentage points per year; it underperformed only one-third of the time, by an average of -14 percentage points. Oddly, most investment advisors eschew gold and routinely recommend large portfolio allocations to equities.
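
The outperformance arithmetic is straightforward to reproduce for any pair of annual return series. A rough sketch, using invented returns rather than the actual data behind Table One:

```python
# Sketch of the outperformance arithmetic: how often gold beat the S&P 500,
# and by how much on average, given two annual return series (in percent).
# The returns below are invented for illustration; substitute actual data
# to reproduce figures like those cited alongside Table One.
gold_returns = [25.0, 3.0, -10.0, 18.0, 7.0, -2.0]
spx_returns  = [-5.0, 12.0, 1.0, 10.0, -15.0, 20.0]

margins = [g - s for g, s in zip(gold_returns, spx_returns)]
ahead   = [m for m in margins if m > 0]
behind  = [m for m in margins if m <= 0]

print(f"gold outperformed in {len(ahead)} of {len(margins)} years")
print(f"average margin when ahead:  {sum(ahead) / len(ahead):+.1f} percentage points")
print(f"average margin when behind: {sum(behind) / len(behind):+.1f} percentage points")
```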

Friends of liberty and prosperity may feel chagrin, as do I, about this century’s innumerable, unnecessary financial debacles. But they can also feel consoled, satisfied, and even gleeful if they’ve trusted gold more than central bank alchemists. They’ve likely been mocked – by fans of fiat currency and cryptocurrency alike – for clinging to their “mystical metal,” “shiny rock,” or “barbaric relic.” But name-calling isn’t a good argument – nor a good investment strategy.

The US Census Bureau just released state population data for mid-year 2025, along with updates for all previous years back to the 2020 Census. The Census estimates population growth with data on births, deaths, international migration, and “domestic migration” (among states and territories of the US). I always enjoy looking at the domestic migration data because they tell us a lot about where Americans prefer to live.

Freedom predicts net domestic migration fairly well. Many people have pointed out that Americans tend to move from “blue” to “red” states. The driving factor is not partisanship itself, but the different policies offered to residents of these states. The federal level has long been more complicated, but at the state level, Republican-led states still tend to enact “Reaganite” policies of limited government and free enterprise, while Democrat-led states tend to enact special-interest-oriented regulations and spending programs.

I’ve also seen a lot of people ranking states by the total number of net domestic migrants (interstate moves in minus moves out). Obviously, bigger states are going to dominate the top and bottom of these rankings. Net domestic migration over a period, expressed as a percentage of the initial population, is much more useful. To compare periods of different lengths (as reporting periods often differ), divide by the number of years to yield an estimated average annual rate of net domestic migration.
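
As a concrete sketch of that calculation (the inputs here are hypothetical, not actual Census Bureau estimates):

```python
# Average annual net domestic migration rate: net movers over the period,
# divided by the initial population, divided by the period length in years.
# The example inputs below are hypothetical, not actual Census Bureau estimates.
def avg_annual_net_migration_rate(net_migrants: int,
                                  initial_population: int,
                                  years: float) -> float:
    """Return the rate as a percentage of the initial population per year."""
    return net_migrants / initial_population / years * 100

# A state of 2,000,000 people gaining 157,500 net domestic migrants over the
# 5.25 years from April 2020 to July 2025:
print(f"{avg_annual_net_migration_rate(157_500, 2_000_000, 5.25):.2f}% per year")  # 1.50% per year
```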

Table 1 ranks the top and bottom ten states on average annual net migration rate for the April 2020 to July 2025 period, encompassing virtually all of the pandemic.

Rank  State            Rate     Rank  State           Rate
1     Idaho            1.52%    41    Rhode Island    –0.18%
2     South Carolina   1.48%    42    Maryland        –0.44%
3     Montana          1.10%    43    New Jersey      –0.48%
4     Delaware         1.08%    44    Massachusetts   –0.52%
5     North Carolina   0.91%    45    Louisiana       –0.62%
6     Tennessee        0.85%    46    Alaska          –0.66%
7     Maine            0.83%    47    Illinois        –0.71%
8     Florida          0.83%    48    Hawaii          –0.82%
9     Arizona          0.79%    49    California      –0.86%
10    Nevada           0.62%    50    New York        –1.09%

Table 1: State Average Annual Net Domestic Migration Rates, 2020–2025

Some of these rates are quite large! New York, for example, is losing fully one percent of its population to other states every year, on average. At the other extreme, Idaho, South Carolina, Montana, and Delaware are each gaining more than one percent of their population per year, on average, from movers arriving from other states.

The only Democratic-leaning states in the top 10 are Delaware and Maine, and the only Republican-leaning states in the bottom 10 are Louisiana and Alaska. When we look at these exceptions, unusual levels of freedom stand out. Louisiana is quite low on freedom for its region, #31 overall according to the Ruger-Sorens index of economic and personal freedom. The only Deep South state worse than Louisiana is its neighbor, Mississippi (#40). Mississippi, not coincidentally, was the only other Deep South state to experience net domestic out-migration over that five-year period (–0.15 percent per year).

While Delaware and Maine score low on freedom (#44 and #43, respectively), both were much higher on freedom relatively recently, and even now they score a lot better than New Jersey (#47), California (#48), Hawaii (#49), and New York (#50). Delaware was #15 on freedom as recently as 2001 and only fell consistently into the bottom 10 from 2017 on. Maine fell into the bottom 10 for the very first time in 2020 and is still #3 on personal freedom alone.

More sophisticated evidence from our study suggests that both economic and personal freedom independently drive in-migration.

Paul Krugman once criticized our findings on the grounds that housing costs supposedly explain migration better than freedom. (The lefty Center on Budget and Policy Priorities more recently made a similar claim.) But that’s wrong. Now, housing costs certainly do help explain state-to-state migration, and migration in turn affects housing costs, but our results stand up even when we control for overall state-level cost of living (the lion’s share of which reflects housing costs).

How else do you explain why Louisiana and Mississippi do so poorly? Their housing costs are low. And the parts of New York that have seen the most out-migration are upstate, where housing costs are also low. New York City and Long Island have held up better. Illinois isn’t super-expensive either, and the affordable low-freedom states of New Mexico, Minnesota, and Nebraska are also losing people.

The strongest evidence might come from changes over time. West Virginia has had one of the biggest increases in freedom in recent years as a result of its partisan shift from heavily Democratic to heavily Republican. As its freedom has risen, it’s flipped from a net out-migration state to a net in-migration state (Figure 1).

Figure 1: Freedom and Net Migration in West Virginia Over Time

I also checked out Wisconsin, the #1 state for increase in freedom since 2010, largely as a result of Scott Walker’s governorship and the transformative changes Republicans made in what was, historically, the first state to adopt an income tax. Lo and behold, the same pattern emerges (Figure 2). Wisconsin’s big increase in freedom has been followed by a turnaround in its migration fortunes.

Figure 2: Freedom and Net Migration in Wisconsin Over Time

It’s a similar story with New Hampshire. New Hampshire’s always been high on freedom, unlike West Virginia and Wisconsin, but it’s also increased a great deal in recent years because of the efforts of the Free State Project. While the FSP has been around for a long time, people who moved to New Hampshire for the movement first took office in significant numbers in 2011 after the 2010 wave election. And their influence really built after about 2017, with the election of Chris Sununu as governor. Thus, New Hampshire’s increase in freedom has plausibly been the result of an exogenous political change, like the revolutions that have happened in West Virginia and Wisconsin. And we see a similar result (Figure 3). Again, New Hampshire always had a lot of freedom and was a state people wanted to move to, but those differences have strongly reasserted themselves in the 2020s.

Figure 3: Freedom and Net Migration in New Hampshire Over Time

What about states that have gone the other way and become less free as a result of exogenous political changes? Colorado and Virginia come to mind as states that have moved from the Republicans toward the Democrats, but Virginia did elect a Republican governor recently, and Colorado and Virginia merely had slightly lower-than-average increases in freedom between 2010 and 2020. Still, their migration rates fell below historic norms, with Virginia actually losing population on net to other states since 2020.

The two states that lost the most freedom between 2010 and 2020 were Hawaii and Oregon. We’ve already seen that these states have poor migration records. Figure 4 shows how freedom and migration have changed over time in Oregon.

Figure 4: Freedom and Net Migration in Oregon Over Time

The relationship isn’t perfect, because people moved to Oregon in greater numbers in the 2010s even after freedom had fallen. But by the 2020s, people had noticed – or so I would surmise. Freedom kept falling, and Oregon’s usual flood of in-migrants not only slowed to a trickle but actually reversed to an outflow. It’s noteworthy that this happened even as the state of Oregon and the city of Portland made substantial reforms to increase the supply and reduce the cost of housing – reforms I support, but which are not enough, on their own, to turn around a state hobbled by damaging taxes and regulation.

The latest Census data confirm what most of us knew all along: state policy regimes matter, and Americans prefer to live in states that offer more economic and personal freedom. Legislatures and governors, take notice!

Prediction markets seem to be everywhere these days. Now you can bet not only on the outcomes of sporting events, but also on elections, wars, and natural disasters. Yet many people react to these markets with disgust. For instance, in a recent article in Jacobin, political commentator David Moscrop calls them “demented” and “grotesque.”

The main moral objection to prediction markets seems to be that it’s wrong to profit from someone’s misfortune. And intuitively there does seem to be something immoral about raking in thousands of dollars because you correctly predicted that a hurricane would hit a particular city or a particular war would break out, resulting in tremendous amounts of suffering. As Moscrop puts it, “Bettors will hold financial stakes in particular outcomes, including some of the most heinous events imaginable. It’s a fundamentally cynical and dehumanizing turn.” But as natural as the gut-level unease with prediction markets is, we shouldn’t trust it. Prediction markets are both useful and morally benign.

Prediction markets are useful precisely because they incentivize accurate forecasting. The prospect of making or losing money gives participants a strong reason to seek out new information and to process it in an unbiased way. Think about sports betting. When you don’t have any money on the line, you probably indulge in wishful thinking that your favorite team is going to win this week, even though they’re 14-point underdogs. But if you suddenly stood to lose $1,000 by being wrong, you’d quickly start to think more rationally about the team’s chances.
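
The incentive is easy to see in expected-value terms. On a typical event-contract exchange, a binary contract pays $1 if the event occurs and nothing if it doesn’t; the sketch below uses made-up numbers to show why only an honest probability estimate, not a wishful one, tells you whether a contract is worth buying:

```python
# Expected profit from buying one binary contract that pays $1 if the event
# occurs and $0 otherwise. The price and probabilities below are made up.
def expected_profit(price: float, probability: float) -> float:
    """Expected profit per contract, given your subjective probability of the event."""
    return probability * (1.0 - price) - (1.0 - probability) * price

price   = 0.45  # market asks 45 cents per contract
wishful = 0.70  # the probability you'd like to believe
honest  = 0.40  # your best unbiased estimate

print(f"EV using wishful estimate: {expected_profit(price, wishful):+.2f} per contract")  # +0.25
print(f"EV using honest estimate:  {expected_profit(price, honest):+.2f} per contract")   # -0.05
```

If the honest estimate is the right one, the wishful bettor loses money on average, which is exactly the discipline money on the line imposes.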

In short, prediction markets tend to deliver accurate forecasts for the simple reason that they reward accuracy and punish inaccuracy. And at the risk of making an obvious point, accurate forecasts are useful because people plan their lives around expectations about what the future holds. For instance, if you live in an area where a hurricane will hit or a war will start, that’s important information for you to know. It could quite literally be lifesaving. 

This point also helps explain why you shouldn’t accept the objection that prediction markets are morally bad because they enable people to profit from catastrophes. As the ethicists Jason Brennan and Peter Jaworski have noted, many people routinely make money, without using prediction markets, by accurately predicting that bad things will happen, and no one finds that immoral. Meteorologists make money forecasting hurricanes, epidemiologists make money forecasting disease outbreaks, political analysts make money forecasting electoral outcomes and wars, and so on.

The reason no one thinks these forecasters are doing something morally wrong is that, as already mentioned, accurately predicting bad events is actually beneficial; accurate predictions help people prepare for them. (This should go without saying, but the fact that someone earns money by being right about something bad happening doesn’t mean they caused it or wanted it to happen.) Maybe the action of the bettor feels different than the action of the meteorologist, but morally, it’s the same. Someone who correctly predicts hurricanes and profits by getting a job with the Weather Channel “makes money from a catastrophe” just as much as someone who correctly predicts hurricanes and profits by placing bets on Kalshi.

You might worry that prediction markets incentivize what is in effect insider trading—they reward people for acting on information others don’t have. But that’s a feature, not a bug. If you see someone place a huge bet on an outcome that seems highly unlikely, that suggests that the outcome is more likely than you thought; maybe someone has inside information that makes them confident it’s going to happen. You don’t have to act on this signal, of course, but at least you have it in case you want to.

Critics of prediction markets also overlook the possibility that the money bettors earn can be used to mitigate the harms of the very disasters they predict. Suppose someone correctly predicts that a hurricane will make landfall and profits from a prediction market as a result. They now have additional resources that can be used to mitigate the suffering caused by the hurricane. They can donate to emergency relief, help fund rebuilding efforts, support local clinics, or contribute to flood mitigation projects. So if you’re concerned that a catastrophe is likely to occur, making an accurate prediction and allocating your winnings to help those harmed by the catastrophe is far more productive than simply watching it unfold.

Lastly, consider the objection that using prediction markets isn’t immoral, but self-destructive. These markets allow people to make risky bets that they might lose and, in turn, put them in serious financial straits.

Note, though, that it doesn’t follow from the fact that prediction markets enable people to take unwise financial risks that government officials should ban them. Suppose your neighbor asks you to make a large investment in her startup producing perpetual motion machines. That investment would be an unwise financial risk to say the least. Nevertheless, government officials shouldn’t intervene because you have the right to take that risk. It’s your money after all.

Prediction markets don’t cause or celebrate disasters, nor do they force people to gamble recklessly. Instead, they allow people to test their predictions in a system that rewards them for being right and penalizes them for being wrong. The result is accurate information that others can use to help plan their lives. If anything, that’s a positive moral good.