
In 1895, Greek journalist Vlasis Gavriilidis traveled to Cambridge University seeking advice from three leading economists — Alfred Marshall, Henry Sidgwick, and John Neville Keynes — on the most urgent economic problem facing his country: a collapsing market for currants (Corinthian raisins), which then accounted for roughly half of all Greek exports. 

Overproduction, fueled by earlier government policies and a temporary export boom, threatened widespread rural unemployment and poverty. The economists offered divided counsel. That ambiguity gave organized currant growers the opening they needed to lobby successfully for a price-support system — a “temporary” intervention that promised stable incomes for growers while shifting costs onto taxpayers and distorting the broader economy. 

The Greek currant crisis of the 1890s offers enduring lessons in policy hubris, the stubborn longevity of supposedly temporary measures, and the lasting damage caused by interfering with market incentives. 

Boom, Bust, and the Roots of Overproduction 

Currant cultivation in Greece had ancient roots, but the crisis was modern. French vineyards were devastated by the phylloxera pest in the 1860s and 1870s, creating massive demand for Greek currants to produce “raisin wine.” This surge encouraged rapid expansion. 

The First Agrarian Reform of 1871 had distributed national lands (former Ottoman holdings) in small plots to create a broad class of peasant proprietors. Many new landowners, often with credit secured against their holdings, rushed to plant currants — the most profitable crop at the time. Currants quickly became Greece’s dominant export. 

Then the boom reversed. French vineyards recovered. French producers, noting consumer preference for the taste and shelf-life of currant-based wine, successfully lobbied for the Méline Tariff of 1892 and the Turrell Act of 1896, which effectively shut Greek currants out of the French winemaking market. 

At the same time, high-quality, consistent California raisins entered global markets as strong competitors. The result was a sharp and sudden price collapse. As Patras merchant Theodoros Burmuli warned in 1899 in the Economic Journal, prices fell to the bare cost of production, threatening “disastrous and far-reaching consequences” for the Greek economy. 

The Retention Scheme and the Cambridge Debate 

Burmuli advocated a state retention system: exporters would be required to deliver 10–15 percent of their currants to a government depot (initially for supposed domestic use), artificially restricting supply to prop up prices and shield small growers from market reality. 

A group of anti-retentionists — largely free-trade liberals — opposed the plan. They argued it would distort markets, encourage even more overproduction, impose heavy administrative and fiscal costs, and fail to address the underlying imbalance between supply and demand. They also warned that any “temporary” program would prove difficult to end. 

The debate reached Cambridge in 1895. Sidgwick and John Neville Keynes leaned toward supporting the retention idea, with Keynes suggesting it “might prove temporarily effective” in easing growers’ distress. Alfred Marshall opposed it, though the exact record of his reasoning has not survived; his broader body of work aligns clearly with the anti-retentionist emphasis on allowing prices to adjust and resources to reallocate. 

The divided expert opinion helped the well-organized currant growers prevail politically. Greece enacted the retention law in 1895 as a supposedly short-term measure. 

What Actually Happened 

The results vindicated the critics. In his 1906 Economic Journal article “The Currant Crisis in Greece,” economist Andreas Andréadès documented how the program backfired. By guaranteeing inflated prices, it subsidized rather than discouraged production. Growers planted more vines, including on marginal land. Terraced hillsides and drained wetlands were converted to currants long after global demand had shifted. 

The measure was anything but temporary. Renewed annually at first, it was reorganized in 1899 as the “Currant Bank” and extended for another decade. Variants and references to retention schemes lingered into the early 1930s. 

The government accumulated debt and stockpiles of unsold currants. Andréadès concluded that the real crisis was no longer the initial overproduction but the intervention itself. By interfering with the law of supply and demand, policymakers turned a painful but localized adjustment into a prolonged national problem. He wrote: “Consequently, the only result [of the program] was to render permanent a crisis which could have been only temporary if the ‘economic laws’ had been respected.” 

Public Choice in Action 

Classic public-choice dynamics explain why the program persisted. As the artificial price support was capitalized into land values, farmers came to depend on the policy’s continuation. Any attempt to repeal it would impose visible, concentrated losses on a politically powerful group, while the costs (higher taxes, misallocated resources, and slower economic adjustment) were diffuse and borne by the broader economy and future generations. 

Greece’s heavy reliance on a single crop left the country economically fragile, and the fiscal burden of the scheme contributed to its chronic debt difficulties. 

Lessons for Today 

The nineteenth-century Greek currant saga remains highly relevant in an era of widespread agricultural subsidies, “temporary” assistance programs, and industry bailouts. 

  1. Price signals matter. When demand falls or competition rises, the healthy response is reduced production and reallocation of resources — not government price floors that delay inevitable adjustments and lock capital and labor into unproductive uses. 
  2. “Temporary” support rarely is. Programs sold as short-term relief tend to become entrenched when concentrated interest groups benefit and develop a stake in their continuation. 
  3. Concentrated benefits, diffuse costs. Vocal, organized groups often succeed in capturing gains for themselves while spreading the bill across taxpayers and the wider economy — frequently at the expense of long-term growth and resilience. 

Greece’s currant crisis shows that good intentions and political expediency can transform a manageable market correction into decades of distortion. Policymakers tempted by price supports or industry rescues would do well to remember how a “temporary” Greek retention scheme outlived its justification by generations and left the economy weaker for it. 

Economics is a peculiar science. On the one hand, it is the queen of the social sciences and offers a powerful logic for understanding the world. On the other, as Henry Hazlitt put it, it is haunted by more fallacies than any other study known to man. People simply love to misunderstand economics.

Ironically, this presents a profit opportunity to those who choose to exploit people’s willing ignorance, especially if they are economists. They can then present popular fallacies as seemingly insightful critiques or even novel takes. Because they are from the inside — one of “them” — people are willing to take their word for it. 

University College London economist Mariana Mazzucato is a case in point. She has made a name for herself writing books and consulting policymakers on how the State can be used to produce “free lunches.” Books like The Entrepreneurial State and Mission Economy argue that the State can be an effective low- or no-cost shortcut to prosperity and that it should therefore be used liberally by policymakers. 

Any economist worth his salt would naturally object that there are no “free lunches.” Nothing is without opportunity cost, which is why we must economize. But this “dismal” view, albeit true, is often rejected by those who wish to believe in a mystical world in which money trees exist and scarcity does not. Unfortunately, Mazzucato and others are happy to provide rationalizations for those who don’t understand basic economics.

In her new book The Common Good Economy, to be released this fall, Mazzucato, per the blurb, “builds on her visionary ideas of the entrepreneurial state and mission-oriented policies to establish a new theory of the common good, one which allows governments and businesses to develop purposeful economic relationships, creating value and building spaces where human flourishing can happen.” In other words, it is more of the same. The State, assumed glorious, both can and should actively interfere in the economy and beyond because businesses cannot be trusted to produce what people actually want.

It is a curious argument, especially when considering the nature of voluntary exchange and market entrepreneurship. In markets, entrepreneurs can only earn profits by satisfying their customers — on the customers’ terms. They compete by creating as much value as possible, but must bear the uncertainty of their speculation because there is no way of knowing what consumers appreciate until after the goods are already produced and available for sale.

The State, in contrast, is not subject to such approval. It does not need to produce value and does not even need to economize on resources. It has the power to take and need not ask permission. This creates serious incentive problems and leaves the State operating in the dark, unable to know — or even reliably estimate — how resources are best used. Lacking a conception of actual value, which in the market is determined for and by consumers, and not needing to economize, what are the odds that the State will produce something good? And what are the odds that it will be effectively produced?

The answer is that we cannot expect the State to do anything effectively — other than waste resources. Any reasonable analysis of opportunity costs of the State’s undertakings should find that they are higher than the supposed value they create. Even relying only on the “seen” as captured in official statistics, the State’s investments have dubious returns. And despite Mazzucato’s claims, there is certainly no lack of public “investments.” As McCloskey and Mingardi note in The Myth of the Entrepreneurial State, which illuminates the limitations of Mazzucato’s claims, “in the past century government expenditure as a percentage of GDP has drifted up towards 50 percent.”

Basic economic understanding and research are of no relevance to Mazzucato. She has already attempted to redefine the very concept of value to serve her political purposes in the highly confused The Value of Everything. And she keeps finding ways to argue that politically directed investments not only outperform private ones but conjure value from nothing.

Many more grounded economists have pushed back on the claims by Mazzucato and others. The Entrepreneurial State was debunked. And so was Mission Economy. Perhaps this is why the profiteers keep inventing new terms for the same basic fallacy. The Common Good Economy will be no different in this regard. It will probably sell well, however — and further undermine economic understanding in the process.

Energy price increases are hitting Americans hard. In the March 2026 Everyday Price Index, my colleague Pete Earle noted that the Iran war drove up energy prices, with adjacent industries feeling the impact, while core inflation remained muted. These price increases resemble an energy shock rather than broad-based inflation that might concern the Fed.  

For ordinary Americans, however, Earle comments, “consumers are first encountering the shock in the most visible and psychologically powerful places — gas stations, travel, and transportation-linked expenses — while the rest of the basket remains relatively stable.” 

The “visible and psychologically powerful” price increases have many policymakers rightly concerned. Both Indiana and Georgia have enacted state gas tax holidays, while Utah will implement one from July through December. Several states are considering similar measures, and federal lawmakers have proposed a nationwide gas tax holiday. 

Concerns about affordability are genuine, but this is a case where good intentions do not guarantee good outcomes. Our present strains at the pump are due to limited supply. Pausing gas taxes will not increase the supply of gas. Instead, policymakers should focus on regulatory reforms that lower energy production costs and reduce bottlenecks. 

Reasoning from the Pump Price 

Prices act as a signal that informs buyers and sellers how much of a good or service is available and how much others want that good or service. Scott Sumner’s insight, “people should never reason from a price change, but always start one step earlier—what caused the price to change” is essential here.  

For a gas tax, the legal incidence (who is legally obligated to remit the tax) falls on wholesalers or retailers, while the economic incidence (who actually bears the cost) falls largely on consumers. Consumer demand for gasoline is relatively inelastic in the short run, meaning people will forgo other spending before reducing fuel consumption. 

When prices rise due to a supply shock, consumers continue purchasing gasoline. A tax holiday can, therefore, increase demand at precisely the worst moment. Evidence from past tax holidays and disaster responses shows that such policies often shift consumption, but do not provide lasting relief. 
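The incidence logic above can be made concrete with the standard textbook pass-through approximation, where the share of a tax change reaching consumers is Es / (Es + |Ed|), the ratio of the supply elasticity to the sum of supply and (absolute) demand elasticities. The elasticity values below are illustrative assumptions, not estimates for any actual fuel market:

```python
def pass_through(supply_elasticity: float, demand_elasticity: float) -> float:
    """Share of a per-unit tax change that shows up in the consumer price.

    Standard competitive-market approximation: rho = Es / (Es + |Ed|).
    A rho near 1 means the pump price moves almost one-for-one with the
    tax; a low rho means sellers capture most of a tax cut.
    """
    return supply_elasticity / (supply_elasticity + demand_elasticity)

# Normal conditions: supply is flexible (Es = 2.0), demand inelastic (|Ed| = 0.25),
# so most of a tax holiday reaches consumers.
normal = pass_through(2.0, 0.25)   # ≈ 0.89

# Supply shock: refining and distribution capacity bind, so supply is
# inelastic (Es = 0.2) and less than half of the cut reaches consumers.
tight = pass_through(0.2, 0.25)    # ≈ 0.44

print(f"pass-through, normal supply: {normal:.2f}")
print(f"pass-through, tight supply:  {tight:.2f}")
```

The arithmetic matches the article’s point: the tighter supply conditions are, the smaller the share of a gas tax holiday consumers actually see.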

When refining capacity, inventories, or distribution networks tighten, the benefits of tax cuts dissipate. In those conditions, tax holidays provide less relief precisely when relief is needed the most.  

Gas tax holidays must be judged by their outcomes. Understanding the cause of price increases helps policymakers avoid responses that are ineffective or do further damage. 

What Can Be Done? 

The good news is that there are reforms federal and state policymakers can pursue to help the American people. Avoiding gas tax holidays prevents additional harm; beyond that, policymakers can focus on getting government out of the way through regulatory reforms that improve supply. 

Policymakers should reform regulations that currently constrain oil and gas production and create supply chain bottlenecks. Federal actions include accelerating leasing, streamlining permitting processes, and reining in executive discretion over permitting, which allows the President to revoke permits that go against a given administration’s preferred energy agenda. States can roll back renewable portfolio standards to reduce compliance costs and ease permitting bottlenecks. They can also exit regional cap-and-trade programs to lower costs often passed to consumers. 

Additionally, with the Greenhouse Gas Endangerment Finding rescinded, now is the time to conduct regulatory audits to assess the costs and benefits of regulations. Policymakers could enact regulatory budgets that cap the number of regulations in force at any given time. Finally, they should consider sunset requirements that remove regulations after a certain period unless explicitly renewed by the legislative branch.  

The Problem Isn’t Gas Prices — It’s Supply

Gas tax holidays might be politically attractive, but they neither expand supply nor ease supply chain constraints. They can even worsen shortages by increasing demand. 

A more effective approach focuses on reducing regulatory barriers and improving energy market flexibility. This approach can address some of the root causes of price volatility during and after a supply shock. 

Prices work best when they are treated as signals, not problems to suppress. By understanding how and why prices change and minimizing interference in the price system, policymakers can avoid doing unintentional harm.

Recent remarks by Elon Musk have reignited debate over the economic implications of artificial intelligence, following a widely circulated video clip in which he predicts a future of “universal high income” funded by direct government payments. In the clip — shared broadly on X and quickly amplified across financial media — Musk argues that AI-driven production will expand so rapidly that it will outpace growth in the money supply, rendering such payments non-inflationary and potentially even deflationary. As he puts it, if goods and services grow faster than money, prices should fall, even as governments distribute cash to households. The claim builds on his longstanding advocacy of income support in an AI-disrupted labor market, but extends it into a more explicit monetary argument: that large-scale issuance of money need not distort prices if productivity growth is sufficiently strong.

It is a striking claim, and one that arrives at a moment when Musk’s commercial interests are increasingly tied to the perceived scale and inevitability of the AI transformation. With his artificial intelligence initiatives becoming more deeply integrated into the broader SpaceX ecosystem — and with expectations of a major capital markets event on the horizon — there is a clear incentive to frame AI not merely as an incremental innovation, but as a system-altering force capable of reshaping the global economic landscape. That does not make the vision wrong. But it does suggest that rhetoric surrounding abundance, inevitability, and frictionless adjustment should be read, at least in part, as a forward-looking narrative — an attempt to describe not just what may happen, but what investors and the public should come to expect.

The economic reasoning underlying the claim, however, is where the argument begins to break down. Issuing money — even in a high-productivity environment — does not create income in any real sense. It redistributes claims on output. Goods and services must still be produced. The act of distributing purchasing power does not add to that production; it just reallocates access to it. Even if AI dramatically increases the total quantity of goods available, the path by which money enters the system matters. New money is never distributed evenly or instantaneously. It arrives through specific channels — government transfers, financial institutions, asset markets — and those entry points shape how prices adjust across sectors.

This is why the idea that inflation or deflation can be understood as a simple ratio of aggregate output to the money supply is misleading. Prices are not set in the aggregate; they are relative, reflecting the interplay of supply, demand, expectations, and timing. When new money is introduced, it affects some prices before others, altering incentives and redirecting resources. Some sectors expand more rapidly than they otherwise would, while others are effectively taxed by rising input costs or shifting demand. These relative price movements are not noise — they are the mechanism by which the economy coordinates activity. Distort them, and the structure of production itself becomes misaligned.
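The aggregate arithmetic behind the claim can be sketched with the textbook quantity equation, MV = PQ. The figures below are purely illustrative assumptions, chosen to show both why the aggregate ratio appears to vindicate the claim and why it conceals the relative-price effects described above:

```python
# Quantity equation: M * V = P * Q  =>  price level P = M * V / Q
M, V, Q = 100.0, 2.0, 50.0       # illustrative money stock, velocity, real output
p0 = M * V / Q                   # initial price level = 4.0

# Musk-style scenario: AI lifts output 10% while money grows only 5%.
p1 = (M * 1.05) * V / (Q * 1.10)
print(f"aggregate price level: {p0:.3f} -> {p1:.3f}")  # falls ~4.5%

# But the aggregate hides where the new money enters. Suppose transfers hit
# consumer services first while tradable-goods prices fall sharply
# (sector figures chosen so the weighted average matches the aggregate):
weights = {"services": 0.5, "goods": 0.5}
changes = {"services": +0.05, "goods": -0.141}
avg_change = sum(weights[s] * changes[s] for s in weights)
print(f"average sector price change: {avg_change:+.3%}")  # ≈ -4.5%
# Same aggregate outcome, yet relative prices — and the investment
# signals they carry — have shifted sharply between the two sectors.
```

The aggregate ratio "works" in this toy example, but the sector breakdown is the article’s point: two economies with identical money and output growth can have very different relative-price structures depending on where the new money lands first.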

The role of monetary policy does not disappear in such a world; it may become more subtle, but no less important. If income transfers are financed by sustained monetary expansion, interest rates and credit conditions will still respond. Artificially abundant liquidity can suppress borrowing costs and encourage investment projects that appear viable under those conditions but are not supported by underlying resource availability or consumer preferences. (Indeed, these conditions may already be manifesting.) Over time, this can lead to overextension in some sectors and underinvestment in others — a familiar pattern that has historically culminated in corrections when financial conditions tighten or expectations shift.

What is notable is how closely these latest remarks mirror Musk’s earlier statements about an AI-driven future of “sustainable abundance.” For years, he has argued that advances in automation would so dramatically expand productive capacity that scarcity itself would fade as a central economic concern. The current formulation simply extends that logic: if scarcity recedes, then distributing money becomes a largely administrative exercise, unmoored from traditional constraints. But this is precisely where the conceptual error lies. Technology can expand what is possible — it can shift the frontier outward — but it does not eliminate the need for intertemporal coordination, nor nullify the importance of how resources are allocated.

A substantial expansion in productive capacity is entirely within reach. Advances in AI could lower costs across wide swaths of the economy, streamline production, and unlock entirely new forms of output. But greater plenty does not eliminate the need for coordination, nor does it neutralize the role of money. Prices, investment decisions, and income flows are still shaped by institutional frameworks and incentive structures, and those forces continue to operate regardless of how quickly output is growing.

If the coming decades deliver anything like the transformation being envisioned, its success will depend not only on technological capability but on how well economic systems adapt to it. Producing more with fewer inputs is a powerful development, but it does not negate the importance of sound signals in markets or disciplined allocation of capital. Expanding the money supply alongside rising output does not bypass these considerations; it interacts with them, and if handled poorly, can obscure rather than clarify the information that markets rely on. If nothing else, seeing the convergence of the thinking of generational entrepreneur Elon Musk with that of NYC Mayor Zohran Mamdani confirms that economists, myself included, need to do a far better job of communicating basic economic concepts.  

In 1959, Milton I. Roemer — a physician and pioneering health services researcher at UCLA — published a study that would influence American healthcare policy for generations. Examining hospital utilization patterns, Roemer observed a striking correlation in insured populations: the availability of more hospital beds was associated with greater numbers of hospital days used. “A built bed is a filled bed,” he concluded. This insight, known as Roemer’s Law, posited that supply tends to create its own demand. In the context of third-party payment systems, it implied that unchecked expansion of facilities would fuel wasteful overcapacity and drive escalating costs through supplier-induced demand.

This observation became the intellectual foundation for Certificate of Need (CON) laws. The core implication was that supplier-induced demand would inevitably lead to inefficient duplication and wasteful overcapacity. Roemer, a staunch advocate of social medicine, viewed the Soviet Union as embodying the healthcare system of the future — one oriented more toward equity.

CON laws are state regulatory mechanisms that require healthcare providers — hospitals, ambulatory surgical centers (ASCs), nursing homes, and others — to obtain explicit government approval before making major capital investments, expanding services, or even purchasing certain equipment.

In practice, a state health planning agency reviews applications based on bureaucratic formulas for “community need,” projected utilization, and impact on existing providers. If approved, the CON acts as a legal permission slip; if denied, the project dies. These laws do not improve safety or clinical quality — that is handled by separate licensing, accreditation, and Medicare certification processes. Instead, they function as artificial barriers to entry in the healthcare marketplace.

Origins of CON Laws

CON laws trace their origins to the 1960s, with New York enacting the first statute in 1964. The concept gained national momentum amid concerns over rising healthcare expenditures under cost-plus reimbursement systems. The federal government amplified the approach through the National Health Planning and Resources Development Act of 1974 (NHPRDA), which conditioned certain federal funding on states establishing CON programs.

By the early 1980s, nearly every state except Louisiana had implemented some form of CON review. Congress repealed the federal mandate in 1986 after recognizing its shortcomings, yet as of 2026, approximately 30–35 states, including Alabama, retain active CON regimes.

Practical Application of CON Laws

Consider the practical implications for opening an ambulatory surgical center in a CON state. A multidisciplinary group of physicians — orthopedists, neurosurgeons, gastroenterologists, and pain specialists — identifies unmet demand in their markets: protracted wait times for outpatient procedures, higher costs in hospital outpatient departments (often 30–50% above ASC rates for equivalent cases), and opportunities for same-day discharge with superior patient experience. The physicians recognize this unmet demand because they are actively caring for patients.

Private capital is secured, a facility is designed emphasizing operational efficiency, infection control, and specialization, and a CON application is submitted to the appropriate board. Approval hinges on demonstrating conformity with a rigid state health plan that employs formulaic metrics: population-to-provider ratios, historical utilization rates, and projected demand that frequently lag behind actual market dynamics and technological shifts.

Incumbent hospitals, whose outpatient margins subsidize other operations, routinely intervene as opponents. Formal protests trigger adversarial public hearings, extensive discovery, and protracted legal proceedings. Applicants incur legal and consulting fees often exceeding hundreds of thousands of dollars. The review board — frequently influenced by representatives of existing providers — deliberates for 12 to 24 months or longer. Even conditional approval may impose geographic or service-line restrictions. During this interval, patients endure higher costs and delays, surgeons sacrifice productivity, and scarce capital remains unproductive. This process exemplifies not rational planning, but regulatory capture and rent-seeking. Political allocation supplants consumer sovereignty.

Justification For CON Laws

Advocates of CON laws advance a primary economic rationale: preventing duplicative investments and “overcapacity” that would allegedly inflate costs through underutilized fixed assets and supplier-induced demand. Without regulatory gatekeeping, they contend, uncoordinated entry would splinter patient volume, raise unit costs, and exacerbate maldistribution of services.

Beyond cost control, CON regulation is defended as essential for safeguarding access in underserved — particularly rural — areas. Without government gatekeeping, new entrants (especially efficient ambulatory surgery centers) would “cream-skim” the most profitable cases and commercially insured patients, leaving incumbent hospitals burdened with a disproportionate share of complex, high-cost, low-margin, and uncompensated care. This fragmentation of volume would supposedly raise unit costs for remaining providers, threaten the financial viability of safety-net and rural hospitals, undermine cross-subsidization of essential services (such as emergency and trauma care), and ultimately exacerbate maldistribution of services, harming the very populations CON laws purport to protect.

This rationale is explicitly articulated by the National Conference of State Legislatures (NCSL), which states that CON programs “primarily aim to control health care costs by restricting duplicative services and determining whether new capital expenditures meet a community need,” while also seeking to ensure access for “historically underserved communities, such as rural areas.” Similar arguments appear in state-level policy analyses and hospital association positions.

Economic Theory Does Not Support CON Laws

Friedrich Hayek’s seminal 1945 essay, “The Use of Knowledge in Society,” illuminates the core epistemic failure. Economic knowledge is not centralized or articulable in a form readily aggregated by a planning board in any state capital; it is dispersed, tacit, and contextual — embodied in the localized observations of physicians, administrators, investors, and patients regarding shifting demographics, technological feasibility (e.g., minimally invasive techniques enabling safer ASC procedures), and revealed preferences via willingness to pay. The price system serves as a “telecommunications” mechanism that synthesizes this fragmented information into actionable signals far more efficiently than any bureaucratic formula. CON laws supplant these dynamic signals with static, politically mediated projections, inevitably producing misallocation: persistent shortages where entrepreneurial insight perceives opportunity, and protected excess where incumbents lobby effectively.

Milton Friedman extended this critique to occupational and market-entry licensing, arguing that such barriers function primarily as protectionist devices that restrict supply, elevate prices, and shield established interests from competition. CON regimes exemplify this at the facility level: by limiting entry, they enable incumbents to exercise greater market power, sustaining higher reimbursement rates and operational inefficiencies. As is so often the case, Friedman’s analysis is corroborated by the empirical evidence.

Studies document that CON states exhibit fewer ASCs per capita; repeal of ASC-specific CON requirements has been causally linked to 44–47% statewide increases in ASC supply (and 92–112% in rural areas), without corresponding rises in hospital closures or service reductions. Broader analyses reveal associations with higher variable costs in acute-care hospitals, elevated per-service and per-capita spending in many specifications, and slower adoption of cost-saving innovations. Competition disciplines providers toward value — ASCs routinely deliver equivalent or superior outcomes for appropriate cases at substantially lower cost precisely because they must attract patients and surgeons on merit rather than regulatory fiat.

A colloquial illustration underscores the absurdity.

Envision applying CON logic to the fast-food industry in a growing Alabama suburb plagued by long lines at the lone Chick-fil-A. Entrepreneurs propose a new location, financed privately, promising faster service, consistent quality, and local jobs. Under a hypothetical “Certificate of Need for Fried Chicken,” they must persuade a state board that the community “requires” additional drive-thru capacity according to utilization formulas and population ratios. Existing chains directly or indirectly impacted by this — Burger King, McDonald’s — file vigorous objections, warning of “duplicative” capacity that would force price hikes to amortize their underutilized grills and parking lots. Months of hearings, expert testimony, and six-figure legal expenditures ensue. The board denies the application, citing sufficient “nugget utilization rates.” Customers endure persistent queues and elevated prices, innovation in menu or service models stalls, and consumer welfare suffers — all justified as preventing wasteful “overcapacity in poultry processing.” The satire exposes the folly: in competitive markets, entry and exit guided by profit-and-loss signals rapidly correct misallocations; suppressing them predictably harms the very consumers purportedly protected.

In healthcare, the consequences are graver, measured in delayed care, inflated expenditures, and forgone innovations. CON laws do not merely fail on their own terms; they invert the logic of markets, substituting political knowledge for the superior coordinating power of prices and voluntary exchange. Decades of evidence — from cross-state comparisons to difference-in-differences analyses of repeals — affirm that liberalization expands supply, moderates costs, and improves access without the predicted collapse of incumbent providers.

Repeal CON.

The Board of Trustees of the American Institute for Economic Research (AIER), one of the oldest and most respected nonpartisan economic research and educational organizations in the United States, dedicated to promoting classical liberal and free-market ideas, has appointed Dr. Samuel Gregg as its new President.

Gregg served as interim president after the previous incumbent, Dr. William Ruger, accepted the position of Deputy Director of National Intelligence in April 2025.

Terry Anker, Chair of AIER’s Board of Trustees, remarked,

We are delighted that Dr. Gregg has accepted the board’s invitation to serve as AIER’s next president. Since 2018, AIER has undergone a period of remarkable growth and expansion, much of which was led and driven by Will Ruger. He left an indelible mark upon the institute and its core mission of educating the general public, students, and policy makers on the value of free market principles and the ways in which they promote prosperity and a free society. I and the Board of Trustees believe that Samuel Gregg will bring a unique and proven combination of executive skills and scholarly achievement to AIER’s presidency as he leads AIER in its advancement of sound economic thinking in the marketplace of ideas.

“I’m honored by the trust that the Board of Trustees has placed in me to lead AIER at a time when principles of economic liberty, the rule of law, and other classical liberal commitments are under severe pressure,” said Dr. Gregg. “I’m committed to advancing these ideas to AIER’s target audiences throughout the United States and equipping our superb team with everything that they need to achieve this goal.”

Dr. Gregg added:

These are very challenging times for those who believe in free markets, limited government, and the free society. But I am confident that AIER will continue to take a leading role in making the case for economic liberty to its target audiences, and first and foremost to those everyday Americans whom AIER’s founder, Colonel Harwood, was especially committed to reaching.

AIER, which was founded in 1933 by economist and financial advisor Colonel Edward C. Harwood, is dedicated to promoting the ideas of personal freedom, free enterprise, property rights, limited government, and sound money.

The New York Times recently reported on a new research paper that finds that, as summarized by the Times, “the North American Free Trade Agreement and trade competition with Mexico led to earlier deaths for American factory workers.”

Specifically, the researchers found that, from NAFTA’s launch in January 1994 through 2008, mortality increased in those “commuting zones” in the continental United States that had a disproportionately large number of workers producing manufactured goods in competition with imports from Mexico. Especially hard hit in those commuting zones were men who, in 1994, were ages 25 to 44. Losing jobs as a result of the greater freedom of Americans to purchase imports from Mexico, manufacturing workers and members of their households in these hard-hit commuting zones became more likely to commit suicide, turn to drugs or alcohol, or otherwise suffer ill health that raised their chances of going early to their graves.

In short, NAFTA was deadly because NAFTA destroyed manufacturing jobs. It’s a tiny leap from this finding to the conclusion that free trade is very likely hazardous to the health of manufacturing workers and their families. And at least one of the paper’s three authors — University of Chicago economist Matthew Notowidigdo — made this leap when he told the New York Times that his research highlights an “underappreciated cost of globalization.”

The econometrics in the paper is genuinely impressive. I assume that the finding of increased mortality is accurate. But I dispute the conclusion that this rise in mortality can legitimately be said to be the result of the freeing of trade.

NAFTA Job Losses Compared to Total Job Losses

Let’s put NAFTA job losses into perspective.

The total number of jobs destroyed by NAFTA from 1994 through 2008 was minuscule compared to total job destruction over those years. The St. Louis Fed has data starting in December 2000 on total monthly layoffs and discharges — that is, for 97 of the 180 months covered by Notowidigdo et al.'s research. During those 97 months, an average of 1.9 million workers in America every month lost or were laid off from jobs they wanted to keep.

How much of this job destruction was caused by NAFTA? The Economic Policy Institute — an outfit hostile to NAFTA — estimates that over the course of NAFTA’s first 20 years, it destroyed a total of 700,000 jobs. Even assuming that all of those 700,000 jobs were destroyed in NAFTA’s first 15 years, that’s an average monthly job loss of only 3,900 — or 0.2 percent of the average total monthly layoffs and discharges during this period.

This picture hardly changes if we compare NAFTA job losses to only manufacturing-worker layoffs and discharges. On average, 194,000 manufacturing workers lost their jobs each and every month from December 2000 through December 2008. NAFTA job losses, therefore, were a mere 2.0 percent of all manufacturing-job losses in those years. Ninety-eight percent of manufacturing-job losses from December 2000 through December 2008 were caused by forces other than NAFTA.
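
The back-of-the-envelope arithmetic in the two comparisons above can be checked with a short script. All inputs are the figures cited in the text (EPI's 700,000-job estimate and the monthly averages as reported here), not fresh data:

```python
# Check of the NAFTA job-loss arithmetic, using only figures cited in the text.

epi_nafta_losses = 700_000         # EPI estimate over NAFTA's first 20 years
months = 15 * 12                   # assume all losses fell in the first 15 years
avg_monthly_nafta = epi_nafta_losses / months

total_monthly_layoffs = 1_900_000  # avg monthly layoffs/discharges, Dec 2000-2008
mfg_monthly_layoffs = 194_000      # avg monthly manufacturing layoffs, same span

print(f"NAFTA losses per month: {avg_monthly_nafta:,.0f}")  # 3,889 (~3,900)
print(f"Share of all layoffs: {avg_monthly_nafta / total_monthly_layoffs:.1%}")  # 0.2%
print(f"Share of mfg layoffs: {avg_monthly_nafta / mfg_monthly_layoffs:.1%}")    # 2.0%
```

Even under the most generous assumption (every NAFTA-attributed job loss crammed into the first 15 years), the monthly figure rounds to the 0.2 percent and 2.0 percent shares quoted above.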

NAFTA Job Losses Compared to Earlier-Era Losses of Manufacturing Jobs

The nationwide rate of manufacturing-job loss from 1994 through 2008 — the years studied by Notowidigdo, et al. — was lower than the nationwide rate of manufacturing-job loss before NAFTA was implemented. Specifically, from 1958 through 1980, an average of 1.6 percent of manufacturing workers were laid off each month. Yet from 1994 through 2008, on average only 1.3 percent of manufacturing workers were discharged or laid off. (I calculated this 1.3 percent average monthly rate of manufacturing-worker job loss using available data.) Although there are no data on manufacturing-job losses from 1981 through 1993, the comparison of 1994-2008 with 1958-1980 is nevertheless revealing: it shows a notably higher rate of manufacturing-job loss over an earlier long span of years than occurred during NAFTA's first 15 years, putting the NAFTA experience in meaningful historical context.

If it’s true that NAFTA’s destruction of manufacturing jobs resulted in an unusually high rate of mortality among manufacturing workers, it should also be true that manufacturing workers in those pre-NAFTA years were even more likely than were manufacturing workers in the years with NAFTA to commit suicide, turn to drink or drugs, or otherwise fall into life-draining despair.

Were they? I searched hard for evidence from that earlier era on the mortality linked to job losses of manufacturing workers, but (even with the help of AI) found none. Yet I’ve also never encountered any claims that manufacturing workers in the years 1958 through 1980 were unusually likely to suffer “deaths of despair” and other life-shortening calamities. The absence of barking by this particular dog is especially telling given that, compared to the NAFTA years, both the absolute number of manufacturing workers, as well as manufacturing employment’s share of total employment, were higher in those earlier years. My tentative conclusion, therefore, is that the blame for the increased mortality identified by Notowidigdo, et al., lies with something other than the loss of manufacturing jobs — and, hence, with something other than NAFTA. (I call my conclusion “tentative” because it’s possible that someone will uncover evidence from those pre-NAFTA years of high manufacturing-worker mortality — specifically, high mortality linked to job losses. But, again, I know of no such evidence.)

What If Manufacturing-Job Loss DOES Increase Mortality?

Let us, however, assume for the moment that evidence is uncovered showing that, in those pre-NAFTA years, mortality linked to job losses of manufacturing workers was indeed unusually high. Would such evidence salvage Prof. Notowidigdo’s conclusion that the rise in mortality reported in his paper is a “cost of globalization”?

No.

The reason is that the US economy in those earlier years was much less exposed to foreign competition than it was during the NAFTA years. (From 1958 through 1980, US goods imports averaged 1.8 percent of GDP annually; from 1994 through 2008, they averaged 4.6 percent.) Those earlier manufacturing-job losses were due overwhelmingly to rising productivity. Between 1958 and 1980, real output per manufacturing worker in the US doubled — a major reason why manufacturing employment as a share of total private-sector employment fell over those years from 34 percent to 25 percent (calculated by dividing total manufacturing employment by total private-sector employment). Even with NAFTA in place, rising worker productivity continues to be the chief source of manufacturing-job loss — accounting, according to Michael Hicks and Srikant Devaraj, for nearly 88 percent of such job losses from 2000 through 2010.

Even if manufacturing-job loss can legitimately be said to cause unusually high mortality among manufacturing workers, trade is only one source of such job loss, and a relatively minor source at that. Therefore, if one is to classify globalization as a cause of higher-than-usual mortality among manufacturing workers, one must also classify, as an even more significant cause of this mortality, labor-saving technology — and, indeed, any source of manufacturing-job loss.

Under these circumstances, singling out globalization as a source of unusually high mortality is not only misleading, but counterproductive. Doing so focuses the public’s and policymakers’ attention on a relatively insignificant source of avoidably high mortality while ignoring the chief source: rising worker productivity. If the loss of manufacturing jobs raises mortality — and if the government is intent on ensuring that manufacturing workers don’t fall into early graves — the government must prevent not only increased imports of manufactured goods, but also, and far more importantly, increases in manufacturing-worker productivity.

What politician or pundit will openly endorse such a policy?

Fortunately, there is no evidence that the productivity-driven loss of manufacturing jobs in the past caused a rise in mortality. And because even today freer trade destroys far fewer manufacturing jobs than do improvements in worker productivity, it's almost certainly incorrect to blame the job losses due to freer trade generally, and to NAFTA specifically, for any measured increases in manufacturing-worker mortality.

Whatever the Cause(s) of Higher Mortality, Free Trade Isn’t to Blame

So what are the likely causes of the rising mortality detected by Notowidigdo, et al.? To answer this question requires, as they say, further study. There are several candidates, however, of varying plausibility. These include:

  • Increased access to public and private welfare which enables people who lose jobs to remain unemployed longer, perhaps undermining their sense of self-worth.
  • Readier access to debilitating drugs, or reduced social stigma from using such drugs.
  • Increased occupational-licensing requirements which obstruct unemployed workers’ efforts to pursue new occupations.
  • The rise in land-use restrictions which raise the cost of moving to new locations with better job prospects.
  • A cultural change that either made the loss of manufacturing jobs more shameful than were such losses prior to NAFTA, or that drained unemployed manufacturing workers of the gumption possessed by previous generations of unemployed workers to actively search for new jobs.

Whatever the actual cause (or causes) of the rise in mortality, blaming NAFTA is incorrect given that it is only one of countless sources of job destruction, and a rather minor source. Even worse is leaping from a finding of rising manufacturing-worker mortality during NAFTA’s first 15 years to the conclusion that, for manufacturing workers generally, globalization is lethal.

Delaware Gov. Matt Meyer recently signed an executive order directing state and district agencies to work together and expedite permits for broadband and other infrastructure projects. The order aims to expand statewide internet connectivity and keep Delaware businesses competitive by reducing regulatory bottlenecks. It’s one of many state and federal initiatives to remove barriers to the deployment of next-generation broadband.

Delaware has the right idea. Reducing government overreach to unlock broadband’s potential won’t just deliver reliable, speedy and affordable internet while reducing the digital divide between our rural and urban communities. It will also support American leadership in cutting-edge data-intensive technologies, including AI, autonomous vehicles, and telemedicine, granting millions of Americans unparalleled access to economic, healthcare and educational opportunities. But policymakers still have more work to do.

In 2021, Congress voted to provide $42.5 billion to state and territory governments for deploying high-speed internet access through the Broadband Equity, Access and Deployment (BEAD) program, one of 16 federal initiatives dedicating more than $413 billion for broadband expansion. As the recent Minnesota daycare fraud illustrates, massive federal grants administered by states and localities create opportunities for waste, abuse, and inefficiencies as bureaucrats overseeing and spending taxpayer funds bear neither the risk of failure nor commercial reward for success. 

Ensuring BEAD-funded infrastructure projects meet their goals while shrewdly stewarding funds requires that governments repeal unwarranted regulatory hurdles while maintaining guardrails for accountability and public welfare. However, many states get this balance wrong. 

California requires AT&T to maintain expensive and outdated copper-wire landline networks that don't provide competitive broadband speeds and are susceptible to hacking and copper theft. Yet 99.7 percent of served Californians can access at least three alternatives — including mobile networks and voice over internet protocol (VoIP) delivered online. The copper-wire requirement diverts funds from building and maintaining high-speed broadband infrastructure. AT&T reports that maintaining such networks nationwide costs it $6 billion annually. The company recently received federal approval to retire 30 percent of its copper-wire networks, excluding California.

With fewer households opting for landline connections, these mandates should be confined to localities where landline is the only option and should be phased out as modern networks reach them. At least 20 states have abolished or are abolishing such mandates. They conflict with the Trump administration's commitment to “technology neutrality,” which makes satellite internet providers like Starlink and Amazon LEO eligible for BEAD funding as they offer a cost-effective alternative to fiber networks for many areas. Commendably, the FCC is considering a permanent rule that would streamline approvals for providers to discontinue copper networks. The agency also plans to scrap at least 18 other “outdated and obsolete” mandates regarding everything from telegraphs to phone booths, in order to cut red tape, expedite deployment and modernize networks.

Barriers to constructing and upgrading utility poles can also stymie broadband deployment. Maine, which received $50 million in BEAD funds, recently expanded a rule allowing towns to force removal and relocation of existing poles, creating uncertainty and costs for providers. Many of these poles are in rural areas that could benefit the most from expanded network access. Infrastructure providers must also comply with federal, state and local permitting processes, rights-of-way approvals and environmental reviews. These processes are important, but they can be duplicative and carry inconsistent criteria and standards that increase costs. They would benefit from streamlining, better inter-agency coordination, and clearer timelines. Federal bill H.R. 2289 would address some of these issues by imposing deadlines on state and local authorities for processing permits, limiting what they can require from applicants, and restricting local fee recovery to “actual and direct costs.” Allowing full business expensing of infrastructure investments would also lower after-tax costs and encourage new capital-intensive broadband projects without raising direct federal expenditures. Requiring transparent, competitive bidding for BEAD-funded contracts would foster competition while limiting cronyism and government favoritism.

Cutting-edge broadband is vital for the rapid and secure movement of high volumes of data necessary to develop and execute life-changing AI models and applications. Robust and stable fiber networks foster model training, rapid inference and data center linkage while reducing latency that can render real-time tools like predictive analytics, chatbots and virtual assistants ineffective and sluggish. Latency and outages can be fatal for high-stakes applications like finance, healthcare and cybersecurity. Even fraction-of-a-second delays can make a life-or-death difference for autonomous vehicles and industrial robots. 

State and local authorities should be able to make public interest and safety decisions on network infrastructure that they’re best placed to make. But the immense benefits of expedient network deployment and plethora of existing rules and mandates that fail the cost-benefit test call for reducing bureaucracy in broadband.

The FCC can continue playing its part by reforming rules within its discretion. Federal policymakers can help by placing sensible limits on state and local regulation, and through conditioning BEAD funding to states and localities on procompetitive reforms that maximize the value of those dollars.

Could income taxes ever encourage someone to switch from a higher-paying to a lower-paying job? Perhaps surprisingly, the answer is yes. I have a recent example of this in my own family.

The marginal nature of the United States' progressive income tax is supposed to prevent this outcome. You pay a higher tax rate only on income you earn above a certain level, not on your entire income. For example, a married couple earning $600,000 a year would pay 10 percent on the first $24,000 they make, 12 percent on the $73,000 they earn between $24,000 and $97,000, and so on, up to the 35 percent they pay on the $99,000 they earn above the $501,000 threshold. (Bracket figures are rounded to the nearest thousand.)

What matters when you’re considering a new income source is the tax you will pay on that additional income. If a new job would pay $50,000 more and you’re already making $600,000 as a married couple, then the job is really only paying you, after tax, $32,500 more.
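
A minimal sketch of that calculation (the helper name is hypothetical; the 35 percent rate comes from the example above):

```python
# After-tax value of extra income that falls entirely within one bracket.
# Figures follow the article's example: a married couple at $600,000,
# whose next dollars are taxed at the 35 percent marginal rate.

def after_tax_raise(raise_amount: float, marginal_rate: float) -> float:
    """Extra take-home pay from extra income taxed at a single marginal rate."""
    return raise_amount * (1 - marginal_rate)

print(round(after_tax_raise(50_000, 0.35)))  # 32500
```

The decision-relevant number is the marginal rate on the next dollar, not the average rate on all income: the couple keeps only 65 cents of each additional dollar.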

And that difference between pre-tax and post-tax pay could make all the difference when deciding on a job that pays more but is less fulfilling or enjoyable. High marginal tax rates discourage productive, paid labor.

One alternative to paid work is household work. Accordingly, economic research suggests that married women respond to income taxes more strongly than married men, mainly by choosing whether to do paid work at all.

But another substitute for paid labor is different paid labor — instead of getting paid entirely in money, you can get paid partly in job satisfaction and amenities. Economists call this kind of nonmonetary compensation a “compensating differential.” More difficult, unpleasant, and unsafe jobs pay higher wages than equivalent jobs that are less difficult, more pleasant, and safer. If they want to attract workers to the tougher jobs, companies have to offer them higher wages.

In the popular Paramount TV show Landman, the main character points out that men working the West Texas oil patch make $180,000 a year for the dangers they face. “That’s not enough money to risk your life on,” a young female attorney responds. “For you? Maybe,” he counters. “For a felon with an eighth-grade education, it’s a lottery ticket.” (I’m guessing he means a winning lottery ticket.) The difference between $180,000 and what a felon with an eighth-grade education would make annually in a safe job he’d be qualified for is the compensating differential.

My wife recently faced a tradeoff of a similar kind. She chose to leave a higher-paying job for a slightly lower-paying job. Her old job had a long commute and required intercontinental travel; her new job has no commute and requires only minimal travel within North America. 

There were important family reasons for the switch, but part of the logic had to do with taxes. We carefully calculated the value of her time and the wear and tear on the car from the commute, and used them to estimate how much salary she could reasonably give up for a more flexible job. We are in a higher tax bracket, which squeezed the difference in after-tax pay enough that the lower-paying job was a good deal once all other factors were considered.

How many other families find themselves in a situation similar to ours? I found exactly one study on the question. It found that a 10 percent increase in the net-of-tax rate causes a worker to choose an occupation with a 0.3 percent higher wage, on average. (The “net-of-tax rate” is the rate at which wages are converted into post-tax earnings, so you can think of it as the inverse of a tax rate.) It's a small effect, but added up across an entire economy it could amount to billions of dollars in new wages from a typical tax cut. Moreover, it confirms the theory: people are willing to take harder, better-paying jobs when taxes are lower.

Income taxes decrease labor supply in other ways too. Economist Michael Keane points out that even if income tax rates have a small effect on labor force participation (particularly men’s) in the short run, the long-run effect could be much larger if work experience builds productivity. Depriving even a small number of potential workers of the incentive to work when they are in their 20s and 30s means they’ll be far less productive than they otherwise could have been in their 40s and 50s.

Most other countries have “flatter” tax burdens than the US. In other words, the middle class shoulders a larger share of the tax burden in other countries. Perhaps the US could learn something from them in this respect — indeed, it may be that they are able to raise the revenue to sustain a bigger government only by keeping their tax structure flat and disincentivizing paid work less. If it wants to avoid hiking taxes on the middle class, Congress may have no choice in the future but to cut spending drastically. Hiking already-high marginal tax rates on the most productive workers would do too much economic damage.

A property deed should mean ownership, not a renewable lease from the government. 

Yet that is what property taxes amount to in practice. A family can earn the income, buy the home, pay off the mortgage, maintain and improve the property, and still owe the government every year merely to retain possession of it. Miss enough payments, and the state can seize the property. That may be common. It is not normal in any morally serious sense.

That is why the standard economist’s line that property taxes are the “least bad tax” has always missed the deeper problem. The issue is not only economic efficiency in the abstract. It is whether a free society should tolerate a tax that permanently weakens ownership, punishes stewardship, ignores ability to pay, funds excessive spending, and treats citizens as perpetual tenants of the state. From a taxpayer’s perspective, and from a classical liberal, constitutional view of limited government, the answer should be no.

Our core humanity consists of responsibility, work, and the right to enjoy the fruits of honest labor. Property ownership flows from that principle. What people build, buy, improve, and care for should be theirs to keep. Property taxes invert that moral order. They place governments above the owner and convert secure ownership into conditional possession.

An Old Tax With a Long Record of Failure

Property taxes are not merely flawed in their current iteration. They are an old tax with a long history of administrative failure and political abuse.

In early America, taxing visible property was convenient because land and buildings were easier to identify than income or financial assets. But convenience is not justice. Over time, states expanded the old general property tax into a supposed tax on nearly all forms of wealth.

It sounded fair in theory. In practice, it became arbitrary and unworkable. As the economy modernized, wealth became more mobile, financial, and complex. Local assessors could not reliably find it, value it, or tax it evenly. Real estate, however, stayed put. So governments kept taxing what they could easily see and seize.

The history of property taxation in the United States shows the pattern clearly: what began as a supposedly broad and equal tax became increasingly narrow, uneven, and disconnected from any real measure of ability to pay. That problem never went away. Today’s system still leans heavily on immovable property because homes, land, and buildings cannot flee the jurisdiction.

In Texas, where I reside, the system became so contentious and inconsistent that lawmakers eventually created central appraisal districts and related review structures to standardize valuations. That did not make property taxes elegant. It merely professionalized the bureaucracy around appraisals, protests, hearings, and litigation.

Property taxes are not a simple tax. They are an elaborate administrative machine for guessing values and then fighting about them.

Property Taxes Violate the Meaning of Ownership

The core case against property taxes is moral rather than economic.

Property is the foundation of liberty because it protects the individual’s right to control what they earn, save, buy, and build through voluntary exchange. That right creates independence, responsibility, and the ability to form families, build communities, and leave a legacy. A government strong enough to tax ownership forever is a government already reaching beyond its proper role.

Defenders say property taxes help fund local services. Roads, police, and courts are not free. But that does not justify an annual tax on mere ownership. Governments exist to protect life, liberty, and property, not to establish a permanent claim on property once acquired. A tax that says, in effect, “pay us every year or lose your home” is not a neutral funding mechanism. It is legalized extortion.

That is why my research on securing ownership through property tax reform starts from a different place than much of the standard literature. The usual conversation begins with the government budget and asks how to preserve it. I begin with the taxpayer and ask what kind of tax system best protects ownership, respects the constitutional limits of government, and lets people prosper. On that test, property taxes fail badly.

The Tax Is Inefficient, Costly, and Detached From the Ability to Pay

Property taxes are often defended as stable and efficient. Stable for government, maybe. Efficient for taxpayers, not even close.

They require appraisal districts, valuation models, protest procedures, review boards, appeals, compliance staff, and legal disputes. That is a costly way to raise revenue. A broad consumption tax on final goods and services is not perfect, but it is generally more transparent and less administratively invasive than a recurring tax on ownership filtered through appraisal bureaucracies. And to be clear, the better alternative is not a value-added tax (VAT). The superior choice is a tax on final consumption, not a tax layered throughout production chains, and not a tax piled on top of property taxes forever.

Property taxes are also disconnected from the ability to pay. Income taxes apply when income is earned. Sales taxes apply when purchases are made. Property taxes arrive whether someone got a raise, lost a job, retired, or suffered a financial setback. A rising appraisal does not mean a family has more cash. It just means the government sees a larger tax base. That is why retirees and fixed-income households get squeezed so hard. They can be asset-rich on paper and cash-poor in real life.

Milton Friedman and many other economists in the free-market tradition preferred taxes on consumption over taxes that punish productive activity, investment, or saving. Property taxes do exactly that: they punish ownership itself. They are not neutral. They discourage improvement, raise the cost of holding property, and hit people for simply staying put.

Highly Regressive in the Real World

The standard defense of property taxes also downplays their regressive nature in practice.

Lower- and middle-income households spend a greater share of their budgets on housing. Renters bear a significant portion of the burden through higher rents. Businesses pass along property tax costs through higher prices, lower wages, and reduced investment. And assessments can be regressive, meaning lower-value homes may bear a higher effective tax rate than more expensive properties.

Then there are the behavioral distortions. Property taxes create lock-in effects, where people stay in homes that no longer fit their needs because moving means a new assessment and often a higher bill, driving up prices for everyone else. They also create push-out effects, in which seniors and lower-income residents are forced from homes they have already paid off because taxes rise too quickly to absorb. And they price some would-be buyers out of homeownership altogether. That is a rotten combination: it punishes staying, moving, and buying. These are all effects that typical regressivity calculations cannot easily capture, which makes property taxes far more regressive than those calculations suggest.

Stable Revenue for Government Means Endless Revenue for Government

One reason property taxes remain so popular with officials is that they are a wonderfully stable way to finance bigger government.

That stability is often praised as a virtue. But stable revenue for the government is not the same thing as stability for households. It simply means politicians have a dependable stream of money to keep spending. Local officials can hold nominal rates steady while appraisals rise and collections swell, then pretend they never raised taxes. That is not transparency. It is camouflage.

The real driver of the property tax problem is not undertaxation. It is overspending. 

This is why so many so-called reforms disappoint. Levy limits, appraisal caps, homestead exemptions, and rebates may slow the growth of the burden for a while, but without strict spending restraint, they do not change the underlying trajectory. Kansas and Texas have both tried versions of these limitations, and property taxes remain a major problem because spending has continued to grow and loopholes have remained.

Levy Limits Can Help, but Only if They Are Truly Tough

This is where the debate needs more honesty.

Yes, property tax growth can be limited with levy caps. But most caps are too weak, too narrow, or too easy to bypass. A serious limit should apply to all property, with no carveouts, no games, and no exemptions that merely shift burdens around.

The right standard is simple: zero percent levy growth in all property taxes collected unless a supermajority of voters explicitly approves more. Even then, truth-in-taxation rules should require that rates fall automatically when values rise, unless voters say otherwise.

That is a good guardrail. But even a strict levy limit mostly slows the growth of property taxes. It does not reduce them meaningfully on its own. People do not want a slower climb up the hill. They want the burden reduced. That is where my budget surplus buydown approach comes in.

The Best Path: Spend Less, Use Surpluses, Buy Down the Tax

Real property tax reduction should start with a hard spending limit below population growth plus inflation for state and local governments, not as a target but as a ceiling.

When government spending grows more slowly than the average taxpayer’s ability to pay, as measured by population growth plus inflation, budget surpluses emerge. Those surpluses should not be used for new programs, bigger bureaucracies, or one-time political goodies. They should automatically go to reducing property tax rates.
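The mechanics of that buydown can be sketched with hypothetical numbers: cap spending growth at population growth plus inflation, and route everything collected above the cap into reducing the property tax levy. The figures and the function below are illustrative assumptions, not a model of any actual state budget:

```python
def surplus_buydown(revenue, prior_spending, pop_growth, inflation,
                    levy, taxable_value):
    """Cap spending at population growth plus inflation; apply any
    surplus to buying down the property tax levy (hypothetical model)."""
    cap = prior_spending * (1 + pop_growth + inflation)
    spending = min(revenue, cap)          # spend no more than the ceiling
    surplus = max(revenue - cap, 0.0)     # everything above the cap
    new_levy = max(levy - surplus, 0.0)   # surplus reduces the levy
    return spending, surplus, new_levy / taxable_value

# Hypothetical state: $100M revenue, $95M prior spending, 1% population
# growth, 2% inflation, a $50M property tax levy on $2.5B of taxable value.
spend, surplus, new_rate = surplus_buydown(
    100e6, 95e6, 0.01, 0.02, 50e6, 2.5e9)
# Cap = $95M * 1.03 = $97.85M, so the surplus is $2.15M.
# The levy falls to $47.85M and the rate drops from 2.0% to ~1.914%.
```

Repeated year after year, the same rule compresses the levy toward zero, which is the point: relief comes from the spending ceiling, not from a one-time rebate.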

This surplus-driven buydown is a sustainable path to durable relief. It is predictable, pro-growth, and fiscally disciplined. It allows taxes to come down without sudden budget shocks. And unlike gimmicks, it works because it directly reduces the government’s claim on property.

At the state level, the first priority should be school district maintenance-and-operations (M&O) property taxes, because states already control most school finance systems. That is the obvious place to start.

States should use surpluses generated above strict spending caps to buy down school district M&O rates until they reach zero. That can also support a transition toward truly universal education savings accounts, where money follows students rather than being routed through district monopolies. 

Yes, state constitutions still generally require some form of schooling system, and that language is unlikely to disappear anytime soon. But nothing in that reality requires permanent dependence on school district property taxes.

Local governments should use the same surplus-buydown model to reduce city, county, and special district property taxes until they reach zero as well. The logic is the same: spend less, generate surpluses, and use those surpluses to compress rates downward over time.

Could this be accelerated? Yes. States and localities could also broaden the sales tax base to include more final goods and services, and even raise the rate if needed, to replace property taxes more quickly.

But the key is not the exact mix. The key is spending limits. Without strict limits, any tax swap just funds the same bloated government through a different collection method.

The Goal Should Be Elimination

My argument runs counter to much of the conventional wisdom because it starts with the taxpayer, not the tax collector. From a constitutional perspective, the government’s role is limited. It should protect rights, not build endless revenue structures around violating them. 

From a pro-growth perspective, income taxes are more destructive and should be eliminated where possible. But after income taxes are gone, property taxes should be next. They are more coercive than a sales tax on final consumption, less connected to the ability to pay, more administratively wasteful, and more corrosive to secure ownership.

That is why more states are now reconsidering them. As my work shows, lawmakers and commissions in Florida, Illinois, Kansas, Missouri, Montana, Nebraska, North Dakota, Oklahoma, Pennsylvania, South Carolina, Texas, and Wyoming are debating reforms ranging from modest relief to full elimination. They should aim higher than temporary relief.

Property taxes are arcane. They are immoral. They are inefficient. They are highly regressive once all effects are counted. They fund excessive spending and never let people fully own what they have earned. A free society should not settle for trimming them at the margins forever. It should start reducing them now through strict spending limits and surplus buydowns, and it should put in place a serious path to eliminating them for good.