
The economic history of commercial airline flight began with courageous pilots and entrepreneurs in the American West. They could hardly have imagined how the industry would proliferate, or how affordable, safe, and routine flying would become for Americans of all income levels and backgrounds.

A century ago, Varney Air Lines took flight with a mail delivery that departed Pasco, Washington, headed for Boise, Idaho. Founded by Walter T. Varney (pictured below), who was a part of the US Signal Corps in World War I, the fledgling operation obtained the first airmail contract in 1925 from the US Postal Service. 

Varney’s chief pilot was Dewey “Lee” Cuddeback, who guided a Laird Swallow biplane from Pasco to Boise in just under three hours, safely landing before a crowd of around two thousand onlookers with 207 pounds of mail in tow. After refueling, he delivered more mail to Elko, Nevada, finishing just shy of four-and-a-half hours in the sky. The feat took less than a tenth of the time that traditional railway transport would have needed to deliver the mail.

US Mail airplane, 1922. National Photo Co Collection: Library of Congress

Later that day, a second pilot, Franklin Rose, ended his flight in a less auspicious way. His leg from Elko to Boise terminated in an emergency landing in a mud-soaked grain field just north of the Nevada-Idaho state line. While Rose walked away unharmed after being blown off course by over sixty miles, the aircraft remained stuck in the mire, bringing an end to the day’s second round of flights. Despite that unfortunate outcome, the stage was being set for modern passenger flight. Four years later, Varney Air Lines was acquired by United Aircraft and Transport Corporation; in 1934 it merged with several other air transport firms to form United Airlines. 

The early years of American flight were filled with these harrowing stories of innovation, safety improvements, and rivalrous competition. Through it all, efficiency, safety, and costs improved. One such innovator was Archie League, whose advances in ground signaling and radio communications paved the way for ever-safer passenger flight. League’s ideas on safety were highly valued in St. Louis, where the flight fever spawned by Charles Lindbergh’s exploits attracted more and more pilots to America’s heartland to test their aviation mettle. As air traffic there grew busier, League’s protocols set the standard for takeoff and landing safety. Not content with ground signaling alone, League famously used calm and concise radio directions to guide a flight through severe weather and dense fog to a safe landing at St. Louis’s Lambert airfield in October of 1929, sparing the lives of all on board.

Pilot Leon Cuddeback takes off from Pasco Airport, 1926. Image Courtesy Franklin County Historical Museum

In the same month as League’s radio feat, economic history took a turn with Black Tuesday’s infamous stock market crash. The aviation industry was still in its infancy, and it would soon be smothered by New Deal regulations, which halted the pace of improvement that had been unleashed by the likes of Varney, League, and Lindbergh. Indeed, the Hoover and FDR administrations delivered an unprecedented rise in the “fatal conceit” of economic planners who attempted to engineer economic outcomes according to their own wishful thinking. Their restrictions and barriers to entry and exit achieved nothing of the sort, instead inhibiting improvements and price competition in the airline industry for four decades.

With FDR’s creation of the Civil Aeronautics Board (CAB) in 1938, its designers claimed that it would centrally administer “safety-related rulemaking, accident investigation, and economic regulation of commercial airlines.” Eventually it would go far beyond even that broad mandate, engaging in price-fixing and blocking new entrants, among other interventions. Ultimately, the hubris of social engineers led them to declare what “fair” prices were across the airline industry.

In a 1975 report, no less than liberal senator Edward Kennedy admitted that “the Board’s experience suggests it is extremely difficult, if not impossible, to develop a cost-based ratemaking system that uses fair procedures and keeps fares in such an industry low.” A more damning admission followed: “This is not to say that inherent defects are the only cause of the CAB’s failings. These may, for example, also reflect the human tendency to listen more closely to representatives, such as those for the industry, who are powerful, well-informed, and can reward regulators with future jobs or contracts.”

The ultimate effect of this centralized planning was to “control prices, restrict entry, and confer antitrust immunity.” In brief, the CAB was used to create a government-backed cartel in the interest of existing large carriers. The report amounted to a public confession of crony capitalism, and the CAB’s days were numbered.

In the wake of the report, American Airlines was allowed to discount its fares up to 45 percent in an attempt to see whether airline travel could be “made available at a price all can afford.” Once this mild form of price competition was allowed, rivalrous competition showed skeptical legislators and regulators that it did indeed create greater value for consumers. Eventually, Senator Howard Cannon, along with bipartisan supporters including Ted Stevens and Wendell Ford, helped pass the Airline Deregulation Act in 1978.

Since established carriers like Delta Air Lines had grown accustomed to the many protections they received under the CAB, they lobbied against the deregulatory move. They claimed that “free entry” and “free exit” were “untested concepts” that would concentrate the industry in the “hands of only a few carriers…causing service deterioration at smaller cities and in smaller markets.” Delta’s doom-mongering materialized in neither the short run nor the long run.

Delta-written flyer opposing deregulation. Courtesy Smithsonian’s National Air and Space Museum

In the nearly 50 years since the abolition of the Civil Aeronautics Board, routes and flexibility have proliferated, and prices have declined continually. In fact, the last three decades have seen inflation-adjusted domestic airfares fall from $614 in 1995 to $397 in 2025. Further, the industry continues to grow, nearly doubling the number of employees since 1990. Prior to deregulation, air travel was undoubtedly a luxury good. Now, it has become so affordable that 80 percent of Americans with annual household income below $50,000 have taken flight at some point in their lives. 

In the 100 years since Varney Air Lines first took on the tremendous risks and costs of delivering a few hundred pounds of mail in the American West, to the amazement of onlookers in Pasco, Boise, and Elko, the industry itself has undergone incredible transformation. Once the purview of daredevils and former combat pilots, the “friendly skies” are now a nearly ubiquitous experience for Americans who, despite the inconveniences of TSA delays and the need for significant reform, continue to vote with their dollars and take flight at lower costs than Varney, League, or Lindbergh would have dared imagine.

That transformation is, in large part, thanks to the courageous actions of those pioneers and to airline deregulation nearly 50 years ago.

In February, the Tax Foundation’s Jared Walczak observed that not long ago, it was still possible to speak of a “typical” state income tax with a top rate of about six percent.

“That is no longer the case,” he continued. “Today, far more states prioritize low, competitive rates, whereas a smaller number have abandoned the middle for much higher rates.”

Walczak noted that, in 2006, 15 states had top rates of personal income tax below five percent (including those with no personal income tax); now, 26 states do. Over the same period, the number of states with a double-digit top rate has risen from one to six.  

This is another point of divergence between “Red” and “Blue” America. The Wall Street Journal reported on Walczak’s work under the headline “Red and Blue States Are Growing Further Apart on Income Tax.”

“Republican-led states are racing each other to flatten, cut and eliminate individual income taxes,” reporters Richard Rubin and Jeanne Whalen note. By contrast, “Democratic-controlled states are moving the opposite way, pushing to increase taxes on top earners…” Meanwhile, “The middle ground is quickly disappearing.”

Disappearing along with it, on current trends, could be the Democratic Party’s chances of controlling either the White House or House of Representatives. 

Economic Policy in “Red” and “Blue” America

The divergence in income tax policies in “Red” and “Blue” America isn’t confined to the top of the income scale. 

We can code the states “Red” or “Blue” if they had a Republican or Democratic “trifecta” in every year from 2019 to 2025, “Lean Red” or “Lean Blue” if they had one for only part of that period, and “Toss Up” if they had either no trifecta or at least one for each party over the period.
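This coding rule is simple enough to state in code. A minimal sketch, assuming one entry per year for 2019 to 2025 (the trifecta records passed in below are hypothetical examples; the real classifications come from Ballotpedia's data):

```python
# Classify a state from its yearly trifecta record for 2019-2025.
# Each entry is "R" (Republican trifecta), "D" (Democratic trifecta),
# or None (no trifecta that year).
def code_state(trifectas):
    r = trifectas.count("R")
    d = trifectas.count("D")
    if r == len(trifectas):
        return "Red"
    if d == len(trifectas):
        return "Blue"
    if (r > 0 and d > 0) or (r == 0 and d == 0):
        return "Toss Up"
    return "Lean Red" if r > 0 else "Lean Blue"

# Hypothetical records, for illustration only:
print(code_state(["R"] * 7))                              # Red
print(code_state([None, None, "D", "D", "D", "D", "D"]))  # Lean Blue
print(code_state(["R", "R", "R", "D", "D", "D", "D"]))    # Toss Up
```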

Figure 1 shows the median share of “Average Annual Pay” taken in state income tax among each of these five groups of states for 2025. Among “Red” states, the median share of the average earner’s pay taken in state income tax was 2.8 percent; among “Lean Red” it was 3.1 percent; among “Toss Up” it was 3.3 percent; among “Lean Blue” it was 4.5 percent; and among “Blue” states it was 4.7 percent. 

“Blue” states do not just tax the rich more heavily; the median “Blue” state government takes a share of the average earner’s wages in state income tax 1.6 times as large as the median “Red” state’s.

Figure 1: State income tax as a share of Average Annual Pay, median state, 2025

Source: Bureau of Labor Statistics, IRScalculators.com, Ballotpedia, and Center of the American Experiment

There is a similar divergence between “Red” and “Blue” America on the spending side of the fiscal equation. 

Figure 2 shows the median level of per capita general fund spending in each group of states for 2025. It shows that among “Red” states, the median level of per capita general fund spending was $2,623; among “Lean Red” it was $2,605; among “Toss Up” it was $3,602; among “Lean Blue” it was $4,046; and among “Blue” states it was $4,902. 

In per capita terms, the median “Blue” state government spends 1.9 times as much as the median “Red” state.

Figure 2: Per capita general fund spending, median state, 2025

Source: National Association of State Budget Officers, Census Bureau, Ballotpedia, and Center of the American Experiment

Taxes and spending aren’t the only policy areas where US states are drifting apart. 

In their recent book Abundance, Ezra Klein and Derek Thompson note that for all that Democrats agonize about affordable housing, “[t]he Austin metro area led the nation in housing permits in 2022, permitting 18 new homes for every 1,000 residents” while “Los Angeles’s and San Francisco’s metro areas permitted only 2.5 units per 1,000 residents.”

“In our political typologies, it is liberals who embrace change and conservatives who cling to stasis,” Klein and Thompson write. “But that is not how things work when you compare red-state and blue-state housing policies.” The authors blame “liberals — and particularly the strain of liberalism that began to develop in the ‘60s and ‘70s” — which has created “so many rules around permitting and environmental regulations” — what some have called “Blue Tape” — “that it became impossible to build necessary housing.”

Regulatory burdens are more difficult to quantify than the burdens of taxes or government spending, but the Cato Institute’s “Freedom in the 50 States” project provides index values of “Regulatory Policy” for 2022. Lower scores represent less freedom and higher scores more; greater freedom pushes a state in the direction of Austin, Texas, and less freedom in the direction of the California cities. Figure 3 shows that among “Red” states, the median score for regulatory policy was 0.06; among “Lean Red” it was 0.05; among “Toss Up” it was 0.02; among “Lean Blue” it was 0.01; and among “Blue” states it was −0.18.

The median “Blue” state has a score for regulatory policy which is 384 percent lower than that of the median “Red” state.  

Figure 3: Regulatory policy scores, median state, 2022 

Source: Cato Institute, Ballotpedia, and Center of the American Experiment

“Red” and “Blue” America offer different models of economic policy, models which appear to be getting more different over time. While “Red” states offer lower taxes across the income scale, lower government spending, and less regulation, “Blue” states offer higher taxes for almost everybody, higher government spending, and more regulation. 

The political consequences of Big, “Blue” Government 

When they vote with their feet, Americans make it abundantly clear which of these models they prefer. 

Figure 4 shows the total net domestic migration for each of our five groups of states for 2020 to 2025. It shows that, while 3.5 million Americans left “Blue” states for elsewhere in the United States, “Red” states gained 3.2 million residents from elsewhere.   

Figure 4: Net domestic migration, millions, 2020 to 2025

Source: Census Bureau, Ballotpedia, and Center of the American Experiment 

Indeed, “Blue” state governance has become a target for derision, even among those who consider themselves of the “Blue” persuasion. “California’s most populous cities are run by Democrats,” Klein and Thompson note. “Every statewide official in California is a Democrat. Both chambers of the legislature are run by Democrats…Liberals should be able to say: Vote for us, and we will govern the country the way we govern California! Instead, conservatives are able to say: Vote for them, and they will govern the country the way they govern California!”

These movements will have political consequences which Democrats are unlikely to appreciate. Figure 5 shows the net change in seats in the House of Representatives forecast for 2030 based on current population movements. It shows that states which had Republican trifectas for the whole period 2019 to 2025 — our “Red” states — will gain 11 seats while our “Blue” states — those which had Democratic trifectas — will lose 10. This will impact the Electoral College, also.   

Figure 5: Forecast net change in House of Representatives seats, 2030

Source: Carnegie Mellon University and Center of the American Experiment

People have generally moved towards economic freedom; more people crossed the Berlin Wall from east to west or the 38th parallel from north to south than went the other way. 

Democrats who want their party to remain competitive for the House or the presidency should remember this. So far, there is little sign that they have. Instead, policymakers in some “Blue” states are eyeing “exit taxes” as a way of erecting fiscal Berlin Walls or 38th parallels around their own workers’ paradises.

It won’t work. In politics, economic policy is destiny.

Economics is a peculiar science. On the one hand, it is the queen of the social sciences and offers a powerful logic for understanding the world. On the other, as Henry Hazlitt put it, it is haunted by more fallacies than any other study known to man. People simply love to misunderstand economics.

Ironically, this presents a profit opportunity to those who choose to exploit people’s willing ignorance…especially if they are economists. Then they can present popular fallacies as seemingly insightful critiques or even novel takes. Because they are from the inside — one of “them” — people are willing to take their word for it. 

University College London economist Mariana Mazzucato is a case in point. She has made a name for herself writing books and advising policymakers on how the State can be used to produce “free lunches.” Books like The Entrepreneurial State and Mission Economy argue that the State can be an effective low- or no-cost shortcut to prosperity and that it should therefore be used liberally by policymakers.

Any economist worth his salt would naturally object that there are no “free lunches.” Nothing is without opportunity cost, which is why we must economize. But this “dismal” view, albeit true, is often rejected by those who wish to believe in a mystical world in which money trees exist and scarcity does not. Unfortunately, Mazzucato and others are happy to provide rationalizations for those who don’t understand basic economics.

In her new book The Common Good Economy, to be released this fall, Mazzucato, per the blurb, “builds on her visionary ideas of the entrepreneurial state and mission-oriented policies to establish a new theory of the common good, one which allows governments and businesses to develop purposeful economic relationships, creating value and building spaces where human flourishing can happen.” In other words, it is more of the same. The State, assumed glorious, both can and should actively interfere in the economy and beyond because businesses cannot be trusted to produce what people actually want.

It is a curious argument, especially when considering the nature of voluntary exchange and market entrepreneurship. In markets, entrepreneurs can only earn profits by satisfying their customers — on the customers’ terms. They compete by creating as much value as possible, but must bear the uncertainty of their speculation because there is no way of knowing what consumers appreciate until after the goods are already produced and available for sale.

The State, in contrast, is not subject to such approval. It does not need to produce value and does not even need to economize on resources. It has the power to take and need not ask permission. This creates serious incentive problems and leaves the State operating in the dark, unable to know — or even reliably estimate — how resources are best used. Lacking a conception of actual value, which in the market is determined for and by consumers, and not needing to economize, what are the odds that the State will produce something good? And what are the odds that it will be effectively produced?

The answer is that we cannot expect the State to do anything effectively — other than waste resources. Any reasonable analysis of the opportunity costs of the State’s undertakings should find that they exceed the supposed value created. Even relying only on the “seen” as captured in official statistics, the State’s investments have dubious returns. And despite Mazzucato’s claims, there is certainly no lack of public “investments.” As McCloskey and Mingardi note in The Myth of the Entrepreneurial State, which illuminates the limitations of Mazzucato’s claims, “in the past century government expenditure as a percentage of GDP has drifted up towards 50 percent.”

Basic economic understanding and research are of no relevance to Mazzucato. She has already attempted to redefine the very concept of value to serve her political purposes in the highly confused The Value of Everything. And she keeps finding ways to argue that politically directed investments not only outperform private ones but conjure value from nothing.

Many more grounded economists have pushed back on the claims by Mazzucato and others. The Entrepreneurial State was debunked. And so was Mission Economy. Perhaps this is why the profiteers keep inventing new terms for the same basic fallacy. The Common Good Economy will be no different in this regard. It will probably sell well, however — and further undermine economic understanding in the process.

In 1895, Greek journalist Vlasis Gavriilidis traveled to Cambridge University seeking advice from three leading economists — Alfred Marshall, Henry Sidgwick, and John Neville Keynes — on the most urgent economic problem facing his country: a collapsing market for currants (Corinthian raisins), which then accounted for roughly half of all Greek exports. 

Overproduction, fueled by earlier government policies and a temporary export boom, threatened widespread rural unemployment and poverty. The economists offered divided counsel. That ambiguity gave organized currant growers the opening they needed to lobby successfully for a price-support system — a “temporary” intervention that promised stable incomes for growers while shifting costs onto taxpayers and distorting the broader economy. 

The Greek currant crisis of the 1890s offers enduring lessons in policy hubris, the stubborn longevity of supposedly temporary measures, and the lasting damage caused by interfering with market incentives. 

Boom, Bust, and the Roots of Overproduction 

Currant cultivation in Greece had ancient roots, but the crisis was modern. French vineyards were devastated by the phylloxera pest in the 1860s and 1870s, creating massive demand for Greek currants to produce “raisin wine.” This surge encouraged rapid expansion. 

The First Agrarian Reform of 1871 had distributed national lands (former Ottoman holdings) in small plots to create a broad class of peasant proprietors. Many new landowners, often with credit secured against their holdings, rushed to plant currants — the most profitable crop at the time. Currants quickly became Greece’s dominant export. 

Then the boom reversed. French vineyards recovered. French producers, noting consumer preference for the taste and shelf-life of currant-based wine, successfully lobbied for the Méline Tariff of 1892 and the Turrell Act of 1896, which effectively shut Greek currants out of the French winemaking market. 

At the same time, high-quality, consistent California raisins entered global markets as strong competitors. The result was a sharp and sudden price collapse. As Patras merchant Theodoros Burmuli warned in 1899 in the Economic Journal, prices fell to the bare cost of production, threatening “disastrous and far-reaching consequences” for the Greek economy. 

The Retention Scheme and the Cambridge Debate 

Burmuli advocated a state retention system: exporters would be required to deliver 10–15 percent of their currants to a government depot (initially for supposed domestic use), artificially restricting supply to prop up prices and shield small growers from market reality. 

A group of anti-retentionists — largely free-trade liberals — opposed the plan. They argued it would distort markets, encourage even more overproduction, impose heavy administrative and fiscal costs, and fail to address the underlying imbalance between supply and demand. They also warned that any “temporary” program would prove difficult to end. 

The debate reached Cambridge in 1895. Sidgwick and John Neville Keynes leaned toward supporting the retention idea, with Keynes suggesting it “might prove temporarily effective” in easing growers’ distress. Alfred Marshall opposed it, though the exact record of his reasoning has not survived; his broader body of work aligns clearly with the anti-retentionist emphasis on allowing prices to adjust and resources to reallocate. 

The divided expert opinion helped the well-organized currant growers prevail politically. Greece enacted the retention law in 1895 as a supposedly short-term measure. 

What Actually Happened 

The results vindicated the critics. In his 1906 Economic Journal article “The Currant Crisis in Greece,” economist Andreas Andréadès documented how the program backfired. By guaranteeing inflated prices, it subsidized rather than discouraged production. Growers planted more vines, including on marginal land. Terraced hillsides and drained wetlands were converted to currants long after global demand had shifted. 

The measure was anything but temporary. Renewed annually at first, it was reorganized in 1899 as the “Currant Bank” and extended for another decade. Variants and references to retention schemes lingered into the early 1930s. 

The government accumulated debt and stockpiles of unsold currants. Andréadès concluded that the real crisis was no longer the initial overproduction but the intervention itself. By interfering with the law of supply and demand, policymakers turned a painful but localized adjustment into a prolonged national problem. He wrote: “Consequently, the only result [of the program] was to render permanent a crisis which could have been only temporary if the ‘economic laws’ had been respected.” 

Public Choice in Action 

Classic public-choice dynamics explain why the program persisted. As the artificial support became capitalized into land values, farmers came to depend on the policy’s continuation. Any attempt to repeal it would impose visible, concentrated losses on a politically powerful group, while the costs (higher taxes, misallocated resources, and slower economic adjustment) were diffuse and borne by the broader economy and future generations.

Greece’s heavy reliance on a single crop left the country economically fragile, and the fiscal burden of the scheme contributed to its chronic debt difficulties. 

Lessons for Today 

The nineteenth-century Greek currant saga remains highly relevant in an era of widespread agricultural subsidies, “temporary” assistance programs, and industry bailouts. 

  1. Price signals matter. When demand falls or competition rises, the healthy response is reduced production and reallocation of resources — not government price floors that delay inevitable adjustments and lock capital and labor into unproductive uses. 
  2. “Temporary” support rarely is. Programs sold as short-term relief tend to become entrenched when concentrated interest groups benefit and develop a stake in their continuation. 
  3. Concentrated benefits, diffuse costs. Vocal, organized groups often succeed in capturing gains for themselves while spreading the bill across taxpayers and the wider economy — frequently at the expense of long-term growth and resilience. 

Greece’s currant crisis shows that good intentions and political expediency can transform a manageable market correction into decades of distortion. Policymakers tempted by price supports or industry rescues would do well to remember how a “temporary” Greek retention scheme outlived its justification by generations and left the economy weaker for it. 

Energy price increases are hitting Americans hard. In the March 2026 Everyday Price Index, my colleague Pete Earle noted that the Iran war drove up energy prices, with adjacent industries feeling the impact, while core inflation remained muted. These price increases resemble an energy shock rather than broad-based inflation that might concern the Fed.  

For ordinary Americans, however, Earle comments, “consumers are first encountering the shock in the most visible and psychologically powerful places — gas stations, travel, and transportation-linked expenses — while the rest of the basket remains relatively stable.” 

The “visible and psychologically powerful” price increases have many policymakers rightly concerned. Both Indiana and Georgia have enacted state gas tax holidays, while Utah will implement one from July through December. Several other states are considering similar policies, and federal lawmakers have proposed a nationwide gas tax holiday.

Concerns about affordability are genuine, but this is a case where good intentions do not guarantee good outcomes. Our present strains at the pump are due to limited supply. Pausing gas taxes will not increase the supply of gas. Instead, policymakers should focus on regulatory reforms that lower energy production costs and reduce bottlenecks. 

Reasoning from the Pump Price 

Prices act as a signal that informs buyers and sellers how much of a good or service is available and how much others want it. Scott Sumner’s insight is essential here: people “should never reason from a price change, but always start one step earlier—what caused the price to change.”

The legal incidence of a gas tax (who is legally obligated to pay it) falls on wholesalers or retailers, while the economic incidence (who bears its cost) falls largely on consumers. Consumer demand for gas is less elastic in the short run than demand for most other goods and services, meaning people will forgo other spending before reducing fuel consumption.

When prices rise due to a supply shock, consumers continue purchasing gasoline. A tax holiday can, therefore, increase demand at precisely the worst moment. Evidence from past tax holidays and disaster responses shows that such policies often shift consumption, but do not provide lasting relief. 

When refining capacity, inventories, or distribution networks tighten, the benefits of tax cuts dissipate. In those conditions, tax holidays provide less relief precisely when relief is needed the most.  
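The incidence logic above can be sketched with the textbook pass-through approximation: the share of a per-gallon tax change that reaches the consumer’s price is roughly the supply elasticity divided by the sum of the supply elasticity and the absolute demand elasticity. The elasticity values below are hypothetical, chosen only to illustrate the direction of the effect:

```python
# Approximate share of a per-unit tax change borne by (or, for a cut,
# passed through to) consumers: e_s / (e_s + |e_d|).
def consumer_share(e_supply, e_demand):
    return e_supply / (e_supply + abs(e_demand))

# Hypothetical elasticities; short-run gasoline demand is inelastic.
flexible = consumer_share(e_supply=2.0, e_demand=-0.2)  # ample refining slack
tight    = consumer_share(e_supply=0.1, e_demand=-0.2)  # capacity constrained

print(f"consumer share, flexible supply: {flexible:.2f}")
print(f"consumer share, tight supply:    {tight:.2f}")
```

Under this approximation, a tight supply side means most of a tax holiday accrues to sellers rather than drivers, which is the dissipation described above.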

Gas tax holidays must be judged by their outcomes. Understanding the cause of price increases helps policymakers avoid responses that are ineffective or do further damage. 

What Can Be Done? 

The good news is that there are reforms federal and state policymakers can pursue to help the American people. While avoiding gas tax holidays prevents additional harm, policymakers can focus on getting government out of the way through regulatory reforms that improve supply.

Policymakers should reform regulations that currently constrain oil and gas production and create supply chain bottlenecks. Federal actions include accelerating leasing, streamlining permitting processes, and reining in executive discretion over permitting, which allows the President to revoke permits that go against a given administration’s preferred energy agenda. States can roll back renewable portfolio standards to reduce compliance costs and ease permitting bottlenecks. They can also exit regional cap-and-trade programs to lower costs often passed to consumers. 

Additionally, with the Greenhouse Gas Endangerment Finding rescinded, now is the time to conduct regulatory audits to assess the costs and benefits of regulations. Policymakers could enact regulatory budgets that cap the number of regulations in force at any given time. Finally, they should consider sunset requirements that remove regulations after a certain period unless explicitly renewed by the legislative branch.  

The Problem Isn’t Gas Prices — It’s Supply

Gas tax holidays might be politically attractive, but they neither expand supply nor ease supply chain constraints. They can even worsen shortages by increasing quantity demanded.

A more effective approach focuses on reducing regulatory barriers and improving energy market flexibility. This approach addresses some of the root causes of price volatility during and after supply shocks.

Prices work best when they are treated as signals, not problems to suppress. By understanding how and why prices change and minimizing interference in the price system, policymakers can avoid doing unintentional harm.

Recent remarks by Elon Musk have reignited debate over the economic implications of artificial intelligence, following a widely circulated video clip in which he predicts a future of “universal high income” funded by direct government payments. In the clip — shared broadly on X and quickly amplified across financial media — Musk argues that AI-driven production will expand so rapidly that it will outpace growth in the money supply, rendering such payments non-inflationary and potentially even deflationary. As he puts it, if goods and services grow faster than money, prices should fall, even as governments distribute cash to households. The claim builds on his longstanding advocacy of income support in an AI-disrupted labor market, but extends it into a more explicit monetary argument: that large-scale issuance of money need not distort prices if productivity growth is sufficiently strong.
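The aggregate arithmetic behind the claim is essentially the equation of exchange, MV = PY: with velocity held fixed, the price level tracks the ratio of money to output, so money growth below output growth implies a falling price level. A minimal sketch of that naive calculation, with hypothetical growth rates:

```python
# Naive quantity-theory arithmetic: P = M * V / Y, with velocity V fixed.
def implied_inflation(money_growth, output_growth):
    # Growth rate of the price level when velocity is constant.
    return (1 + money_growth) / (1 + output_growth) - 1

# Hypothetical rates: money supply +5%/yr, AI-driven output +12%/yr.
print(f"{implied_inflation(0.05, 0.12):.1%}")  # negative, i.e. deflation
```

The paragraphs that follow explain why this aggregate ratio is misleading on its own: it says nothing about where the new money enters the economy or how relative prices adjust.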

It is a striking claim, and one that arrives at a moment when Musk’s commercial interests are increasingly tied to the perceived scale and inevitability of the AI transformation. With his artificial intelligence initiatives becoming more deeply integrated into the broader SpaceX ecosystem — and with expectations of a major capital markets event on the horizon — there is a clear incentive to frame AI not merely as an incremental innovation, but as a system-altering force capable of reshaping the global economic landscape. That does not make the vision wrong. But it does suggest that rhetoric surrounding abundance, inevitability, and frictionless adjustment should be read, at least in part, as a forward-looking narrative — an attempt to describe not just what may happen, but what investors and the public should come to expect.

The economic reasoning underlying the claim, however, is where the argument begins to break down. Issuing money — even in a high-productivity environment — does not create income in any real sense. It redistributes claims on output. Goods and services must still be produced. The act of distributing purchasing power does not add to that production; it just reallocates access to it. Even if AI dramatically increases the total quantity of goods available, the path by which money enters the system matters. New money is never distributed evenly or instantaneously. It arrives through specific channels — government transfers, financial institutions, asset markets — and those entry points shape how prices adjust across sectors.

This is why the idea that inflation or deflation can be understood as a simple ratio of aggregate output to the money supply is misleading. Prices are not set in the aggregate; they are relative, reflecting the interplay of supply, demand, expectations, and timing. When new money is introduced, it affects some prices before others, altering incentives and redirecting resources. Some sectors expand more rapidly than they otherwise would, while others are effectively taxed by rising input costs or shifting demand. These relative price movements are not noise — they are the mechanism by which the economy coordinates activity. Distort them, and the structure of production itself becomes misaligned.

The role of monetary policy does not disappear in such a world; it may become more subtle, but no less important. If income transfers are financed by sustained monetary expansion, interest rates and credit conditions will still respond. Artificially abundant liquidity can suppress borrowing costs and encourage investment projects that appear viable under those conditions but are not supported by underlying resource availability or consumer preferences. (Indeed, these conditions may already be manifesting.) Over time, this can lead to overextension in some sectors and underinvestment in others — a familiar pattern that has historically culminated in corrections when financial conditions tighten or expectations shift.

What is notable is how closely these latest remarks mirror Musk’s earlier statements about an AI-driven future of “sustainable abundance.” For years, he has argued that advances in automation would so dramatically expand productive capacity that scarcity itself would fade as a central economic concern. The current formulation simply extends that logic: if scarcity recedes, then distributing money becomes a largely administrative exercise, unmoored from traditional constraints. But this is precisely where the conceptual error lies. Technology can expand what is possible — it can shift the frontier outward — but it does not eliminate the need for intertemporal coordination, nor nullify the importance of how resources are allocated.

A substantial expansion in productive capacity is entirely within reach. Advances in AI could lower costs across wide swaths of the economy, streamline production, and unlock entirely new forms of output. But greater plenty does not eliminate the need for coordination, nor does it neutralize the role of money. Prices, investment decisions, and income flows are still shaped by institutional frameworks and incentive structures, and those forces continue to operate regardless of how quickly output is growing.

If the coming decades deliver anything like the transformation being envisioned, its success will depend not only on technological capability but on how well economic systems adapt to it. Producing more with fewer inputs is a powerful development, but it does not negate the importance of sound signals in markets or disciplined allocation of capital. Expanding the money supply alongside rising output does not bypass these considerations; it interacts with them, and if handled poorly, can obscure rather than clarify the information that markets rely on. If nothing else, seeing the convergence of the thinking of generational entrepreneur Elon Musk with that of NYC Mayor Zohran Mamdani confirms that economists, myself included, need to do a far better job of communicating basic economic concepts.  

In 1959, Milton I. Roemer — a physician and pioneering health services researcher at UCLA — published a study that would influence American healthcare policy for generations. Examining hospital utilization patterns, Roemer observed a striking correlation in insured populations: the availability of more hospital beds was associated with greater numbers of hospital days used. “A built bed is a filled bed,” he concluded. This insight, known as Roemer’s Law, posited that supply tends to create its own demand. In the context of third-party payment systems, it implied that unchecked expansion of facilities would fuel wasteful overcapacity and drive escalating costs through supplier-induced demand.

This observation became the intellectual foundation for Certificate of Need (CON) laws. The core implication was that supplier-induced demand would inevitably lead to inefficient duplication and wasteful overcapacity. Roemer, a staunch advocate of social medicine, viewed the Soviet Union as embodying the healthcare system of the future — one oriented more toward equity.

CON laws are state regulatory mechanisms that require healthcare providers — hospitals, ambulatory surgical centers (ASCs), nursing homes, and others — to obtain explicit government approval before making major capital investments, expanding services, or even purchasing certain equipment.

In practice, a state health planning agency reviews applications based on bureaucratic formulas for “community need,” projected utilization, and impact on existing providers. If approved, the CON acts as a legal permission slip; if denied, the project dies. These laws do not improve safety or clinical quality — that is handled by separate licensing, accreditation, and Medicare certification processes. Instead, they function as artificial barriers to entry in the healthcare marketplace.

Origins of CON Laws

CON laws trace their origins to the 1960s, with New York enacting the first statute in 1964. The concept gained national momentum amid concerns over rising healthcare expenditures under cost-plus reimbursement systems. The federal government amplified the approach through the National Health Planning and Resources Development Act of 1974 (NHPRDA), which conditioned certain federal funding on states establishing CON programs.

By the early 1980s, every state except Louisiana had implemented some form of CON review. Congress repealed the federal mandate in 1986 after recognizing its shortcomings, yet as of 2026, approximately 30–35 states, including Alabama, retain active CON regimes.

Practical Application of CON Laws

Consider the practical implications for opening an ambulatory surgical center in a CON state. A multidisciplinary group of physicians — orthopedists, neurosurgeons, gastroenterologists, and pain specialists — identifies unmet demand in their markets: protracted wait times for outpatient procedures, higher costs in hospital outpatient departments (often 30–50% above ASC rates for equivalent cases), and opportunities for same-day discharge with superior patient experience. The physicians recognize this unmet demand because they are actively caring for these patients.

Private capital is secured, a facility is designed emphasizing operational efficiency, infection control, and specialization, and a CON application is submitted to the appropriate board. Approval hinges on demonstrating conformity with a rigid state health plan that employs formulaic metrics (population-to-provider ratios, historical utilization rates, projected demand) that frequently lag behind actual market dynamics and technological shifts.

Incumbent hospitals, whose outpatient margins subsidize other operations, routinely intervene as opponents. Formal protests trigger adversarial public hearings, extensive discovery, and protracted legal proceedings. Applicants incur legal and consulting fees often exceeding hundreds of thousands of dollars. The review board — frequently influenced by representatives of existing providers — deliberates for 12 to 24 months or longer. Even conditional approval may impose geographic or service-line restrictions. During this interval, patients endure higher costs and delays, surgeons sacrifice productivity, and scarce capital remains unproductive. This process exemplifies not rational planning, but regulatory capture and rent-seeking. Political allocation supplants consumer sovereignty.

Justification For CON Laws

Advocates of CON laws advance a primary economic rationale: preventing duplicative investments and “overcapacity” that would allegedly inflate costs through underutilized fixed assets and supplier-induced demand, while safeguarding access in underserved (often rural) areas. Without regulatory gatekeeping, they contend, unrestricted entry would fragment volume, raise unit costs, and exacerbate maldistribution of services.

Beyond cost control, CON regulation is defended as essential for safeguarding access in underserved — particularly rural — areas. Without government gatekeeping, new entrants (especially efficient ambulatory surgery centers) would “cream-skim” the most profitable cases and commercially insured patients, leaving incumbent hospitals burdened with a disproportionate share of complex, high-cost, low-margin, and uncompensated care. This fragmentation of volume would supposedly raise unit costs for remaining providers, threaten the financial viability of safety-net and rural hospitals, undermine cross-subsidization of essential services (such as emergency and trauma care), and ultimately exacerbate maldistribution of services, harming the very populations CON laws purport to protect.

This rationale is explicitly articulated by the National Conference of State Legislatures (NCSL), which states that CON programs “primarily aim to control health care costs by restricting duplicative services and determining whether new capital expenditures meet a community need,” while also seeking to ensure access for “historically underserved communities, such as rural areas.” Similar arguments appear in state-level policy analyses and hospital association positions.

Economic Theory Does Not Support CON Laws

Friedrich Hayek’s seminal 1945 essay, “The Use of Knowledge in Society,” illuminates the core epistemic failure. Economic knowledge is not centralized or articulable in a form readily aggregated by a planning board in any state capital; it is dispersed, tacit, and contextual — embodied in the localized observations of physicians, administrators, investors, and patients regarding shifting demographics, technological feasibility (e.g., minimally invasive techniques enabling safer ASC procedures), and revealed preferences via willingness to pay. The price system serves as a “telecommunications” mechanism that synthesizes this fragmented information into actionable signals far more efficiently than any bureaucratic formula. CON laws supplant these dynamic signals with static, politically mediated projections, inevitably producing misallocation: persistent shortages where entrepreneurial insight perceives opportunity, and protected excess where incumbents lobby effectively.

Milton Friedman extended this critique to occupational and market-entry licensing, arguing that such barriers function primarily as protectionist devices that restrict supply, elevate prices, and shield established interests from competition. CON regimes exemplify this at the facility level: by limiting entry, they enable incumbents to exercise greater market power, sustaining higher reimbursement rates and operational inefficiencies. As is often the case, Friedman’s analysis is borne out by the empirical evidence.

Studies document that CON states exhibit fewer ASCs per capita; repeal of ASC-specific CON requirements has been causally linked to 44–47% statewide increases in ASC supply (and 92–112% in rural areas), without corresponding rises in hospital closures or service reductions. Broader analyses reveal associations with higher variable costs in acute-care hospitals, elevated per-service and per-capita spending in many specifications, and slower adoption of cost-saving innovations. Competition disciplines providers toward value — ASCs routinely deliver equivalent or superior outcomes for appropriate cases at substantially lower cost precisely because they must attract patients and surgeons on merit rather than regulatory fiat.

A colloquial illustration underscores the absurdity.

Envision applying CON logic to the fast-food industry in a growing Alabama suburb plagued by long lines at the lone Chick-fil-A. Entrepreneurs propose a new location, financed privately, promising faster service, consistent quality, and local jobs. Under a hypothetical “Certificate of Need for Fried Chicken,” they must persuade a state board that the community “requires” additional drive-thru capacity according to utilization formulas and population ratios. Existing chains directly or indirectly impacted by this — Burger King, McDonald’s — file vigorous objections, warning of “duplicative” capacity that would force price hikes to amortize their underutilized grills and parking lots. Months of hearings, expert testimony, and six-figure legal expenditures ensue. The board denies the application, citing sufficient “nugget utilization rates.” Customers endure persistent queues and elevated prices, innovation in menu or service models stalls, and consumer welfare suffers — all justified as preventing wasteful “overcapacity in poultry processing.” The satire exposes the folly: in competitive markets, entry and exit guided by profit-and-loss signals rapidly correct misallocations; suppressing them predictably harms the very consumers purportedly protected.

In healthcare, the consequences are graver, measured in delayed care, inflated expenditures, and forgone innovations. CON laws do not merely fail on their own terms; they invert the logic of markets, substituting political knowledge for the superior coordinating power of prices and voluntary exchange. Decades of evidence — from cross-state comparisons to difference-in-differences analyses of repeals — affirm that liberalization expands supply, moderates costs, and improves access without the predicted collapse of incumbent providers.

Repeal CON.

The New York Times recently reported on a new research paper that finds that, as summarized by the Times, “the North American Free Trade Agreement and trade competition with Mexico led to earlier deaths for American factory workers.”

Specifically, the researchers found that, from NAFTA’s launch in January 1994 through 2008, mortality increased in those “commuting zones” in the continental United States that had a disproportionately large number of workers producing manufactured goods in competition with imports from Mexico. Especially hard hit in those commuting zones were men who, in 1994, were ages 25 to 44. Losing jobs as a result of the greater freedom of Americans to purchase imports from Mexico, manufacturing workers and members of their households in these hard-hit commuting zones became more likely to commit suicide, turn to drugs or alcohol, or otherwise suffer ill health that raised their chances of going early to their graves.

In short, NAFTA was deadly because NAFTA destroyed manufacturing jobs. It’s a tiny leap from this finding to the conclusion that free trade is very likely hazardous to the health of manufacturing workers and their families. And at least one of the paper’s three authors — University of Chicago economist Matthew Notowidigdo — made this leap when he told the New York Times that his research highlights an “underappreciated cost of globalization.”

The econometrics in the paper is genuinely impressive. I assume that the finding of increased mortality is accurate. But I dispute the conclusion that this rise in mortality can legitimately be said to be the result of the freeing of trade.

NAFTA Job Losses Compared to Total Job Losses

Let’s put NAFTA job losses into perspective.

The total number of jobs destroyed by NAFTA from 1994 through 2008 was minuscule compared to total job destruction over those years. The St. Louis Fed has data starting in December 2000 on total monthly layoffs and discharges — that is, for 97 of the 180 months covered by Notowidigdo, et al.’s research. During those 97 months, an average of 1.9 million American workers every month lost or were laid off from jobs they wanted to keep. 

How much of this job destruction was caused by NAFTA? The Economic Policy Institute — an outfit hostile to NAFTA — estimates that over the course of NAFTA’s first 20 years, it destroyed a total of 700,000 jobs. Even assuming that all of those 700,000 jobs were destroyed in NAFTA’s first 15 years, that’s an average monthly job loss of only 3,900 — or 0.2 percent of the average total monthly layoffs and discharges during this period.

This picture hardly changes if we compare NAFTA job losses to only manufacturing-worker layoffs and discharges. On average, 194,000 manufacturing workers lost their jobs each month from December 2000 through December 2008. NAFTA job losses, therefore, were a mere 2.0 percent of all manufacturing-job losses in those years. Ninety-eight percent of manufacturing-job losses from December 2000 through December 2008 were caused by forces other than NAFTA.
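The back-of-the-envelope arithmetic above is easy to verify. The figures are the ones quoted in the text (the EPI job-loss estimate and the St. Louis Fed layoff averages), plugged into a short sketch:

```python
# Check of the job-loss shares cited above, using the figures quoted in
# the text (EPI estimate; St. Louis Fed average monthly layoffs/discharges).

epi_total_losses = 700_000            # EPI estimate over NAFTA's first 20 years
months = 15 * 12                      # assume all losses fall in the first 15 years
monthly_nafta_losses = epi_total_losses / months   # ~3,900 per month

total_monthly_layoffs = 1_900_000     # avg monthly layoffs/discharges, all workers
mfg_monthly_layoffs = 194_000         # avg monthly manufacturing layoffs

share_all = monthly_nafta_losses / total_monthly_layoffs   # ~0.002  (0.2%)
share_mfg = monthly_nafta_losses / mfg_monthly_layoffs     # ~0.020  (2.0%)

print(round(monthly_nafta_losses), round(share_all, 4), round(share_mfg, 4))
```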

NAFTA Job Losses Compared to Earlier-Era Losses of Manufacturing Jobs

The nationwide rate of manufacturing-job loss from 1994 through 2008 — the years studied by Notowidigdo, et al. — was lower than the rate before NAFTA was implemented. Specifically, from 1958 through 1980, an average of 1.6 percent of manufacturing workers were laid off each month. Yet from 1994 through 2008, on average only 1.3 percent of manufacturing workers were discharged or laid off. (I calculated this 1.3 percent average monthly rate using available data.) Although there are no data on manufacturing-job losses from 1981 through 1993, the comparison is nevertheless revealing: over that earlier long span, manufacturing workers lost jobs at a notably higher rate than during NAFTA’s first 15 years, which puts the NAFTA experience in meaningful historical context.

If it’s true that NAFTA’s destruction of manufacturing jobs resulted in an unusually high rate of mortality among manufacturing workers, it should also be true that manufacturing workers in those pre-NAFTA years were even more likely than were manufacturing workers in the years with NAFTA to commit suicide, turn to drink or drugs, or otherwise fall into life-draining despair.

Were they? I searched hard for evidence from that earlier era on the mortality linked to job losses of manufacturing workers, but (even with the help of AI) found none. Yet I’ve also never encountered any claims that manufacturing workers in the years 1958 through 1980 were unusually likely to suffer “deaths of despair” and other life-shortening calamities. The absence of barking by this particular dog is especially telling given that, compared to the NAFTA years, both the absolute number of manufacturing workers, as well as manufacturing employment’s share of total employment, were higher in those earlier years. My tentative conclusion, therefore, is that the blame for the increased mortality identified by Notowidigdo, et al., lies with something other than the loss of manufacturing jobs — and, hence, with something other than NAFTA. (I call my conclusion “tentative” because it’s possible that someone will uncover evidence from those pre-NAFTA years of high manufacturing-worker mortality — specifically, high mortality linked to job losses. But, again, I know of no such evidence.)

What If Manufacturing-Job Loss DOES Increase Mortality?

Let us, however, assume for the moment that evidence is uncovered showing that, in those pre-NAFTA years, mortality linked to job losses of manufacturing workers was indeed unusually high. Would such evidence salvage Prof. Notowidigdo’s conclusion that the rise in mortality reported in his paper is a “cost of globalization”?

No.

The reason is that the US economy in those earlier years was much less exposed to foreign competition than it was during the NAFTA years. (From 1958 through 1980, US goods imports averaged 1.8 percent of GDP annually; from 1994 through 2008, they averaged 4.6 percent.) Those earlier manufacturing-job losses were due overwhelmingly to rising productivity. Between 1958 and 1980, real output per manufacturing worker in the US doubled — a major reason why manufacturing employment as a share of total private-sector employment fell over those years from 34 percent to 25 percent (calculated by dividing total manufacturing employment by total private-sector employment). Even with NAFTA in place, rising worker productivity remains the chief source of manufacturing-job loss — accounting, according to Michael Hicks and Srikant Devaraj, for nearly 88 percent of such job losses from 2000 through 2010.
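The link between rising productivity and shrinking manufacturing employment follows from simple arithmetic: employment equals output divided by output per worker. The output-growth figure below is an illustrative assumption (the text reports only the productivity doubling), a minimal sketch of the mechanism:

```python
# Why rising productivity sheds manufacturing jobs even as output grows:
# employment = output / (output per worker). Numbers are illustrative.

def employment(real_output, output_per_worker):
    """Employment index implied by an output index and a productivity index."""
    return real_output / output_per_worker

# Baseline: index output and productivity at 100 => employment index 1.0
e0 = employment(100, 100)

# Productivity doubles (as it did 1958-1980, per the text) while real
# output grows an assumed 60%: employment still falls by a fifth.
e1 = employment(160, 200)

print(e0, e1)   # 1.0 -> 0.8
```

No imports or foreign competition are needed for employment to fall in this sketch; productivity growth alone does the work.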

Even if manufacturing-job loss can legitimately be said to cause unusually high mortality among manufacturing workers, trade is only one source of such job loss, and a relatively minor source at that. Therefore, if one is to classify globalization as a cause of higher-than-usual mortality among manufacturing workers, one must also classify, as an even more significant cause of this mortality, labor-saving technology — and, indeed, any source of manufacturing-job loss.

Under these circumstances, singling out globalization as a source of unusually high mortality is not only misleading, but counterproductive. Doing so focuses the public’s and policymakers’ attention on a relatively insignificant source of avoidably high mortality while ignoring the chief source: rising worker productivity. If the loss of manufacturing jobs raises mortality — and if the government is intent on ensuring that manufacturing workers don’t fall into early graves — the government must prevent not only increased imports of manufactured goods, but also, and far more importantly, increases in manufacturing-worker productivity.

What politician or pundit will openly endorse such a policy?

Fortunately, there is in fact no evidence that the productivity-driven loss of manufacturing jobs in the past caused a rise in mortality. And because even today freer trade destroys far fewer manufacturing jobs than improvements in worker productivity do, it’s almost certainly incorrect to blame the job losses due to freer trade generally, and to NAFTA specifically, for any measured increases in manufacturing-worker mortality.

Whatever the Cause(s) of Higher Mortality, Free Trade Isn’t to Blame

So what are the likely causes of the rising mortality detected by Notowidigdo, et al.? Answering this question requires, as they say, further study. There are several candidates, however, of varying plausibility. These include:

  • Increased access to public and private welfare which enables people who lose jobs to remain unemployed longer, perhaps undermining their sense of self-worth.
  • Readier access to debilitating drugs, or reduced social stigma from using such drugs.
  • Increased occupational-licensing requirements which obstruct unemployed workers’ efforts to pursue new occupations.
  • The rise in land-use restrictions which raise the cost of moving to new locations with better job prospects.
  • A cultural change that either made the loss of manufacturing jobs more shameful than it was prior to NAFTA, or that drained unemployed manufacturing workers of the gumption that previous generations of unemployed workers possessed to actively search for new jobs.

Whatever the actual cause (or causes) of the rise in mortality, blaming NAFTA is incorrect given that it is only one of countless sources of job destruction, and a rather minor source. Even worse is leaping from a finding of rising manufacturing-worker mortality during NAFTA’s first 15 years to the conclusion that, for manufacturing workers generally, globalization is lethal.

The Board of Trustees of the American Institute for Economic Research (AIER), one of the oldest and most respected nonpartisan economic research and educational organizations in the United States dedicated to promoting classical liberal and free-market ideas, has appointed Dr. Samuel Gregg as its new President.

Gregg served as interim president after the previous incumbent, Dr. William Ruger, accepted the position of Deputy Director of National Intelligence in April 2025.

Terry Anker, Chair of AIER’s Board of Trustees, remarked,

We are delighted that Dr. Gregg has accepted the board’s invitation to serve as AIER’s next president. Since 2018, AIER has undergone a period of remarkable growth and expansion, much of which was led and driven by Will Ruger. He left an indelible mark upon the institute and its core mission of educating the general public, students, and policy makers on the value of free market principles and the ways in which they promote prosperity and a free society. I and the Board of Trustees believe that Samuel Gregg will bring a unique and proven combination of executive skills and scholarly achievement to AIER’s presidency as he leads AIER in its advancement of sound economic thinking in the marketplace of ideas.

“I’m honored by the trust that the Board of Trustees has placed in me to lead AIER at a time when principles of economic liberty, the rule of law, and other classical liberal commitments are under severe pressure,” said Dr. Gregg. “I’m committed to advancing these ideas to AIER’s target audiences throughout the United States and equipping our superb team with everything that they need to achieve this goal.”

Dr. Gregg added:

These are very challenging times for those who believe in free markets, limited government, and the free society. But I am confident that AIER will continue to take a leading role in making the case for economic liberty to its target audiences, and first and foremost to those everyday Americans whom AIER’s founder, Colonel Harwood, was especially committed to reaching.

AIER, which was founded in 1933 by economist and financial advisor Colonel Edward C. Harwood, is dedicated to promoting the ideas of personal freedom, free enterprise, property rights, limited government, and sound money.

Delaware Gov. Matt Meyer recently signed an executive order directing state and district agencies to work together and expedite permits for broadband and other infrastructure projects. The order aims to expand statewide internet connectivity and keep Delaware businesses competitive by reducing regulatory bottlenecks. It’s one of many state and federal initiatives to remove barriers to the deployment of next-generation broadband.

Delaware has the right idea. Reducing government overreach to unlock broadband’s potential won’t just deliver reliable, speedy and affordable internet while reducing the digital divide between our rural and urban communities. It will also support American leadership in cutting-edge data-intensive technologies, including AI, autonomous vehicles, and telemedicine, granting millions of Americans unparalleled access to economic, healthcare and educational opportunities. But policymakers still have more work to do.

In 2021, Congress voted to provide $42.5 billion to state and territory governments for deploying high-speed internet access through the Broadband Equity, Access and Deployment (BEAD) program, one of 16 federal initiatives dedicating more than $413 billion for broadband expansion. As the recent Minnesota daycare fraud illustrates, massive federal grants administered by states and localities create opportunities for waste, abuse, and inefficiencies as bureaucrats overseeing and spending taxpayer funds bear neither the risk of failure nor commercial reward for success. 

Ensuring BEAD-funded infrastructure projects meet their goals while shrewdly stewarding funds requires that governments repeal unwarranted regulatory hurdles while maintaining guardrails for accountability and public welfare. However, many states get this balance wrong. 

California requires AT&T to maintain expensive and outdated copper-wire landline networks that don’t provide competitive broadband speeds and are susceptible to hacking and copper theft. Yet 99.7 percent of served Californians can access at least three alternatives — including mobile networks and voice over internet protocol (VoIP) delivered online. The copper-wire requirement diverts funds from building and maintaining high-speed broadband infrastructure. AT&T reports that maintaining such networks nationwide costs it $6 billion annually. The company recently received federal approval to retire 30 percent of its copper-wire networks, excluding California.

With fewer households opting for landline connections, these mandates should be confined to localities where landline is the only option and should be phased out as modern networks reach them. At least 20 states have abolished or are abolishing such mandates, which conflict with the Trump administration’s commitment to “technology neutrality,” a policy that makes satellite internet providers like Starlink and Amazon LEO eligible for BEAD funding since they offer a cost-effective alternative to fiber networks in many areas. Commendably, the FCC is considering a permanent rule that would streamline approvals for providers to discontinue copper networks. The agency also plans to scrap at least 18 other “outdated and obsolete” mandates regarding everything from telegraphs to phone booths, in order to cut red tape, expedite deployment and modernize networks.

Barriers to constructing and upgrading utility poles can also stymie broadband deployment. Maine, which received $50 million in BEAD funds, recently expanded a rule allowing towns to force removal and relocation of existing poles, creating uncertainty and costs for providers. Many of the affected poles are in rural areas that could benefit the most from expanded network access. Infrastructure providers must also comply with federal, state and local permitting processes, rights-of-way approvals and environmental reviews. These are important processes, but they may be duplicative and carry inconsistent criteria and standards that increase costs. They would benefit from streamlining, better inter-agency coordination, and clearer timelines. Federal bill H.R. 2289 would address some of these issues by imposing deadlines on state and local authorities for processing permits, limiting what they can require from applicants, and limiting local fee recovery to “actual and direct costs.” Allowing for full business expensing of infrastructure investments would also lower after-tax costs and encourage new capital-intensive broadband projects without raising direct federal expenditures. Requiring transparent, competitive bidding for BEAD-funded contracts would also foster competition while limiting cronyism and government favoritism.

Cutting-edge broadband is vital for the rapid and secure movement of high volumes of data necessary to develop and execute life-changing AI models and applications. Robust and stable fiber networks foster model training, rapid inference and data center linkage while reducing latency that can render real-time tools like predictive analytics, chatbots and virtual assistants ineffective and sluggish. Latency and outages can be fatal for high-stakes applications like finance, healthcare and cybersecurity. Even fraction-of-a-second delays can make a life-or-death difference for autonomous vehicles and industrial robots. 

State and local authorities should be able to make the public interest and safety decisions on network infrastructure that they’re best placed to make. But the immense benefits of expedient network deployment, and the plethora of existing rules and mandates that fail the cost-benefit test, call for reducing bureaucracy in broadband.

The FCC can continue playing its part by reforming rules within its discretion. Federal policymakers can help by placing sensible limits on state and local regulation, and through conditioning BEAD funding to states and localities on procompetitive reforms that maximize the value of those dollars.