In an era where “democratic socialism” has gained renewed traction among politicians, activists, and intellectuals, one might assume the term carries a clear, operational meaning. Yet, a closer examination reveals a concept shrouded in ambiguity, often serving as a rhetorical shield rather than a blueprint for policy.  

Proponents often invoke it to promise equality and democracy without the baggage of historical socialist failures, but this vagueness undermines serious discourse. Precise definitions are essential for theoretical, empirical, and philosophical scrutiny. Without them, democratic socialism risks becoming little more than a feel-good label, evading accountability while potentially eroding the very freedoms it claims to uphold. 

The Historical Consensus on Socialism: State Ownership and Its Perils 

During the socialist calculation debate of the early twentieth century, which pitted Austrian economists such as Ludwig von Mises and Friedrich Hayek against socialist counterparts including Oskar Lange and Abba Lerner, the consensus definition of socialism was straightforward: state ownership of the means of production. As I demonstrate in my coauthored paper, “The Road to Serfdom and the Definitions of Socialism, Planning, and the Welfare State, 1930-1950,” this understanding was shared not only by critics but also by the socialist intellectuals of the time.  

Socialism, in this context, entailed the state directing resources through planning, often requiring ownership to fund expansive welfare programs. This definition is crucial for interpreting Hayek’s seminal work, The Road to Serfdom (1944), which posits a unique threat to democracy arising from state ownership of the means of production. Hayek argued that central planning inevitably concentrates power, leading to authoritarianism as planners override individual choices to meet arbitrary goals. Far from a slippery slope toward any government intervention, Hayek’s warning targeted the specific dynamics of state-owned economies, where the absence of market prices stifles the flow of information and the structuring of incentives, ultimately endangering democratic institutions. Using this definition, my coauthors and I, in our paper “You Have Nothing to Lose but Your Chains?” empirically test and confirm Hayek’s hypothesis that democratic freedoms cannot be sustained under socialism.  

Economists working in this tradition, from Mises to contemporary scholars, retain this rigorous definition. It serves as a foundation for understanding why socialist systems have repeatedly faltered: without private ownership of the means of production, rational economic calculation becomes impossible, resulting in waste, shortages, and coercion.  

The Vagueness and Contradictions of Modern Socialist Rhetoric 

Contrast this clarity with the approach of many contemporary socialists, including those advocating democratic variants. Definitions of socialism often shift, praised in moments of perceived success and disowned when failures mount. This pattern is not new; it has recurred across a range of historical experiments, from the Soviet Union to Venezuela. Kristian Niemietz’s Socialism: The Failed Idea That Never Dies offers a comprehensive review of socialist rhetoric that highlights this inconsistency: regimes are initially hailed as “true,” “worker-led,” and “democratic” socialism, only to be retroactively labeled as distortions or “state capitalism” once repression and economic stagnation emerge.  

When Hugo Chávez introduced socialism in Venezuela in 2005, he claimed that he was reinventing socialism to avoid the outcomes of the Soviet Union, stating that Venezuela would “develop new systems that are built on cooperation, not competition,” and that it “cannot resort to state capitalism.” Bernie Sanders famously endorsed this socialism, saying that the American dream was more likely to be realized in places like Venezuela and calling the United States a banana republic in comparison. Nobel laureate economist Joseph Stiglitz was quick to point out the “very impressive” growth rates and the eradication of poverty. But socialism in Venezuela, according to the state-ownership-of-the-economy measure from the Varieties of Democracy project, corresponded to the classic definition of socialism, leading to the very blackouts, empty grocery shelves, and suppression of political freedom socialists explicitly sought to avoid.   

This vagueness extends to democratic socialism today. Proponents often speak in lofty terms, such as “workplace democracy,” without specifying policies. Such abstractions allow evasion of empirical evidence. By rendering the concept unfalsifiable, socialists can dismiss critiques as attacks on straw men, perpetuating debates that stall progress. If democratic socialists insist on reclaiming the term “socialism,” as distinct from the technical term used by economists, the burden falls on them to explicitly state their divergence and provide a concrete definition amenable to empirical testing. 

The Imperative of Precision for Empirical and Philosophical Inquiry 

A precise definition is not mere pedantry; it is the prelude to meaningful investigation. To enable cross-country comparisons, socialism must be defined through specific policies, not vague platitudes. What exact measures constitute this vision? Some socialists point to the Nordic countries as their model, but these countries have important differences among them. And if a country is the model, then democratic socialists must consistently advocate for all of that country’s policies, including those that might contradict their ideals, such as flexible labor markets or low corporate taxes. The Nordic countries, as measured by state ownership of the economy, are capitalist. Similarly, as measured by the Fraser Institute’s Economic Freedom of the World index, they are also among the most economically free.   

Empirical literature in economics often examines the effects of specific policies in isolation, separate from the discussion of comparative economic systems, revealing trade-offs often ignored by democratic socialists. Minimum wage laws, for example, often supported by unions, can reduce employment opportunities, particularly for low-skilled workers and minorities. Prevailing wage requirements, also pushed by unions, may inflate costs and exclude smaller firms, suppressing economic mobility and producing racially disparate economic impacts.  

Philosophical debates demand equal rigor. Consider unions, a cornerstone of many democratic socialist platforms. Do proponents support secret ballots, which protect workers from intimidation during union votes, or do they favor open balloting, where each worker’s choice is visible to organizers and employers alike? Exempting unions—as labor cartels—from antitrust laws raises concerns: why allow monopolistic practices that could hike prices and limit competition, regressively harming consumers? If a national or subnational electorate democratically enacts right-to-work laws, preventing closed-shop unions, should this override a workplace vote? Such questions expose potential anti-democratic undercurrents, where “worker democracy” might privilege special interests over broader societal choice. 

These inconsistencies highlight a deeper issue: democratic socialism often conflates social democracy – market economies with robust safety nets – with true socialism, diluting the latter’s radical edge while inheriting its definitional baggage. Without clarity, it risks repeating history’s errors, where good intentions devolve into coercion. 

Toward Clarity and Accountability 

Democratic socialism’s appeal lies in its promise of equity without tyranny, but its vagueness invites skepticism. Only by adhering to historical definitions and demanding specificity can we foster advancement in these debates. What policies do democratic socialists argue for exactly? How will they avoid the pitfalls of past experiments in socialism, which often started with the noblest of intentions? Until answered, democratic socialism remains an elusive mirage.  

The Trump administration is making good on its promise to shrink the bloated federal bureaucracy, starting with the Department of Education. Education Secretary Linda McMahon recently announced that her department has signed six interagency agreements with four other federal departments – Health and Human Services, Interior, Labor, and State – to shift major functions away from the Education Department.  

Under these agreements, elementary and secondary education programs, including Title I funding for low-income schools, move to the Department of Labor, along with postsecondary education grants; Indian Education programs move to the Interior Department; foreign medical accreditation and child care support for student parents move to Health and Human Services; and international education and foreign language studies move to the State Department. In each case, the functions land in agencies better equipped to handle them without an added layer of bureaucratic meddling. 

Interagency agreements, or IAAs, aren’t some radical invention. They’re commonplace in government operations. The Department of Education already maintains hundreds of such pacts with other agencies to coordinate on everything from data sharing to program implementation. What makes this move significant isn’t the mechanism – it’s the intent. By offloading core duties, the administration is systematically reducing the department’s scope, making it smaller, less essential, and easier to eliminate altogether. This approach is the next logical step in a process aimed at convincing Congress to vote to abolish the agency entirely. 

Remember, the Department of Education was created by an act of Congress in 1979, so dismantling it requires congressional action. In the Senate, that means overcoming the filibuster, which demands a 60-vote supermajority. Without it, Republicans would need a handful of Democrats to cross the aisle – or they’d have to invoke the “nuclear option” to eliminate the filibuster for this legislation.  

Conservatives have wisely resisted that temptation. Ending the filibuster might feel expedient now, but it would set a dangerous precedent, allowing Democrats to ram through their big-government agendas – like expanded entitlements or gun control – with a simple majority the next time they hold power. It’s better to build consensus and preserve the procedural safeguards that protect limited government. 

The Trump team’s strategy is smart: It breaks down the bureaucracy piece by piece, demonstrating to the public and lawmakers that other agencies can handle education-related workloads more efficiently. Why prop up a standalone department riddled with waste when existing structures can absorb its functions? The administration’s approach goes beyond administrative housekeeping to serve as proof of concept that education policy belongs closer to home, not in the hands of distant D.C. officials. 

Of course, the only ones howling about sending education back to the states are the teachers unions and the politicians in their pockets. Groups like the National Education Association (NEA) and the American Federation of Teachers (AFT) thrive on centralized power. It’s easier for them to influence one federal agency where they’ve already sunk their claws than to battle for control across 50 states and thousands of local districts.  

We’ve seen this playbook in action. During the COVID-19 pandemic, unions lobbied the Centers for Disease Control and Prevention – another federal entity – to impose draconian guidelines that made school reopenings nearly impossible. They held children’s education hostage, demanding billions in taxpayer-funded ransom payments through stimulus packages. 

The unions’ power grab isn’t new. The Department of Education itself was born as a political payoff. Democrat President Jimmy Carter created it in 1979 to secure the NEA’s endorsement for his reelection bid. It’s no secret that teachers unions have long controlled Democrat politicians, but even some Republicans aren’t immune.  

Rep. Brian Fitzpatrick (R., Pa.) came out swinging against dismantling the department, claiming it was established “for good reason.” That “good reason” apparently includes his own ties to the unions. Fitzpatrick is the only Republican in Congress currently endorsed by the NEA. Back in 2018, the NEA even backed him over a Democrat challenger. Over the years, he’s raked in hundreds of thousands of dollars in campaign contributions from public-sector unions. Is it any wonder he’s against Trump’s plan?  

Meanwhile, more than 98% of the NEA’s political donations went to Democrats in the last election cycle, yet less than 10% of its total spending went toward representing teachers. Follow the money, and you’ll see why federal control suits them just fine. 

Sending education to the states would empower local communities, where parents and educators know best what’s needed. It would also mean more dollars reaching actual classrooms instead of lining the pockets of useless bureaucrats in Washington. Federal education spending gets skimmed at every level, with administrative overhead siphoning off funds that could buy books, hire teachers, or upgrade facilities. 

Critics claim abolishing the department would gut protections for vulnerable students, but that’s a red herring. Federal special-needs laws, like the Individuals with Disabilities Education Act, predated the department and can continue without it. Civil-rights enforcement in schools doesn’t require a dedicated agency; the Justice Department and other entities already handle similar oversight. Moreover, the word “education” appears nowhere in the US Constitution. The department’s very existence arguably violates the 10th Amendment, which reserves powers not delegated to the federal government to the states or the people. 

The evidence against federal involvement is damning. Since the department’s inception, Washington has poured about $3 trillion into K-12 education. Achievement gaps between rich and poor students haven’t closed, and in many cases, they’ve widened. Overall academic outcomes have stagnated or declined. Per-student spending, adjusted for inflation, has surged 108% since 1980, yet test scores remain flat. The US spends more per pupil than nearly any other developed nation, but our results are an international embarrassment. 

The Trump administration has already taken decisive action to chip away at this failed experiment. They’ve slashed millions in diversity, equity, and inclusion grants that promote division rather than learning. Thousands of department employees have been let go, streamlining operations and cutting costs. The unions are probably gearing up to sue over these latest interagency agreements. But they tried that before – challenging the administration’s personnel reductions – and lost at the Supreme Court. The chief executive has clear authority to manage the executive branch, and the unions would likely face another defeat if they push this latest move to litigation. 

It’s time to end the charade. The Department of Education focuses on control rather than helping kids. By dispersing its functions and proving the sky won’t fall, the Trump team is paving the way for real reform. America’s students deserve better than a federal fiefdom beholden to special interests. Let’s send education back where it belongs: to the states, the localities, and the families who know their children best. 

Thanksgiving draws people, regardless of race or creed, together around a table heavy with food and laughter. At its center sits a golden turkey, but it’s the sides (mashed potatoes, green beans, stuffing, and gravy) that spark the most excitement. American football murmurs from the television as plates and hands cross the table, passing dishes with the casual choreography of family life.

This is, in spirit, the very scene Frédéric Bastiat once imagined when he marveled at how Paris was fed each morning. “It staggers the imagination,” he wrote, “to comprehend the vast multiplicity of objects that must pass through its gates tomorrow… And yet all are sleeping peacefully at this moment.” No single mind coordinates the miracle and yet, it happens.

Thanksgiving is the modern version of Bastiat’s wonder. What we see is the feast itself: Mom and Nana pulling the turkey from the oven. The Department of Agriculture reports that roughly 46 million turkeys, about the population of Spain, are eaten every Thanksgiving. The extended family that arrives hours before the meal is ready is joined by 1.6 million people who travel on Thanksgiving. Dad and his child switch between American football and the Macy’s Thanksgiving Day Parade, joining the more than 100 million viewers who tune in each year, coordinated across satellites, networks, advertisers, and camera crews, so that the same spectacle can play out in millions of living rooms at once. 

What remains unseen are the invisible threads of cooperation that make the Thanksgiving table possible. Long before the turkey reached the oven, farmers in Iowa, Nebraska, and Arkansas were raising it, relying on feed grown by other farmers and transported by rail from thousands of miles away. The green beans and sweet potatoes come from networks of growers, processors, and distributors whose work depends on forecasts, algorithms, and trade routes most of us never think about. Truck drivers cross state lines to deliver ingredients to logistics managers who ensure that shelves stay stocked. Every piece comes together until someone realizes the cranberry sauce is missing. Last-minute panic sets in, and a quick dash to the grocery store follows.

Today such a trip isn’t seen with wild wonder. But in 1989, during a policy shift called perestroika, or restructuring, the USSR sent a delegation to thaw relations with the United States. Alongside a tour of NASA’s Johnson Space Center in Texas, the foreign delegation made an unscheduled stop at a Randalls Supermarket. Among them was future Russian President Boris Yeltsin, who, astonished by the variety of foods, claimed, “Even the Politburo doesn’t have this choice. Not even Mr. Gorbachev.” The visit left Yeltsin at a loss for words: “I think we have committed a crime against our people by making their standard of living so incomparably lower than that of the Americans.” 

From the Bolshevik revolution of 1917 onward, the Soviet state endured famine with grim regularity. The Volga famine of 1921–1922 claimed between five and seven million lives. A decade later, the Holodomor of 1932–1933 starved another five to eight million, and after World War II, the famine of 1946–1947 took roughly two million more. Each disaster was born not of nature but of policy: central planning, forced collectivization, and the state’s determination to control production.

By the 1990s, the pattern of scarcity persisted, mocking the propaganda that declared, “Life has become easier, comrades; life has become happier.” In April 1991, bread prices rose 300 percent, beef 400 percent, and milk 350 percent. Shortages grew so severe that Soviet President Mikhail Gorbachev appealed to the international community for humanitarian aid, with officials admitting that the USSR had “flung itself around the world, looking for aid and loans.”

Shipments of frozen chicken, nicknamed “Bush legs” after President George H. W. Bush, were flown in to feed the population. The image carried an irony history could not have scripted better: just decades earlier, at the height of the Cold War in the 1950s, Soviet leader Nikita Khrushchev had thundered before Western diplomats, “About the capitalist states, it doesn’t depend on you whether or not we exist. If you don’t like us, don’t accept our invitations, and don’t invite us to come see you. Whether you like it or not, history is on our side. We will bury you.” Yet by the end of the century, the USSR that vowed to bury the West was surviving on American poultry—in other words, on capitalist chicken. The spiraling crisis soon escalated into nationwide strikes and protests demanding the end of the system itself. By Christmas Day, December 25, 1991, the Soviet Union dissolved, undone by the same command economy that had once promised to abolish hunger.

Even the most ardent bureaucrats, armed with vast tracts of farmland and central plans, could not guide the Soviet Union into prosperity, let alone feed its people. Yet the urge to direct, ration, and manage markets never disappears; it only changes its accent. 

Today, in New York City, the beating heart of global finance, the temptation to fix the market endures. Mayor-elect Zohran Mamdani has proposed government-run grocery stores as “a public option for produce,” arguing that too many New Yorkers find groceries out of reach. His plan would cost roughly $60 million, financed through higher corporate taxes at 11.5 percent and a new 2 percent levy on those earning over a million dollars a year. A recent experiment with a government-run grocery store in Kansas City failed, leaving local taxpayers with a $750,000 bill. New York’s food culture, meanwhile, already rests on some 13,000 independent bodegas: small, adaptive enterprises that thrive precisely because they respond to local needs. A state-run grocery network would not only crowd them out, but also make the city more vulnerable to the very shortages it hopes to prevent.

Thanksgiving is a yearly proof of concept for liberty: a society of free individuals coordinating better than any plan could dictate. From Moscow to New York, the lesson remains the same. The miracle of prosperity does not flow from ministries or mayors, but from the voluntary cooperation of ordinary people who produce, trade, and trust one another. 

The Soviet Union collapsed because it tried to command what can only be discovered, the daily knowledge of millions working freely. New York, for all its wealth, risks forgetting that lesson each time it trades competition for control. The feast that fills our tables each November is more than a meal; it is civilization itself, renewed by freedom and gratitude. Each Thanksgiving feast reminds us that civilization’s greatest miracles are not decreed; they are cooked, carried, traded, and shared by free people every day.

It is important to celebrate victories for economic freedom as they emerge, even when they come in the most peculiar of places. One such place is the racing world. 

In October, North Carolina Governor Josh Stein signed into law HB 926, the “Right to Race” law. This new measure shields racetracks from noise-related nuisance lawsuits if the facility existed and was permitted before nearby properties were developed. It is an incredible win for economic freedom against NIMBYs who demand that roaring engines be silenced after they themselves chose to move next to a racetrack. North Carolina has definitively answered “no” to the question: Should those who knowingly move next to a racetrack be allowed to use the government to quiet it?

As areas around the country redevelop, with rural areas becoming suburbs and suburbs evolving into de facto metropolises, racetracks have found themselves the target of those moving into these newer developments. Racetracks that long predate the existence of these neighbors are facing legal action by the newcomers. 

For example, in Tennessee, the Nashville Fairgrounds Speedway, which first opened in 1904, now faces opposition from residents as the track seeks to renovate in order to lure NASCAR racing back to the facility. Despite being there first, and despite their positive economic impact, many tracks find themselves without legal protection from the locals. 

In response, some states have taken action. Over the summer, Iowa Governor Kim Reynolds signed HF 645, which protects racetracks, such as the famed Knoxville Raceway, from the constant threat of litigation from their neighbors, provided the track preexisted the neighboring property’s purchase or development.

Last month, North Carolina followed suit. Racing has been embedded in the state for almost a century, with tracks like Bowman Gray Stadium (1937), North Wilkesboro Speedway (1947), Hickory Motor Speedway (1951), and Charlotte Motor Speedway (1960) becoming renowned venues of racing around the world. 

There is a long-held legal, philosophical, and economic position: first use establishes right. This is how we arrive at property rights, through homesteading. These rights secure not only possession, but also established and peaceful use. In interaction with property rights, we reach “coming to the nuisance.” In this common-law doctrine, you consent to the effects of an existing activity if you choose to move next to it. It is your choice of proximity that entails your acceptance of the conditions. 

In “Law, Property Rights, and Air Pollution,” economist Murray Rothbard makes this point with the example of an airport. Prior use generates a legitimate easement-like claim in sound or emission. In the case of the airport, Rothbard writes, “The airport has already homesteaded X decibels worth of noise. By its prior claim, the airport now owns the right to emit X decibels into the surrounding area.” 

This simple point illustrates the following: when an airport operates openly for years, it “homesteads” its noise. The sound waves become part of its legitimate use, and newcomers consent when moving next door. The same logic applies here to racetracks. Their races, the noise of the engines, do not violate anyone’s rights — they exercise, instead, preexisting ones. What seems like an abstract theory is expressed in clear statutes like those in North Carolina and Iowa.

These states are translating the principles of homesteading into positive law, at least in the defense of racing. In effect, they take what Rothbard described as a natural rights easement — earned through peaceful, longstanding use — and make it explicit law. What these laws help clarify is the difference between preferences and rights. When preferences override rights, it signals institutional instability: rights become negotiable. Nashville stands as a cautionary counterexample. Without statutory protection, the Fairgrounds Speedway is vulnerable to neighborhood pressures that could lead to the violation of the track owners’ rights.

With any luck, North Carolina and Iowa will not be outliers, but the start of a broad legal correction. When courts and city councils prefer what might be called “aesthetic interventionism,” where neighbors’ preferences, not owners’ rights, dictate outcomes, they create uncertainty regarding property rights, threatening the very foundation of a free economy. When property owners and entrepreneurs can rely on institutional stability, they can invest with confidence in the future. Without such confidence, the resulting erosion of trust deters economic growth. 

These “Right to Race” laws push back against this drift, restoring predictability to this segment of the market. These racetrack cases are only a small, visible example, but the same logic applies to other industries in various ways, such as nightlife ordinances and noise complaints for musicians. The order of homesteading matters, and these laws help preserve the space for voluntary exchange. 

The sound of dozens of racecars flying around North Wilkesboro or Charlotte may not be music to everyone’s ears, but it represents something deeper than sport. It is the roar of property rights at work — the anchor of fairness, stability, and freedom. States like North Carolina and Iowa have protected not only racing, but the freedom that depends upon stable expectations. 

Nashville’s ongoing fight, on the other hand, shows what happens without such clarity. When rights are negotiable, every market action becomes provisional. Economic freedom demands the simple rule that those who came, acted, and homesteaded first have property rights. Sometimes, that means protecting racetracks. Thank you, North Carolina and Iowa.

In a recent analysis gone viral, financial blogger Michael W. Green traced how modern American families can earn anywhere from $40,000 to $100,000 and still fall further behind. The argument is devastatingly simple: the mathematical parameters defining “poverty” are built upon a benchmark drawn in 1963, multiplied by three, and only lightly adjusted for inflation. Everything else — childcare, healthcare, housing, transportation, and the structural design of the welfare state — has transformed beyond recognition. The result is a system in which the official poverty line tells us less about deprivation than it does about starvation. And once you trace the math, the inescapable metaphor emerges: America’s working households require escape velocity to break free from the gravitational well of modern costs of living.

In physics, escape velocity is the minimum speed needed to break free from a body’s gravitational pull. Below that threshold, every burst of energy merely bends the trajectory and drops the object back into orbit. The same dynamic now governs mobility in the United States.
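For reference, the physics behind the metaphor is a single standard formula: setting an object’s kinetic energy against the gravitational potential energy it must overcome gives

```latex
\tfrac{1}{2} m v_e^{2} = \frac{G M m}{r}
\quad\Longrightarrow\quad
v_e = \sqrt{\frac{2 G M}{r}}
```

Any launch slower than v_e eventually falls back, however energetic it looks on the way up.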

Using conservative assumptions, a bare-bones “participation budget,” the minimal cost necessary for a household to work, raise children, and avoid freefall, is roughly $136,000 to $150,000. That figure doesn’t represent luxurious living; it’s the updated application of Mollie Orshansky’s original method, which assumed food was one-third of a household’s budget. Today, food is closer to 5 to 7 percent of household spending, and the real multipliers reside in the unavoidable costs of existing in a post-industrial service economy. The system still uses the original 1963 architecture, so the “poverty line” is measured as if housing, childcare, and healthcare still operated as they did during the Kennedy administration.

Below this new-era threshold, income gains are eaten by benefit cliffs: the loss of Medicaid, SNAP, and childcare subsidies, accompanied by sudden, full exposure to market prices in sectors that the United States has spent decades distorting through subsidies, mandates, and regulatory sclerosis. A family can leap from $45,000 to $65,000 in earnings and end up poorer, because the system confiscates more than 100 percent of that incremental income. From that perspective, it’s not irrational to stay put rather than aggressively seek higher earnings that will only bring more hardship and deprivation.
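The cliff arithmetic can be made concrete with a stylized sketch. All cutoffs and benefit amounts below are hypothetical, chosen only to illustrate the mechanism, not actual program rules:

```python
# Stylized benefit-cliff illustration (hypothetical thresholds and amounts).
def benefits(income):
    """Total annual benefits a family receives at a given earned income."""
    total = 0
    if income < 50_000:   # hypothetical Medicaid-style cutoff
        total += 12_000
    if income < 55_000:   # hypothetical SNAP-style cutoff
        total += 6_000
    if income < 60_000:   # hypothetical childcare-subsidy cutoff
        total += 10_000
    return total

def net_resources(income):
    """Earned income plus benefits: what the family actually lives on."""
    return income + benefits(income)

low, high = 45_000, 65_000
gain = high - low                      # 20,000 more earned...
lost = benefits(low) - benefits(high)  # ...28,000 in benefits forfeited
print(net_resources(low))              # 73000
print(net_resources(high))             # 65000
print(f"effective marginal tax rate: {lost / gain:.0%}")  # 140%
```

The family earning $20,000 more ends up $8,000 worse off; every cliff crossed in the jump claws back more than the raise delivered.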

Using the 1963 poverty line today is like measuring the distance from Earth to the moon with a yardstick whose markings have been sandblasted away. It ensures two outcomes. First, because the benchmark is too low, benefits are means-tested too early. The ladder gets sawed off halfway up. The poor face marginal tax rates that would make a hedge fund blanch, and the working poor find that one extra dollar of income can trigger thousands of dollars in lost benefits. The mathematics are inherently punitive, punishing upward mobility and the productive instincts that animate it.

Second, persistent inflation, especially in non-discretionary categories, reshapes the spending basket faster than the poverty formula can adjust. This is not purely the result of supply-and-demand fundamentals. It is a direct consequence of decades of monetary expansion, financial repression, interest-rate suppression, and regulatory barriers that choke off supply in housing, healthcare, education, and childcare. When the Federal Reserve aims to stabilize macroeconomic aggregates, it also inadvertently distorts the production of the essential goods that determine whether a family can remain afloat. Price levels matter for survival even if economic science has come to prefer analyzing rates of change.

A similar mismatch between past prices and present reality — the real versus nominal divide — haunts the financial system. The $10,000 reporting requirement for bank transfers was created in the early 1970s, when $10,000 represented a down payment on a house. Today it represents two or three months’ rent in many cities — or a single dental emergency. Inflation has quietly turned an anti-money-laundering threshold into a mass-surveillance dragnet for normal people performing normal transactions. That same inflation, coupled with outdated benchmarks, now leaves poor American families statistically invisible and brutally repels attempts at upward mobility.

When escape velocity is $140–$150k, and the effective marginal tax rate is 80–120 percent, buying scratch-off tickets ceases to be obviously irrational. One needs a tremendous economic leap of roughly $100,000 a year to continue living without disruption. In a nonlinear system with cliffs and arbitrary phase changes, a low-probability high-payout gamble can be mathematically defensible. Tilting at heavy-tailed payoffs is not illogical; it is a response to a payoff structure policymakers engineered.
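The logic can be sketched with hypothetical figures: under a cliff schedule, the marginal return to a small raise is deeply negative, while only a single discontinuous leap pays off. The cutoff and benefit amounts below are illustrative assumptions:

```python
# Hypothetical cliff schedule: full benefits below a cutoff, nothing above.
def net_resources(income, cutoff=50_000, benefits=24_000):
    return income + (benefits if income < cutoff else 0)

# A small raise across the cutoff has a hugely negative return:
small_raise = net_resources(51_000) - net_resources(49_000)   # 51,000 - 73,000 = -22,000

# A single large leap clears the cliff zone and pays off:
big_leap = net_resources(145_000) - net_resources(45_000)     # 145,000 - 69,000 = 76,000

# Incremental effort is punished; only a discontinuous jump is rewarded,
# which is why a low-probability, high-payout gamble can beat grinding
# upward one raise at a time.
print(small_raise, big_leap)
```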

A likely response, politically, is to suggest simply lifting eligibility all the way up to the true cost-of-living threshold. But indexing benefits to the real cost of American life would balloon federal outlays by trillions. Extending Medicaid, SNAP, housing subsidies, and childcare credits to households making $140,000 would produce deficit dynamics that would make the 2020–2021 stimulus era look mild and restrained. The welfare state is already actuarially fragile; expanding it to cover half the US population would collapse it. Still, three relatively simple reforms could help restore a sane poverty escape velocity:

  • Use a modern participation-budget approach, not a 1963 grocery multiple. If there is to be a social safety net, it should rely on means testing that phases out smoothly rather than falling off cliffs.
  • Deregulate housing, healthcare, childcare, and education: the sectors where supply is most strangled by regulation. Deregulation — particularly zoning, certificate of need laws, licensing, and insurance mandates — would create downward price pressure far more powerful than subsidies.
  • The Federal Reserve’s century-long experiment with cheap money has inflated asset prices, destroyed purchasing power, raised the cost of entry into middle-class life, and widened the gap between wages and participation requirements. A quick fix could be rendered by shifting from discretion to a rules-based monetary regime (whether Taylor-style, commodity-linked, or another transparent, market-tested anchor) to stabilize prices and reduce the boom-bust cycles that erode household stability.

America’s primary poverty crisis is not moral failure, laziness, or poor financial literacy. It is math. A system built on 1963 assumptions cannot function in a 2025 reality. Until the parameters shift, which is to say until lawmakers acknowledge the true cost of participation, that escape velocity will remain impossibly out of reach for tens of millions. The tragedy is not that people are failing; it is that the system is calibrated for a world that has not existed in over three generations. There is no reform, no genuine improvement in the condition of the poor, no revival in the living standards of consumers — or of any American who works — without monetary reform beginning at the very top, with the Federal Reserve.

In most sectors of the American economy, we celebrate the moment when insiders break away to build something better. Engineers start their own firms. Chefs open their own restaurants. Innovators leave incumbents and test their mettle in the market. Only in US healthcare do we treat that entrepreneurial impulse as a threat worthy of prohibition. 

Section 6001 of the 2010 Affordable Care Act froze the growth of physician-owned hospitals (POHs) by barring new POHs from getting paid by Medicare and Medicaid, and by restricting the expansion of existing POHs. It did not ban POHs outright, but it had roughly the effect of a ban; after years of growth, the number of POHs in the US abruptly plateaued at around 230–250, and practically no new POHs have opened since 2010.

Supporters of the ban on POHs say it is needed to prevent conflicts of interest, cream-skimming, and overuse.

One argument is that without such a ban, POHs would cherry-pick the healthier and more profitable patients, leaving other hospitals with sicker and more costly patients. There is some evidence that physician-owned specialty hospitals tend to attract healthier patients and tend to focus on lucrative service lines. But why does that justify a ban on POHs? Specialization is one way that entrepreneurs create value. Cardiac centers, orthopedic hospitals, and focused surgical facilities exist precisely because repetition and standardization can improve outcomes and reduce costs. Specialty hospitals can even exert a positive influence on surrounding general hospitals to improve quality and reduce costs for everyone. 

Another argument is that uncontrolled self-referral would result in the overutilization of services and a rise in healthcare spending. Overutilization is a major contributor to wasteful spending in healthcare, which has been estimated to account for approximately 25 percent of total healthcare spending, or between $760 billion and $935 billion nationwide. The reasoning is that if physicians are able to refer patients internally for services, procedures, and tests, then physicians will cease to exercise careful cost control. This, however, is more of an indictment of the current price and payment systems than an indictment of physician ownership. By setting prices via committee instead of relying on genuine market prices, policymakers have created in Medicare and Medicaid a gameable system that rewards volume. The response to poorly designed reimbursement mechanisms should be to fix the mechanisms, not blame ownership models.

The POH issue illustrates how, in a mixed economy, controls beget controls. To keep the program politically popular, Medicare’s pre-payment review and protections against waste are generally less stringent than those found in the private insurance world. Given that context, preventing physicians from referring patients to the entities they own can seem like a sensible check against waste and abuse.

In a more market-driven system, however, the problem would evaporate without the need for a ban on POHs. Individuals (or their plan sponsors) would control more of their healthcare dollars; prices would be transparent and site-neutral; and hospitals and physician-led facilities would compete on bundled prices, warranties, and measured outcomes. The alleged perils of physician ownership would be addressed through competition and reputation. Insurers and self-funded employers would exercise discipline on overuse through selective contracting, reference-based pricing, and value-based payments, and patients would reward cost-effective specialists. 

In a free-market system, a physician’s ownership stake in a hospital is no more a threat to the taxpayer than a chef’s ownership stake in a restaurant is to an individual looking for a good place to dine. 

Often in US health policy, we are in the position of needing to make multiple fixes simultaneously in order to take a real step forward. Philosophically, the ban is indefensible. Physicians should be as free as any other professionals to become entrepreneurs and form, finance, and run institutions. Entrepreneurship should not require special permission. In nearly every other industry, the very engine of specialization, quality improvement, and cost discipline is entrepreneurship. Entrepreneurial profit is a reward for foresight, innovation, and service. But prior policy decisions give the ban a veneer of justification.

If we let the POH ban stand, then incumbency triumphs over innovation, with large hospital systems holding a legislated shield against potential competitors. If we lift the ban but make no accompanying changes, some fleecing of the taxpayer could occur.

We ought to lift the ban on POHs while simultaneously making reforms that let individuals control more of their own healthcare dollars. This would incentivize physicians to compete on value, mitigating concerns about overutilization.

One way to do this is to pair the repeal of the POH ban with payment neutrality and consumer control. This would end the artificial price differences that federal policy has assigned to different sites of care. MedPAC has long recommended site-neutral payment to strip away hospital markups for services that can be safely delivered in lower-cost settings. Efficient entrants will thrive by being better at care, instead of being better at “location arbitrage.”

Another way to do this is to put more real dollars under patient control. Empowering individuals with flexible accounts — yes, even in the Medicare and Medicaid contexts — would guard against overutilization. Evidence shows that when consumers face prices and control the marginal dollar, spending becomes more disciplined. This could be the proving ground for broader reforms involving the pairing of portable health savings accounts with catastrophic coverage in the Medicare and Medicaid populations.

Maintaining the ban on POHs is wrong. It denies clinicians the freedom to build their own institutions, and it denies patients the freedom to choose them. However, simply repealing the ban without making any other changes could open the door to overutilization at the expense of taxpayers, which is why we should pair the lifting of the ban with other changes. We should protect voluntary exchange among free individuals, while taking steps to align incentives so that patients, not political pull, direct the flow of dollars.

The longest federal shutdown in US history has created deep gaps in the flow of economic data, preventing calculation of the Business Conditions Monthly indices. Most BCM components depend on federal statistical agencies (the US Bureau of Labor Statistics, Census Bureau, Bureau of Economic Analysis, and the Federal Reserve) that were unable to collect, process, or publish October 2025 data. As a result, critical indicators such as payroll employment, labor force participation, consumer price index, industrial production, housing starts, retail sales, construction spending, business inventories, factory orders, personal income, and several Conference Board composites remain unavailable or were published without the sub-series needed for BCM methodology. Agencies have already confirmed that several October datasets were never collected and cannot be reconstructed. And while a handful of private and market-based measures (University of Michigan consumer expectations, FINRA margin balances, heavy truck sales, commercial paper yields, and yield-curve spreads) continued updating normally, the BCM cannot be produced unless all 24 components are available for the same month; missing even one Census or BLS series renders the entire month unusable.

Because the October data will not be produced, that month is permanently lost for BCM purposes. The indices can resume only once federal agencies complete their post-shutdown catch-up work and release full, internally consistent datasets for the next available month in which all 24 BCM components exist. Even once resumption begins, calculations based on the first complete month may reflect a gap that renders that initial reading economically suspect. Based on current release schedules, the earliest realistic timeframe for restoring the BCM is early 2026, once a complete set of post-shutdown data is again available. 

This new data void is a graphic illustration of how short-term, error-prone, and erratic US economic policy has become, echoing earlier episodes such as the “transitory” miscalculation of 2021, the ruinous and clumsily handled pandemic responses, and the panicked Fed rate hikes of 2022–2023, which resulted in a minor banking crisis.

Discussion, October – November 2025

September’s inflation data (released October 24th) offered a rare clean signal in an otherwise muddied environment, confirming a broad though modest cooling in both headline and core CPI before the federal shutdown froze statistical agencies. Headline CPI rose 0.31 percent and core 0.23 percent, both softer than expected, with year-over-year core easing to 3.0 percent. Goods inflation softened, helped by declining vehicle prices and deflation among low–tariff-exposure categories. Core services, meanwhile, slowed sharply on a sizable drop in shelter inflation. Firms continued to pass through roughly 26 cents of every dollar of tariff costs, leaving price pressures elevated but stable, and diffusion indices showed slightly narrower breadth, with fewer extreme increases or declines. Combined, these data reinforced market expectations for another rate cut in December.

Producer price data (released November 25th) painted a similar picture of contained underlying pressures, reinforcing the disinflationary tilt suggested by the CPI. September headline PPI firmed to 0.3 percent on an energy spike, but core PPI rose only 0.1 percent, below expectations, and categories feeding into the Fed’s preferred core PCE gauge were mixed. Portfolio-management fees fell sharply, medical services posted uneven readings, and airfares jumped, suggesting pockets of resilient discretionary spending. Overall producer-side inflation remained tame. Prices for steel and aluminum products covered by Section 232 tariffs have risen about 7.6 percent since March yet appear to be leveling off, supporting the observation that tariff-driven pressures are largely one-time rather than accelerating. The challenge ahead is that October CPI and several subsequent releases will be heavily compromised: two-thirds or more of price quotes were never collected during the shutdown, forcing the Bureau of Labor Statistics to rely on imputation well into spring 2026. As a result, September’s moderate inflation reading may be the last clean data point for months, complicating the Fed’s ability to gauge true disinflation progress even as markets continue to anticipate further easing.

Against this backdrop, the Fed entered its October 28–29 meeting with more uncertainty than usual and opted for the path of least resistance: cutting rates by 25 basis points and announcing that quantitative tightening via the balance sheet will be dialed back starting on December 1st, citing tightening liquidity conditions and a lack of reliable data as the shutdown froze much of the federal statistical system. Policymakers framed the cut as insurance against downside labor market risks even as Chair Powell used his press conference to push back against the idea that another cut at the December 9–10 meeting is guaranteed, emphasizing sharply divided views on the Committee, evidence that bank reserves are slipping from “abundant” to merely “ample,” and the need to pause without fresh official readings on employment or inflation. The statement’s sober description of growth as “moderate,” despite private-sector estimates nearer 4 percent, underscored how the absence of October CPI, payroll data, and other inputs is forcing the Fed to rely on partial and private data, much of which points to softening hiring but continued consumer spending. Markets initially assumed a follow-up cut in December, but Powell’s more hawkish tone, noting lingering inflation frustrations, mixed labor signals, and uncertainty about whether recent growth is real or overstated, pulled those odds down sharply. Investors are now bracing for a data-blind December decision in which alternative labor indicators may carry more weight than any official release.

This dynamic is sharpened by the fact that September’s nonfarm payrolls report is now the only official labor data point available to the Fed before the December meeting, complicating the case for another rate cut at a time when the shutdown has halted JOLTS (next release: August 2025 data, on December 9th), ADP (October 2025 data, on December 3rd), and every other major labor indicator for October and November. Payrolls rose by 119,000, more than double the consensus, with gains concentrated in construction, health care, and leisure and hospitality. The prior two months were revised down, and August job creation turned negative; the unemployment rate rose to 4.44 percent, primarily because labor force participation jumped. Wage growth slowed to 0.2 percent, and sector-level data showed uneven hiring, with services expanding, transportation and warehousing shrinking, and unemployment inflows continuing to exceed outflows for a third month. In total, the report suggests a gradual softening of labor market conditions beneath the surface.

With October and November employment reports cancelled and the next release tentatively planned for December 16, policymakers are left to make a December decision based on a single, stale release, private proxies, and fragmentary signals.

Meanwhile, October’s Institute for Supply Management surveys offered a split view of the underlying economy, reinforcing the sense that growth is uneven but still resilient in places. Services activity accelerated meaningfully, with the headline index rising on the back of strong new orders and renewed business activity, partially fueled by data center demand and a burst of mergers and acquisitions in tech and telecom. By contrast, manufacturing slipped further into contraction as production reversed sharply following September’s jump. Yet beneath the manufacturing headline, several forward-looking indicators improved, including new orders, backlogs, and employment, alongside easing price pressures as producers reported input costs rising at a slower pace. Services told the opposite inflation story, with the prices-paid index surging to its highest reading since 2022 and respondents explicitly citing tariffs as a driver of higher contract costs even as service-sector employment contracted more slowly. Taken together, the ISM data depict an economy still expanding on the services side while manufacturing remains weak but stabilizing, with demand firming across both sectors even as inflation dynamics sharply diverge.

Those mixed signals contrast with a sharp deterioration in household sentiment. Consumer sentiment fell in November 2025 to one of the lowest readings ever recorded as Americans reported the weakest views of their personal finances since 2009 and the worst buying conditions for big-ticket goods on record. Despite inflation expectations easing for both the one-year (4.5 percent) and long-term (3.4 percent) horizons, households remain deeply strained by high prices, eroding incomes, and growing job insecurity, with the probability of job loss rising to its highest level since mid-2020 and continuing unemployment claims climbing to a four-year high. The survey also highlighted a widening split between wealthier households (whose stock market gains and assets cushion them) and non-stockholders, whose financial positions are deteriorating even as headline economic data appear steady. Of particular note, American consumer views darkened even after the federal shutdown ended, suggesting that sentiment is being driven less by political theater and more by lived economic pressure.

Views from the other side of the cash register were not materially brighter, reinforcing the broader theme of a cooling but still functioning economy. Small business sentiment slipped to a six-month low in October, with the National Federation of Independent Business optimism index falling as firms reported weaker earnings, softer sales, and rising input costs. Half of the index’s components declined, including a notable drop in owners’ expectations for future economic conditions, now at their lowest since April, while the share reporting stronger recent earnings posted its steepest decline since the Covid pandemic. Hiring challenges eased, with only 32 percent of respondents unable to fill openings and fewer firms citing a lack of qualified applicants. Yet near-term hiring plans ticked down for the first time since May, reflecting caution rather than confidence. Price pressures moderated, planned price hikes slipped to a net 30 percent, and, somewhat paradoxically, the uncertainty index fell to its lowest level of the year (though it remained high by historical standards). The resulting picture is one of firms that are uneasy, but not panicking, about souring trends in demand, margins, and the broader economic trajectory.

In retail consumption the recent narrative is similar: signs of slowing momentum but not collapse. September brought a modest downshift from August’s brisk pace as households eased off goods purchases after an unusually strong back-to-school season, even as discretionary spending at restaurants and bars remained solid. Headline retail sales rose just 0.2 percent, with most of the softness concentrated in nonstore retail, autos, and the control group categories (clothing, sporting goods, hobby items, and online purchases), all of which gave back part of the summer’s surge. Food services and drinking places, by contrast, continued to post healthy gains, suggesting the pullback in goods was more a matter of normalization than retrenchment, and that spending momentum remained intact through the end of the third quarter. Despite the mixed monthly profile, strength earlier in the summer left real consumer spending on track for a robust 3.2 percent annualized gain in the third quarter, underscoring that households, however stretched and anxious, were still spending steadily heading into the shutdown.

All of this must be interpreted through the lens of the unprecedented disruption of the federal statistical system. The next industrial production and capacity utilization readings are likely to be released on December 3, but beyond that, the timing of most other releases remains uncertain, and agency leaders must now decide which October data can be reconstructed and on what schedule. The CPI presents the thorniest case: with two-thirds of its 100,000 monthly price quotes gathered through in-person store visits, none of which occurred in October 2025, the probability is high that no October CPI will ever be published, and the November CPI may also be delayed beyond the December FOMC meeting. Missing shelter data will complicate rent calculations well into the first quarter of 2026, while surveys fundamental to unemployment measurement simply cannot be recreated weeks after the fact. Although payroll employment and GDP are less vulnerable, because both can be backfilled from employer and business records, the broader effect is essentially the same: for the next several months, official US data will be patchy, delayed, and in some cases permanently incomplete.

Even once agencies resume full operations, the statistical damage will ripple outward, affecting not only headline indicators but also numerous dependent series and long-running supplements. The unemployment rate may post its first missing observation in more than 75 years, since labor market transition measures cannot be estimated. The education supplement to the October household survey will disappear entirely. More immediately, the delays in the November employment report and CPI mean the Fed’s December rate decision will be made with little to no official visibility on inflation or labor conditions for two full months — an extraordinarily rare and consequential impairment. While most of the record will eventually be repaired, the next several weeks will hinge on crucial judgments by BLS, BEA, Census Bureau, and Federal Reserve officials about accuracy, feasibility, and timing. Those choices will largely determine how quickly the economy’s statistical foundation regains its footing.

Back to the macroeconomic outlook: soft data show the economy’s split personality: services expanding while manufacturing contracts but stabilizes; consumer sentiment collapsing to near-record lows amid deteriorating personal finances and job anxiety even as consumption remains strong; and small-business optimism slipping on weaker earnings and softer sales. The loss of October and November’s core indicators, plus gaps going forward, means that policymakers have no reliable read on inflation momentum or labor market cooling, forcing them to evaluate the economy through anecdotes and patches of information rather than a full picture. In this environment, even modest surprises, whether in private-sector labor trackers, ISM reports, or high-frequency spending data, carry outsized weight, shaping market expectations and policy debates in ways that would never occur under normal statistical conditions. For now, the lone clear signal amid the noise is the price of gold, and its message is unmistakably cautious.

CAPITAL MARKET PERFORMANCE

Can regulation work when a market changes faster than a case can be litigated?

The Justice Department filed its antitrust case against Google in 2020. By the time Judge Amit Mehta issued his ruling in 2024, AI large-language models had already begun to change how people search for information online. As Judge Mehta put it, “the emergence of GenAI changed the course of this case.”  

Indeed, between the 2020 filing and his 2025 remedy decision, the competitive context shifted fundamentally. Google’s antitrust case was argued in one market, and the remedy will be implemented in another, one in which AI is challenging traditional search engines. 

Google’s dominance is already slipping, with the company rapidly losing market share to ChatGPT and other AI chatbots. 

In other words, the market moved faster than the litigation.  That is not an anomaly; it is becoming the norm.

In fast-moving technology sectors, markets often evolve while regulatory and legal processes are still underway, increasing the risk of ill-timed remedies.

Consider ride-hailing. When New York City debated how to regulate Uber and Lyft in 2015, regulators worked within frameworks built for a capped number of taxi medallions and tightly controlled entry.  Yet between 2015 and 2018, the number of for-hire vehicles in the city surged from 63,000 to over 100,000. By the time comprehensive rules emerged, they governed a market fundamentally different from the one initially under review.  

Commercial drones show a similar pattern. Zipline began large-scale medical drone delivery operations in Rwanda in 2016 and spent years seeking comparable authorization in the United States. The company received emergency FAA waivers in 2020 and Part 135 air carrier certification in June 2022. From 2016 to 2022, Zipline completed hundreds of thousands of international deliveries with the same drones it sought to operate in the US. 

The merger of Hewlett-Packard Enterprise (HPE) and Juniper Networks, approved by the Department of Justice subject to certain structural remedies, illustrates similar dynamics. Cisco remains the largest player, yet even its market share no longer comes close to the 50 percent it commanded nearly a decade ago. That some state attorneys general are now trying to convince a judge to reverse the DOJ’s decision and block the merger of HPE and Juniper, Cisco’s smaller competitors, is preposterous.

These examples point to the same issue: when regulators analyze markets more slowly than the markets themselves change, they risk setting rules for conditions that no longer exist, and enforcement can arrive when the competitive landscape has already shifted.  

Gail Slater, the Assistant Attorney General for Antitrust, acknowledged as much in a September speech. 

“Premature regulation can be particularly harmful in incipient industries at the early stages of development because it imposes broad, ex ante rules,” she said, “and these rules have the effect of limiting the direction of innovation across the entire industry.”

She’s right. The next generation of disruptive technologies is emerging: AI assistants, autonomous systems, and eventually quantum computing. The question is whether oversight evolves with these markets or continues to govern conditions that no longer exist.  

There is a practical way to address this issue: allow regulation to update over time instead of assuming that market conditions will remain stable. Other domains already do this. Financial regulators periodically update capital and trading rules. Regulation works best when it is continuous, not static, and when it adapts as information changes.

This is not an argument against regulation. It is an argument for regulation that can adjust as firms race to commercialize new technologies. Periodic reviews would allow rules to tighten when risks emerge and to adjust when market conditions change. Regulators can either build systems that adapt alongside these markets or spend the next decade enforcing remedies designed for the last one.

A regulatory framework that adapts isn’t more lenient — it’s more effective. And effective regulation is what keeps markets open to new competitors.

Introduction

The gold standard was a monetary system that defined a unit of a nation’s currency as a fixed weight of gold and made the two mutually exchangeable. For much of modern history, several versions of this pairing served as the foundation of global trade and finance. Under the gold standard, governments promised to redeem paper money for a defined amount of gold on demand, which made the value of currencies stable and predictable. That stability fueled unprecedented global integration, linking the prosperity of many nations through the shared economic logic of gold.

The gold standard was largely abandoned during the twentieth century, but debate over its virtues and flaws endures. Supporters see it as a bulwark against inflation and government overspending; critics call it too rigid for modern economies. Understanding what the gold standard was, how it worked, and why it fell out of favor helps to clarify not only a pivotal era in economic history but also recurring arguments about money, fiscal discipline, and currency stability. 

What Is the Gold Standard?

Under an active gold standard, a country defines its currency as equivalent to a specific weight of gold. Governments or central banks stand ready to buy or sell gold at that fixed price, ensuring that paper money is “as good as gold.” When the United States adopted the classical gold standard, one dollar equaled about one-twentieth of an ounce of gold. Anyone could, in theory, exchange paper currency for that amount of metal.

This convertibility linked every participating currency to gold, and to one another, creating a system of fixed exchange rates. A dollar, a pound, or a franc all represented certain weights of gold, making international trade and investment far more predictable. Because the supply of gold changed only slowly, the total amount of money governments could print was naturally limited. That constraint is what advocates of the gold standard consider its greatest strength: it restricted governments from printing money without real value behind it. 
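The fixed exchange rates followed by simple arithmetic: the cross rate between two currencies is just the ratio of their gold contents. A sketch using approximate classical-era parities (the figures below are approximations from the historical record, not exact legal definitions):

```python
# Approximate classical-era gold content per currency unit, in troy ounces.
GOLD_PER_DOLLAR = 1 / 20.67   # US mint price: about $20.67 per ounce of gold
GOLD_PER_POUND = 0.2354       # approximate fine-gold content of one pound sterling

# The exchange rate is simply the ratio of gold weights:
dollars_per_pound = GOLD_PER_POUND / GOLD_PER_DOLLAR
print(round(dollars_per_pound, 2))   # ~4.87, close to the historic $4.8665 parity
```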

Over time, the gold standard evolved in several forms. The gold specie standard, dominant in the nineteenth century, involved coins made of gold circulating alongside paper notes that were fully redeemable for gold. After World War I, many nations moved to a gold bullion standard, in which paper money could be exchanged for large bars of gold held by central banks, but gold coins disappeared from daily use. Later, the gold exchange standard — most notably the Bretton Woods system after 1944 — linked national currencies indirectly to gold through reserve currencies such as the US dollar. Each version reflected an attempt to preserve gold’s stability while adapting to changing political and economic conditions. 

How the Gold Standard Worked

The gold standard operated through a simple but powerful mechanism: every unit of currency was a claim on a fixed quantity of gold held by the issuing authority. Central banks or treasuries maintained gold reserves to back that commitment. When a country ran a trade surplus, gold flowed in; when it ran a deficit, gold flowed out. These movements automatically regulated domestic money supplies and prices. 

This dynamic was captured in the price-specie flow mechanism, first described by the nineteenth-century economist David Hume. If a nation imported more than it exported, gold left the country to pay for those goods. The resulting contraction of the money supply reduced prices and wages, making exports cheaper and imports dearer until balance was restored. Conversely, gold inflows expanded the money supply and lifted prices, damping exports and stimulating imports. In theory, this automatic adjustment kept the global economy in equilibrium without the need for government manipulation. 
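Hume's adjustment process can be sketched as a toy two-country simulation, in which price levels track gold stocks and gold flows toward the cheaper country until the price gap closes. All parameters are illustrative assumptions, not historical calibrations:

```python
# Toy price-specie flow: price levels are proportional to national gold
# stocks, and trade moves gold from the high-price country to the
# low-price one each period. All parameters are hypothetical.
def simulate(gold_a=60.0, gold_b=40.0, flow_rate=0.05, steps=200):
    for _ in range(steps):
        price_gap = gold_a - gold_b    # price levels proportional to gold stocks
        flow = flow_rate * price_gap   # the dearer country runs a deficit, losing gold
        gold_a -= flow
        gold_b += flow
    return gold_a, gold_b

ga, gb = simulate()
print(round(ga, 2), round(gb, 2))   # both converge to 50.0: price levels equalize
```

The fixed point, with no policymaker steering anything, is the equilibrium the paragraph above describes: gold stops moving only when relative prices no longer favor either country's exports.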

The gold standard’s self-correcting nature was both a discipline and a constraint. Governments could not simply expand credit or pursue inflationary spending without risking a drain of gold reserves. At the same time, this rigidity left little room for active responses to recession, war, or financial panic. 

By the late nineteenth century, the major industrial nations — Britain, Germany, France, Japan, and the United States — had adopted this system. Their currencies were convertible into gold at fixed rates, creating what historians call the classical gold standard (1870s–1914). The resulting predictability underpinned an era of extraordinary growth in trade, capital flows, and industrialization. 

Advantages of the Gold Standard

A number of benefits distinguished the gold standard from later fiat-money systems.

Price Stability 

Because gold production increases only slowly, the money supply under a gold standard expands at a gradual and generally steady pace. This natural limitation kept long-term inflation low. Over decades, average prices under the classical gold standard remained remarkably stable, especially when compared with the persistent inflation of the fiat-currency era. 

Predictability and Confidence 

The promise that paper money could be converted into gold made currencies credible. Businesses could plan investments and trade agreements without fearing sudden currency devaluations. Fixed exchange rates reduced uncertainty in international commerce and encouraged the flow of capital across borders. 

Fiscal and Monetary Discipline 

Linking money creation to gold restrained governments from overspending or financing deficits by printing currency. Monetary policy was effectively automatic: a nation could not expand its money supply unless it acquired more gold. For this reason, advocates view the gold standard as a guardrail against political manipulation of money and a deterrent to reckless borrowing. 

Promotion of International Trade 

A universal gold anchor simplified exchange and reduced transaction costs. With stable exchange rates, traders and investors faced fewer risks, and international settlements could be made in a currency recognized everywhere. 

Protection Against Manipulation 

Unlike modern systems, in which central banks can devalue currencies or engage in “quantitative easing,” the gold standard made competitive devaluations and “currency wars” far more difficult. Its rules constrained the temptation to seek economic advantage through monetary distortion. 

Encouragement of Saving and Investment 

Stable prices preserved the purchasing power of money, fostering an environment in which long-term planning, capital accumulation, and thrift were rewarded. Investors could rely on real returns rather than on nominal gains eroded by inflation.

To the gold standard’s defenders, these traits explain why the classical gold standard coincided with rapid industrialization, robust trade expansion, and rising living standards across much of the world. 

Alleged Disadvantages of the Gold Standard: A Balanced Examination

Critics of the gold standard see those same features — discipline and rigidity — as liabilities. But many alleged flaws reflect implementation failures or modern misinterpretations, rather than inherent defects. 

Inflexibility and Limited Policy Response 

Opponents argue that tying money to gold prevents governments and central banks from acting decisively during crises. Under the gold standard, expanding the money supply or lowering interest rates risked losing gold reserves. Supporters counter that this discipline prevented the political misuse of money and forced governments to confront fiscal realities instead of masking them with currency inflation. 

Deflationary Tendencies 

Because gold supplies grow slowly, economies under the standard could face mild deflation during periods of rapid productivity growth. Critics warn that falling prices increase debt burdens and discourage investment. Much of this “deflation,” however, was of the benign kind — reflecting efficiency gains rather than collapsing demand — and often coincided with strong economic growth. 

Vulnerability to Gold Supply Shocks 

The discovery of new gold deposits could modestly increase money supplies, while scarcity could constrain growth. Still, such changes were gradual and predictable (about one percent per year) compared with the abrupt inflationary shocks that fiat regimes can unleash through policy error or political expediency. 

Constraints on Growth 

Some economists claim that a gold-based system limits credit creation. Historically, however, banking systems developed fractional-reserve practices that allowed credit to expand well beyond physical gold holdings, so long as public confidence remained intact. The industrial revolutions of Britain, Germany, and the United States unfolded entirely under gold-linked regimes. 

Difficult International Coordination 

The interwar period demonstrated how uneven adherence to gold rules could destabilize the system. Yet the problem lay in inconsistent policies — overvalued currencies, protectionist trade barriers, and poor coordination — rather than in gold itself. 

Exposure to Crises 

Some have claimed that the gold standard worsened bank runs by restricting emergency liquidity. But under the classical system, private clearinghouses often filled that role effectively by issuing temporary certificates and policing member banks. Such crises also occur under fiat systems; their frequency since 1971 suggests that discretion is no panacea. 

Historical Instability 

The Great Depression is often cited as proof that the gold standard was fatally flawed. In fact, many economists — including Barry Eichengreen and Milton Friedman — acknowledge that poor policy choices, such as Britain’s overvalued return to pre-war parity and the Federal Reserve’s inaction in 1931–33, deepened the downturn. Nations that left gold earlier — like Britain in 1931 — recovered faster than those that clung rigidly to it. The failure was less about gold itself than about governments’ unwillingness to adapt intelligently. 

In short, while the gold standard imposed constraints, many of its supposed defects stemmed from mismanagement or misunderstanding. Every monetary system involves trade-offs; gold’s discipline may appear harsh, but it also forestalled the chronic inflation and debt accumulation that define modern economies. 

Rise of the Gold Standard

Gold has served as money for millennia because of its scarcity, divisibility, and durability. Ancient civilizations used gold coins as units of account and stores of value, but the formal linkage between gold and national currencies developed gradually with the rise of modern banking.

In early modern Europe, goldsmiths issued paper receipts for stored metal, which began circulating as money. The realization that not all depositors redeemed their gold simultaneously led to fractional-reserve banking — a key innovation that allowed credit expansion beyond physical reserves. 

Britain was the first major nation to codify a gold standard, officially adopting it in 1821 after years of wartime inflation. Its global influence ensured that others followed: Germany in 1871, the United States in 1879, France and Japan soon thereafter. By the 1870s, the classical gold standard had become the backbone of international finance. Currencies were freely convertible into gold, exchange rates were fixed, and trade imbalances were corrected through automatic gold flows. 

This system coincided with rapid globalization. Capital moved freely, shipping and communication costs fell, and international investment flourished. The gold standard’s credibility helped unify the world economy in a way unmatched until late in the twentieth century. 

Collapse of the Gold Standard

The end of the gold standard came not from economic theory, but from the pressures of war, depression, and political expedience. 

World War I (1914) 

The classical gold standard’s first collapse came when belligerent nations suspended convertibility to finance massive military spending. Paper money flooded economies, and inflation followed. By the war’s end, the system was in tatters. 

The Interwar Gold Exchange (or “Managed”) Standard (1919–1933) 

After the war, several nations tried to restore the pre-war order. Britain returned to gold in 1925 at its old parity, overvaluing the pound and triggering deflation. Other countries followed with similar missteps, attempting to maintain gold convertibility without the fiscal discipline that had once supported it. The result was a fragile and uncoordinated system that collapsed under the strain of the Great Depression. Britain abandoned gold in 1931; the United States followed in 1933 for domestic use, though it maintained limited international convertibility. 

The Bretton Woods System (1944–1971) 

In the wake of World War II, nations sought a more flexible gold-based order. The Bretton Woods agreement pegged other currencies to the US dollar, while the dollar itself was convertible into gold at $35 per ounce. For two decades, the system promoted stability and growth. Yet in its success were the seeds of its downfall. As global trade expanded, the supply of dollars grew far faster than US gold reserves. Massive spending on the military in Vietnam and on expansive social programs at home fueled deficits and inflation. Confidence in the dollar waned. 

In August 1971, President Richard Nixon suspended the dollar’s convertibility into gold — a moment known as the Nixon Shock. Within two years, the world’s major economies had shifted to floating exchange rates. By 1973, the gold standard, in all its forms, had come to an end. 

Conclusion

The gold standard shaped global economic history for nearly two centuries. It imposed a clear, transparent rule linking money to a tangible asset, thereby restraining inflation and curbing political manipulation. That very discipline, however, proved incompatible with the fiscal demands of modern warfare, welfare states, and activist monetary policy. 

The shift to fiat-money systems brought the flexibility to spend more, but also chronic inflation, recurring financial crises, and rising public debt. Today, few economists advocate a full return to gold, recognizing that the scale and complexity of global finance make it impractical. But the gold standard remains a touchstone in debates over monetary integrity, symbolizing a time when money was anchored in something real — and when the value of currency depended less on trust in the discretion of governments than on the weight of a metal measured in ounces. 

Even if the world never returns to a gold-based system, understanding how it worked — and why it failed — offers enduring lessons. Stability and discipline come at a cost, but so does the freedom to create money without constraint. The long arc of monetary history suggests that neither extreme provides a permanent answer, yet the gold standard endures as a benchmark against which every modern experiment is, in some sense, still judged.

References

Bordo, M. D., & Schwartz, A. J. (Eds.). (1984). A Retrospective on the Classical Gold Standard, 1821–1931. University of Chicago Press. 

Bordo, M. D. (1981). The classical gold standard: Some lessons for today. Federal Reserve Bank of St. Louis Review, 63(5), 2–17. 

Eichengreen, B. (1996). Globalizing Capital: A History of the International Monetary System (2nd ed.). Princeton University Press. 

Eichengreen, B., & Sachs, J. (1985). Exchange rates and economic recovery in the 1930s. Journal of Economic History, 45(4), 925–946. 

Friedman, M., & Schwartz, A. J. (1963). A Monetary History of the United States, 1867–1960. Princeton University Press. 

Luther, W. J., & Earle, P. C. (Eds.). (2021). The Gold Standard: Retrospect and Prospect. American Institute for Economic Research. 

Menger, C. (1892). On the origin of money. Economic Journal, 2(6), 239–255. 

Officer, L. H. (2008). The price of gold and the exchange rate since 1791. Journal of Economic Perspectives, 22(1), 115–134. 

Rockoff, H. (1984). Drastic Measures: A History of Wage and Price Controls in the United States. Cambridge University Press. 

Smith, V. (1990). The Rationale of Central Banking and the Free Banking Alternative (L. H. White, Ed.). Liberty Fund. (Original work published 1936)

On Capitol Hill this week, five Democratic senators accused the Trump administration of “sweetheart deals with Big Tech” that have “driven up power bills for ordinary Americans.” 

Their letter, addressed to the White House, faulted the administration for allowing data-center operators to consume “massive new volumes of electricity without sufficient safeguards for consumers or the climate.”

But the senators’ complaint points to a deeper reality neither party can ignore: artificial intelligence is changing America’s energy economy faster than policy can adapt. Every conversation with ChatGPT, every AI-generated image, every search query now runs through vast new physical infrastructure — data centers — that consume more electricity than some nations. 

The world’s appetite for digital intelligence is colliding with its appetite for cheap, reliable power. 

A New Industrial Landscape 

The anonymous-looking gray boxes — bigger than football fields — rising across Virginia, Texas, and the Arizona desert look like nothing special from the highway. Inside, however, they house the machinery of the new economy: tens of thousands of high-end processors performing trillions of calculations per second. These are the “intelligence factories,” where neural networks are trained, deployed, and refined — and where America’s energy system is pushed to its limits and beyond. 

“People talk about the cloud as if it were ethereal,” energy analyst Jason Bordoff said recently. “But it’s as physical as a steel mill — and it runs on megawatts.” 

According to the Pew Research Center, US data centers consumed about 183 terawatt-hours (TWh) of electricity in 2024 — some 4 percent of total US power use, and about the same as Pakistan. By 2030, that figure could exceed 426 TWh, more than double today’s level. The International Energy Agency (IEA) warns that, worldwide, data-center electricity demand will double again by 2026, growing four times faster than total global power demand. 
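The cited figures imply a simple back-of-envelope check. The 183 TWh, 4 percent, and 426 TWh numbers come from the paragraph above; the derived totals below are arithmetic, not independent data.

```python
# Back-of-envelope check on the data-center consumption figures
# cited in the text (183 TWh = ~4% of US use in 2024; 426 TWh by 2030).

data_center_2024_twh = 183
share_of_us_total = 0.04
projected_2030_twh = 426

# If 183 TWh is 4% of the total, total US consumption is implied:
implied_us_total = data_center_2024_twh / share_of_us_total   # 4,575 TWh
# "More than double today's level":
growth_multiple = projected_2030_twh / data_center_2024_twh   # ~2.33x

print(f"Implied total US consumption: {implied_us_total:,.0f} TWh")
print(f"2030 projection vs 2024: {growth_multiple:.2f}x")
```

The implied US total of roughly 4,600 TWh is in the right range for annual US electricity consumption, and 426/183 ≈ 2.3 confirms the "more than double" characterization.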

The driver is artificial intelligence. Training and running large language models (LLMs) such as those behind ChatGPT requires enormous computing clusters powered by specialized chips — notably Nvidia’s graphics processing units (GPUs). Each new generation of AI systems multiplies power requirements. OpenAI’s GPT-4 reportedly demanded tens of millions of dollars’ worth of electricity just to train. Multiply that by hundreds of companies now racing to build their own AI models, and the implications for the grid are staggering. 

Where the Power Is Going 

The epicenter of this build-out, in America and worldwide, remains (for now) Loudoun County, Virginia — nicknamed “Data Center Alley” — where nearly 30 percent of the county’s electricity now flows to data facilities. Virginia’s utilities estimate that data centers consume more than a quarter of the state’s total generation.

Elsewhere in America, the story is similar. Microsoft’s burgeoning data center complex near Des Moines has forced MidAmerican Energy to accelerate new natural-gas generation. Arizona Public Service now plans to build new substations near Phoenix to serve a cluster of AI facilities; Texas grid operator ERCOT says data centers will add 3 gigawatts of demand by 2027. 

And the trend isn’t limited to electricity. Most facilities require water for cooling. A single “hyperscale” campus can use billions of gallons per year, prompting local backlash in drought-prone regions.

The Political Blame Game 

Soaring demand has begun to translate into electric-rate filings. US utilities asked for $29 billion in rate increases in the first half of 2025, nearly double the total for the same period last year. Executives cite “data-center growth and grid reinforcement” as drivers. 

And so, we get the letter from Senate Democrats — among them Elizabeth Warren and Sheldon Whitehouse — urging the Department of Energy to impose “efficiency standards” and “consumer protections” before authorizing new power contracts for AI operators. “We cannot allow Silicon Valley’s hunger for compute to be fed by higher bills in the heartland,” they wrote. 

The Trump administration shot back. Press Secretary Karoline Leavitt said, “The president will not let bureaucrats throttle America’s leadership in AI or its supply of affordable energy. If the choice is between progress and paralysis, he chooses progress.” 

That framing, “progress versus paralysis,” captures the larger divide. The administration has prioritized energy abundance, reopening leasing on federal lands, greenlighting LNG export terminals, rolling back environmental restrictions of all kinds, and signaling renewed support for coal and nuclear power. Democrats, fixated on climate commitments, have continued to oppose expanded drilling in Alaska’s Arctic and new offshore projects, while pressing for data centers to run on renewables. 

Powering the AI Boom 

Without continuous electricity, the AI boom falters. Nvidia, Microsoft, and OpenAI are already pushing the limits of available capacity. In April, Microsoft confirmed it will buy power from the planned restart of the Three Mile Island Unit 1 reactor — mothballed since 2019 — to feed its growing data-center fleet in Pennsylvania. “We’re essentially connecting a small city’s worth of demand to the grid,” said an energy executive involved in the project. “Data centers are an order of magnitude larger than anything we’ve built for before.” 

That “small city” reference is not an exaggeration. A single hyperscale facility can draw 100 megawatts — roughly the load of 80,000 households. Dozens of such projects are under construction. 
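The 100 MW / 80,000-household comparison checks out under a standard assumption. The per-household figure below assumes an average US household consumption of about 10,700 kWh per year (an EIA-style average, used here purely for illustration).

```python
# Rough check of the "100 MW ≈ 80,000 households" comparison.
# Assumes ~10,700 kWh/year average US household consumption.

facility_mw = 100
household_kwh_per_year = 10_700
hours_per_year = 8_760

# Average continuous draw per household, in kW:
avg_household_kw = household_kwh_per_year / hours_per_year   # ~1.22 kW
# Households whose average load equals one 100 MW facility:
households_served = facility_mw * 1_000 / avg_household_kw   # ~82,000

print(f"Average household draw: {avg_household_kw:.2f} kW")
print(f"Households per 100 MW: {households_served:,.0f}")
```

At roughly 1.2 kW of average draw per household, a 100 MW facility indeed matches the load of about 80,000 homes.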

And while the industry’s largest players are also buying wind and solar power contracts, they admit that renewables alone cannot meet the 24-hour load. “When the model is training, you can’t tell it to pause because the sun set,” one data-center engineer quipped. 

The Economics of Constraint 

From an economic perspective, what matters is not only rising demand but constrained supply. Regulations restricting oil, gas, and pipeline development keep marginal electricity generation expensive. Permitting delays for transmission lines slow the build-out of new capacity. At the same time, federal subsidies distort investment toward intermittent sources that require backup generation — often natural gas — to stabilize the grid. 

A perfect storm of policy contradictions may be brewing: a government that wants both a carbon-neutral grid and dominance in energy-hungry AI. 

“The irony is that the very politicians demanding AI leadership are the ones making it harder to power,” said economist Stephen Moore. “You can’t have artificial intelligence without real energy.” 

In a free market, higher demand would spur rapid expansion of supply. Investors would drill, build, and innovate to capture new profit opportunities. Instead, production and permitting are politically constrained, so prices must rise until demand is choked off. That is the dynamic now visible in electricity bills — and in the Senate’s sudden search for someone to blame. 

The Global Race 

Complicating it all is the geopolitical dimension. China, the European Union, and the Gulf states are racing to build their own AI infrastructure. Beijing’s Ministry of Industry announced plans for 50 new “intelligent computing centers” by 2027, powered largely by coal. In the Middle East, sovereign wealth funds are backing data-center projects co-located with gas fields to guarantee cheap electricity. 

If the US restricts its own energy production, it risks ceding the field. “Energy is now the limiting reagent for AI,” venture capitalist Marc Andreessen wrote this summer. “Whichever country solves cheap, abundant power wins the century.”

That insight revives old debates about industrial policy. Should Washington subsidize domestic chip foundries and their power plants, or should it clear the regulatory thicket that deters private capital from building both? Innovation thrives on liberty, not micromanagement. 

The New Factories 

Are data centers so different from factories of the industrial age? They convert raw inputs like electricity, silicon, cooling water, and capital into valuable outputs: trained models and real-time AI services. But unlike the factories of the past, they employ few workers directly. A billion-dollar hyperscale facility may have fewer than 200 staff. That does not sit well with the communities in which the vast data centers are located. The wealth is created upstream and downstream: in chip design, software, and the cascade of productivity gains AI enables. 

Still, the indirect productivity is vast. AI-driven logistics shave fuel costs, AI-assisted medicine accelerates diagnosis, and AI-powered coding tools raise output per worker. But all of it depends on those humming, appallingly noisy, heat-filled halls of servers. As OpenAI’s Sam Altman remarked last year, “A lot of the world gets covered in data centers over time.” 

If true, America’s next great industrial geography will not be steel towns or tech corridors, but power corridors: regions where electricity is plentiful, cheap, and politically welcome. 

Already, states like Texas and Georgia are advertising low-cost energy as a lure for AI investment. 

Markets Versus Mandates 

From a free-market perspective, the lesson is straightforward. Economic growth follows energy freedom. When government treats energy as a controlled substance — rationed through regulation, taxed for vice, or distorted by subsidies — innovation slows. When markets are allowed to meet demand naturally, abundance results. 

In the early industrial age, the United States became the world’s workshop because it embraced abundance: of coal, oil, and later electricity. Every new machine and factory depended on those resources, and entrepreneurs supplied them without central direction. Today’s equivalent is the AI data center. Its prosperity depends on letting energy producers compete, invest, and innovate without political interference. 

Politics Ahead 

Over the next year, expect the power issue to dominate AI politics. Democrats will press for efficiency mandates and carbon targets; Republicans will frame energy freedom as essential to national strength. Federal officials are already discussing a kind of “clean AI” certification system tied to renewable sourcing — critics say that could amount to a de facto quota on computing power. 

Meanwhile, utilities are rethinking grid design for a world where data centers behave like factories that never sleep. The market is responding: small, modular nuclear reactors, advanced gas turbines, and geothermal projects are attracting venture funding as potential baseload sources for AI campuses. 

For policymakers, the challenge is to resist the urge to micromanage. As AIER’s scholarship often finds, spontaneous order, not centralized control, produces both efficiency and resilience. Allowing prices to signal scarcity and opportunity will attract the investment necessary to balance America’s energy equation.

The Freedom to Compute 

In the end, the debate over data centers and electricity bills is really about the freedom to compute. The same economic laws that governed the Industrial Revolution still apply: productivity rises when entrepreneurs can transform energy into work — whether mechanical or digital. 

Artificial intelligence may be virtual, but its foundations are unmistakably physical. To sustain the AI boom without bankrupting ratepayers, the United States must choose policies that unleash energy production rather than constrict it. 

The “cloud” will always have a power bill. The question is whether that bill becomes a burden of regulation or a dividend of freedom.