In most sectors of the American economy, we celebrate the moment when insiders break away to build something better. Engineers start their own firms. Chefs open their own restaurants. Innovators leave incumbents and test their mettle in the market. Only in US healthcare do we treat that entrepreneurial impulse as a threat worthy of prohibition. 

Section 6001 of the 2010 Affordable Care Act froze the growth of physician-owned hospitals (POHs) by barring new POHs from getting paid by Medicare and Medicaid, and by restricting the expansion of existing POHs. It did not ban POHs outright, but it had roughly the effect of a ban; after years of growth, the number of POHs in the US abruptly plateaued at around 230-250, and practically no new POHs have opened since 2010.   

Supporters of the ban on POHs say it is needed to prevent conflicts of interest, cream-skimming, and overuse.

One argument is that without such a ban, POHs would cherry-pick the healthier and more profitable patients, leaving other hospitals with sicker and more costly patients. There is some evidence that physician-owned specialty hospitals tend to attract healthier patients and tend to focus on lucrative service lines. But why does that justify a ban on POHs? Specialization is one way that entrepreneurs create value. Cardiac centers, orthopedic hospitals, and focused surgical facilities exist precisely because repetition and standardization can improve outcomes and reduce costs. Specialty hospitals can even exert a positive influence on surrounding general hospitals to improve quality and reduce costs for everyone. 

Another argument is that uncontrolled self-referral would result in the overutilization of services and a rise in healthcare spending. Overutilization is a major contributor to wasteful spending in healthcare, which has been estimated to account for approximately 25 percent of total healthcare spending, or between $760 billion and $935 billion nationwide. The reasoning is that if physicians are able to refer patients internally for services, procedures, and tests, then physicians will cease to exercise careful cost control. This, however, is more of an indictment of the current price and payment systems than an indictment of physician ownership. By setting prices via committee instead of relying on genuine market prices, policymakers have created in Medicare and Medicaid a gameable system that rewards volume. The response to poorly designed reimbursement mechanisms should be to fix the mechanisms, not blame ownership models.

The POH issue illustrates how, in a mixed economy, controls beget controls. To keep the program politically popular, Medicare’s pre-payment review and protections against waste are generally less stringent than those found in the private insurance world. Given that context, preventing physicians from referring patients to the entities they own can seem like a sensible check against waste and abuse.

In a more market-driven system, however, the problem would evaporate without the need for a ban on POHs. Individuals (or their plan sponsors) would control more of their healthcare dollars; prices would be transparent and site-neutral; and hospitals and physician-led facilities would compete on bundled prices, warranties, and measured outcomes. The alleged perils of physician ownership would be addressed through competition and reputation. Insurers and self-funded employers would exercise discipline on overuse through selective contracting, reference-based pricing, and value-based payments, and patients would reward cost-effective specialists. 

In a free-market system, a physician’s ownership stake in a hospital is no more a threat to the taxpayer than a chef’s ownership stake in a restaurant is to an individual looking for a good place to dine. 

Often in US health policy, we are in the position of needing to make multiple fixes simultaneously in order to take a real step forward. Philosophically, the ban is indefensible. Physicians should be as free as any other professionals to become entrepreneurs and form, finance, and run institutions. Entrepreneurship should not require special permission. In nearly every other industry, the very engine of specialization, quality improvement, and cost discipline is entrepreneurship. Entrepreneurial profit is a reward for foresight, innovation, and service. But prior policy decisions give the ban a veneer of justification.

If we let the POH ban stand, then incumbency triumphs over innovation, with large hospital systems holding a legislated shield against potential competitors. If we lift the ban but make no accompanying changes, some fleecing of the taxpayer could occur.

We ought to lift the ban on POHs while simultaneously making reforms that let individuals control more of their own healthcare dollars. This would incentivize physicians to compete on value, mitigating concerns about overutilization.

One way to do this is to pair the repeal of the POH ban with payment neutrality and consumer control. This would end the artificial price differences that federal policy has assigned to different sites of care. MedPAC has long recommended site-neutral payment to strip away hospital markups for services that can be safely delivered in lower-cost settings. Efficient entrants will thrive by being better at care, instead of being better at “location arbitrage.”

Another way to do this is to put more real dollars under patient control. Empowering individuals with flexible accounts — yes, even in the Medicare and Medicaid contexts — would guard against overutilization. Evidence shows that when consumers face prices and control the marginal dollar, spending becomes more disciplined. This could be the proving ground for broader reforms involving the pairing of portable health savings accounts with catastrophic coverage in the Medicare and Medicaid populations.

Maintaining the ban on POHs is wrong. It denies clinicians the freedom to build their own institutions, and it denies patients the freedom to choose them. However, simply repealing the ban without making any other changes could open the door to overutilization at the expense of taxpayers, which is why we should pair the lifting of the ban with other changes. We should protect voluntary exchange among free individuals, while taking steps to align incentives so that patients, not political pull, direct the flow of dollars.

The longest federal shutdown in US history has created deep gaps in the flow of economic data, preventing calculation of the Business Conditions Monthly indices. Most BCM components depend on federal statistical agencies, including the US Bureau of Labor Statistics, Census Bureau, Bureau of Economic Analysis, and the Federal Reserve, which were unable to collect, process, or publish October 2025 data. As a result, critical indicators such as payroll employment, labor force participation, consumer price index, industrial production, housing starts, retail sales, construction spending, business inventories, factory orders, personal income, and several Conference Board composites remain unavailable or were published without the sub-series needed for BCM methodology. Agencies have already confirmed that several October datasets were never collected and cannot be reconstructed. And while a handful of private and market-based measures (University of Michigan consumer expectations, FINRA margin balances, heavy truck sales, commercial paper yields, and yield-curve spreads) continued updating normally, the BCM cannot be produced unless all 24 components are available for the same month; missing even one Census or BLS series renders the entire month unusable.

Because the October data will not be produced, that month is permanently lost for BCM purposes. The indices can resume only once federal agencies complete their post-shutdown catch-up work and release full, internally consistent datasets for the next available month in which all 24 BCM components exist. Even once resumption begins, calculations based on the first complete month may reflect a gap that renders that initial reading economically suspect. Based on current release schedules, the earliest realistic timeframe for restoring the BCM is early 2026, once a complete set of post-shutdown data is again available. 

This new data void is a graphic illustration of how short-term, error-prone, and erratic US economic policy has become, echoing earlier episodes such as the “transitory” miscalculation of 2021, the ruinous and clumsily handled pandemic responses, and the panicked Fed rate hikes between 2022 and 2023, which resulted in a minor banking crisis. 

Discussion, October – November 2025

September’s inflation data (released October 24th) offered a rare clean signal in an otherwise muddied environment, confirming a broad though modest cooling in both headline and core CPI before the federal shutdown froze statistical agencies. Headline CPI rose 0.31 percent and core 0.23 percent, both softer than expected, with year-over-year core easing to 3.0 percent. Goods inflation softened, helped by declining vehicle prices and deflation among low–tariff-exposure categories. Core services, meanwhile, slowed sharply on a sizable drop in shelter inflation. Firms continued to pass through roughly 26 cents of every dollar of tariff costs, leaving price pressures elevated but stable, and diffusion indices showed slightly narrower breadth, with fewer extreme increases or declines. Combined, these data reinforced market expectations for another rate cut in December.

Producer price data (released November 25th) painted a similar picture of contained underlying pressures, reinforcing the disinflationary tilt suggested by the CPI. September headline PPI firmed to 0.3 percent on an energy spike, but core PPI rose only 0.1 percent, below expectations, and categories feeding into the Fed’s preferred core PCE gauge were mixed. Portfolio-management fees fell sharply, medical services posted uneven readings, and airfares jumped, suggesting pockets of resilient discretionary spending. Overall producer-side inflation remained tame. Prices for steel and aluminum products covered by Section 232 tariffs have risen about 7.6 percent since March yet appear to be leveling off, supporting the observation that tariff-driven pressures are largely one-time rather than accelerating. The challenge ahead is that October CPI and several subsequent releases will be heavily compromised: two-thirds or more of price quotes were never collected during the shutdown, forcing the Bureau of Labor Statistics to rely on imputation well into spring 2026. As a result, September’s moderate inflation reading may be the last clean data point for months, complicating the Fed’s ability to gauge true disinflation progress even as markets continue to anticipate further easing.

Against this backdrop, the Fed entered its October 28–29 meeting with more uncertainty than usual and opted for the path of least resistance: cutting rates by 25 basis points and announcing that quantitative tightening via the balance sheet will be dialed back starting on December 1st, citing tightening liquidity conditions and a lack of reliable data as the shutdown froze much of the federal statistical system. Policymakers framed the cut as insurance against downside labor market risks even as Chair Powell used his press conference to push back against the idea that another cut at the December 9–10 meeting is guaranteed, emphasizing sharply divided views on the Committee, evidence that bank reserves are slipping from “abundant” to merely “ample,” and the need to pause without fresh official readings on employment or inflation. The statement’s sober description of growth as “moderate,” despite private-sector estimates nearer 4 percent, underscored how the absence of October CPI, payroll data, and other inputs is forcing the Fed to rely on partial and private data, much of which points to softening hiring but continued consumer spending. Markets initially assumed a follow-up cut in December, but Powell’s more hawkish tone, noting lingering inflation frustrations, mixed labor signals, and uncertainty about whether recent growth is real or overstated, pulled those odds down sharply. Investors are now bracing for a data-blind December decision in which alternative labor indicators may carry more weight than any official release.

This dynamic is sharpened by the fact that September’s nonfarm payrolls report is now the only official labor data point available to the Fed before the December meeting, complicating the case for another rate cut at a time when the shutdown has halted JOLTS (next release: August 2025, on December 9th), ADP (October 2025, on December 3rd), and every other major labor indicator for October and November. Payrolls rose by 119,000, more than double the consensus, with gains concentrated in construction, health care, and leisure and hospitality. The prior two months were revised down and August job creation turned negative; the unemployment rate rose to 4.44 percent, primarily because labor force participation jumped. Wage growth slowed to 0.2 percent, and sector-level data showed uneven hiring, with services expanding, transportation and warehousing shrinking, and unemployment inflows continuing to exceed outflows for a third month. In total, the report suggests a gradual softening of labor market conditions beneath the surface. 

With October and November employment reports cancelled and the next release tentatively planned for December 16, policymakers are left to make a December decision based on a single, stale release, private proxies, and fragmentary signals.

Meanwhile, October’s Institute for Supply Management surveys offered a split view of the underlying economy, reinforcing the sense that growth is uneven but still resilient in places. Services activity accelerated meaningfully, with the headline index rising on the back of strong new orders and renewed business activity — these were partially fueled by data center demand and a burst of mergers and acquisitions in tech and telecom. Manufacturing, by contrast, slipped further into contraction as production reversed sharply following September’s jump. Yet beneath the manufacturing headline, several forward-looking indicators improved, including new orders, backlogs, and employment, all alongside easing price pressures as producers reported input costs rising at a slower pace. Services told the opposite inflation story, with the prices-paid index surging to its highest reading since 2022 and respondents explicitly citing tariffs as a driver of higher contract costs even as service-sector employment contracted more slowly. Taken together, the ISM data depict an economy still expanding on the services side while manufacturing remains weak but stabilizing, with demand firming across both sectors even as inflation dynamics sharply diverge.

Those mixed signals contrast with a sharp deterioration in household sentiment. Consumer sentiment fell in November 2025 to one of the lowest readings ever recorded as Americans reported the weakest views of their personal finances since 2009 and the worst buying conditions for big-ticket goods on record. Despite inflation expectations easing for both the one-year (4.5 percent) and long-term (3.4 percent) horizons, households remain deeply strained by high prices, eroding incomes, and growing job insecurity, with the probability of job loss rising to its highest level since mid-2020 and continuing unemployment claims climbing to a four-year high. The survey also highlighted a widening split between wealthier households (whose stock market gains and assets cushion them) and non-stockholders, whose financial positions are deteriorating even as headline economic data appear steady. Of particular note, American consumer views darkened even after the federal shutdown ended, suggesting that sentiment is being driven less by political theater and more by lived economic pressure.

Views on the other side of the cash register were not materially brighter, which reinforces the broader theme of a cooling but still functioning economy. Small business sentiment slipped to a six-month low in October, with the National Federation of Independent Business optimism index falling as firms reported weaker earnings, softer sales, and rising input costs. Half of the index’s components declined, including a notable drop in owners’ expectations for future economic conditions – now at their lowest since April – while the share reporting stronger recent earnings posted its steepest decline since the Covid pandemic. Hiring challenges eased, with only 32 percent of respondents unable to fill openings and fewer firms citing a lack of qualified applicants. Yet near-term hiring plans ticked down for the first time since May, reflecting caution rather than confidence. Price pressures moderated, planned price hikes slipped to a net 30 percent, and somewhat paradoxically the uncertainty index fell to its lowest level of the year (yet remained high by historical standards). The consequent picture is one where firms are still uneasy, yet not panicking, about souring trends in demand, margins, and the broader economic trajectory.

In retail consumption, the recent narrative is similar: signs of slowing momentum but not collapse. September brought a modest downshift from August’s brisk pace as households eased off goods purchases after an unusually strong back-to-school season, even as discretionary spending at restaurants and bars remained solid. Headline retail sales rose just 0.2 percent, with most of the softness concentrated in nonstore retail, autos, and the control group categories (clothing, sporting goods, hobby items, and online purchases), all of which gave back part of the summer’s surge. Food services and drinking places, by contrast, continued to post healthy gains, suggesting the pullback in goods was more a matter of normalization than retrenchment, and that spending momentum remained intact through the end of the third quarter. Despite the mixed monthly profile, strength earlier in the summer left real consumer spending on track for a robust 3.2 percent annualized gain in the third quarter, underscoring that households, however stretched and anxious, were still spending steadily heading into the shutdown.

All of this must be interpreted through the lens of the unprecedented disruption of the federal statistical system. The next industrial production and capacity utilization readings are likely to be released on December 3, but beyond that, the timing of most other releases remains uncertain, and agency leaders must now decide which October data can be reconstructed and on what schedule. The CPI presents the thorniest case: with two-thirds of its 100,000 monthly price quotes gathered through in-person store visits, none of which occurred in October 2025, the probability is high that no October CPI will ever be published, and the November CPI may also be delayed beyond the December FOMC meeting. Missing shelter data will complicate rent calculations well into the first quarter of 2026, while surveys fundamental to unemployment measurement simply cannot be recreated weeks after the fact. Although payroll employment and GDP are less vulnerable, because both can be backfilled from employer and business records, the broader effect is essentially the same: for the next several months, official US data will be patchy, delayed, and in some cases permanently incomplete.

Even once agencies resume full operations, the statistical damage will ripple outward, affecting not only headline indicators but also numerous dependent series and long-running supplements. The unemployment rate may post its first missing observation in more than 75 years, since labor market transition measures cannot be estimated. The education supplement to the October household survey will disappear entirely. More immediately, the delays in the November employment report and CPI mean the Fed’s December rate decision will be made with little to no official visibility on inflation or labor conditions for two full months — an extraordinarily rare and consequential impairment. While most of the record will eventually be repaired, the next several weeks will hinge on crucial judgments by BLS, BEA, Census Bureau, and Federal Reserve officials about accuracy, feasibility, and timing. Those choices will inexorably determine how quickly the economy’s statistical foundation regains its footing.

Back to the macroeconomic outlook, the soft data show the economy’s split personality: services expanding while manufacturing contracts but stabilizes, consumer sentiment collapsing to near-record lows amid deteriorating personal finances and job anxiety even as consumption remains strong, and small-business optimism slipping on weaker earnings and softer sales. The loss of October and November’s core indicators, plus gaps going forward, means that policymakers have no reliable read on inflation momentum or labor market cooling, forcing them to evaluate the economy through anecdotes and information patches rather than a full picture. In this environment, even modest surprises — whether in private-sector labor trackers, ISM reports, or high-frequency spending data — carry outsized weight, shaping market expectations and policy debates in ways that would never occur under normal statistical conditions. For now, the lone clear signal amid the noise is the price of gold, and its message is unmistakably cautious.

Can regulation work when a market changes faster than a case can be litigated?

The Justice Department filed its antitrust case against Google in 2020. By the time Judge Amit Mehta issued his ruling in 2024, AI large-language models had already begun to change how people search for information online. As Judge Mehta put it, “the emergence of GenAI changed the course of this case.”  

Indeed, between the 2020 filing and his 2025 remedy decision, the competitive context shifted fundamentally. Google’s antitrust case was argued in one market, and the remedy will be implemented in another, one in which AI is challenging traditional search engines. 

Google’s dominance is already slipping, with the company rapidly losing market share to ChatGPT and other AI chatbots. 

In other words, the market moved faster than the litigation.  That is not an anomaly; it is becoming the norm.

In fast-moving technology sectors, markets often evolve while regulatory and legal processes are still underway, increasing the risk of ill-timed remedies.

Consider ride-hailing. When New York City debated how to regulate Uber and Lyft in 2015, regulators worked within frameworks built for a capped number of taxi medallions and tightly controlled entry.  Yet between 2015 and 2018, the number of for-hire vehicles in the city surged from 63,000 to over 100,000. By the time comprehensive rules emerged, they governed a market fundamentally different from the one initially under review.  

Commercial drones show a similar pattern. Zipline began large-scale medical drone delivery operations in Rwanda in 2016 and spent years seeking comparable authorization in the United States. The company received emergency FAA waivers in 2020 and Part 135 air carrier certification in June 2022. From 2016 to 2022, Zipline completed hundreds of thousands of international deliveries with the same drones it sought to operate in the US. 

The merger of Hewlett Packard Enterprise (HPE) and Juniper Networks, approved by the Department of Justice subject to certain structural remedies, illustrates similar dynamics. Cisco remains the largest player in networking, yet its market share no longer comes close to the 50 percent it commanded nearly a decade ago. That some state attorneys general are now trying to convince a judge to reverse the DOJ’s decision and unwind the merger of HPE and Juniper — Cisco’s smaller competitors — is preposterous. 

These examples point to the same issue: when regulators analyze markets more slowly than the markets themselves change, they risk setting rules for conditions that no longer exist, and enforcement can arrive when the competitive landscape has already shifted.  

Gail Slater, the Assistant Attorney General for Antitrust, acknowledged as much in a September speech. 

“Premature regulation can be particularly harmful in incipient industries at the early stages of development because it imposes broad, ex ante rules,” she said, “and these rules have the effect of limiting the direction of innovation across the entire industry.”

She’s right. The next generation of disruptive technologies is emerging: AI assistants, autonomous systems, and eventually quantum computing. The question is whether oversight evolves with these markets or continues to govern conditions that no longer exist.  

There is a practical way to address this issue: allow regulation to update over time instead of assuming that market conditions will remain stable. Other domains already do this. Financial regulators periodically update capital and trading rules. Regulation works best when it is continuous, not static, and when it adapts as information changes.

This is not an argument against regulation. It is an argument for regulation that can adjust as firms race to commercialize new technologies. Periodic reviews would allow rules to tighten when risks emerge and to adjust when market conditions change. Regulators can either build systems that adapt alongside these markets or spend the next decade enforcing remedies designed for the last one.

A regulatory framework that adapts isn’t more lenient — it’s more effective. And effective regulation is what keeps markets open to new competitors.

Introduction

The gold standard was a monetary system that defined a unit of a nation’s currency as a fixed weight of gold and made the two mutually exchangeable. For much of modern history, several versions of this pairing served as the foundation of global trade and finance. Under the gold standard, governments promised to redeem paper money for a defined amount of gold on demand, which made the value of currencies stable and predictable. That stability fueled unprecedented global integration, linking the prosperity of many nations through the shared economic logic of gold.

The gold standard was largely abandoned during the twentieth century, but debate over its virtues and flaws endures. Supporters see it as a bulwark against inflation and government overspending; critics call it too rigid for modern economies. Understanding what the gold standard was, how it worked, and why it fell out of favor helps to clarify not only a pivotal era in economic history but also recurring arguments about money, fiscal discipline, and currency stability. 

What Is the Gold Standard?

Under an active gold standard, a country defines its currency as equivalent to a specific weight of gold. Governments or central banks stand ready to buy or sell gold at that fixed price, ensuring that paper money is “as good as gold.” When the United States adopted the classical gold standard, one dollar equaled about one-twentieth of an ounce of gold. Anyone could, in theory, exchange paper currency for that amount of metal. 

This convertibility linked every participating currency to gold, and to one another, creating a system of fixed exchange rates. A dollar, a pound, or a franc all represented certain weights of gold, making international trade and investment far more predictable. Because the supply of gold changed only slowly, the total amount of money governments could print was naturally limited. That constraint is what advocates of the gold standard consider its greatest strength: it restricted governments from printing money without real value behind it. 
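To make the arithmetic concrete (an illustrative calculation using approximate historical parities, not figures drawn from the passage above): the US mint price was roughly $20.67 per troy ounce of gold, while the British parity worked out to about £4.25 per fine ounce. The implied exchange rate is simply the ratio of the two mint prices:

\[
\frac{\$20.67\ \text{per ounce}}{\pounds 4.25\ \text{per ounce}} \approx \$4.86\ \text{per pound},
\]

close to the famous $4.8665 par of exchange that prevailed, apart from small “gold point” deviations reflecting shipping and insurance costs, throughout the classical period.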

Over time, the gold standard evolved in several forms. The gold specie standard, dominant in the nineteenth century, involved coins made of gold circulating alongside paper notes that were fully redeemable for gold. After World War I, many nations moved to a gold bullion standard, in which paper money could be exchanged for large bars of gold held by central banks, but gold coins disappeared from daily use. Later, the gold exchange standard — most notably the Bretton Woods system after 1944 — linked national currencies indirectly to gold through reserve currencies such as the US dollar. Each version reflected an attempt to preserve gold’s stability while adapting to changing political and economic conditions. 

How the Gold Standard Worked

The gold standard operated through a simple but powerful mechanism: every unit of currency was a claim on a fixed quantity of gold held by the issuing authority. Central banks or treasuries maintained gold reserves to back that commitment. When a country ran a trade surplus, gold flowed in; when it ran a deficit, gold flowed out. These movements automatically regulated domestic money supplies and prices. 

This dynamic was captured in the price-specie flow mechanism, first described by the nineteenth-century economist David Hume. If a nation imported more than it exported, gold left the country to pay for those goods. The resulting contraction of the money supply reduced prices and wages, making exports cheaper and imports dearer until balance was restored. Conversely, gold inflows expanded the money supply and lifted prices, damping exports and stimulating imports. In theory, this automatic adjustment kept the global economy in equilibrium without the need for government manipulation. 
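The mechanism can be put into a stylized pair of equations (our own illustrative formalization, resting on a crude quantity-theory assumption rather than anything in Hume’s original essay). Suppose the domestic price level is proportional to the domestic gold stock, the trade balance falls as domestic prices rise relative to the foreign price level, and gold flows match the trade balance:

\[
P_t = k\,M_t, \qquad NX_t = \alpha\left(\frac{P^{*}}{P_t} - 1\right), \qquad M_{t+1} = M_t + NX_t, \qquad k,\ \alpha > 0.
\]

If \(P_t\) rises above the foreign level \(P^{*}\), the trade balance turns negative, gold flows out, the money stock and prices fall, and the deficit shrinks: a negative-feedback loop that, for modest \(\alpha\), pulls domestic prices back toward \(P^{*}\) without any policymaker touching a dial.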

The gold standard’s self-correcting nature was both a discipline and a constraint. Governments could not simply expand credit or pursue inflationary spending without risking a drain of gold reserves. At the same time, this rigidity left little room for active responses to recession, war, or financial panic. 

By the late nineteenth century, the major industrial nations — Britain, Germany, France, Japan, and the United States — had adopted this system. Their currencies were convertible into gold at fixed rates, creating what historians call the classical gold standard (1870s–1914). The resulting predictability underpinned an era of extraordinary growth in trade, capital flows, and industrialization. 

Advantages of the Gold Standard

A number of benefits distinguished the gold standard from later fiat-money systems.

Price Stability 

Because gold production increases only slowly, the total supply of money expands at a slow and generally steady pace. This natural limitation kept long-term inflation low. Over decades, average prices under the classical gold standard remained remarkably stable, especially when compared to the persistent inflation of the fiat-currency era. 

Predictability and Confidence 

The promise that paper money could be converted into gold made currencies credible. Businesses could plan investments and trade agreements without fearing sudden currency devaluations. Fixed exchange rates reduced uncertainty in international commerce and encouraged the flow of capital across borders. 

Fiscal and Monetary Discipline 

Linking money creation to gold restrained governments from overspending or financing deficits by printing currency. Monetary policy was effectively automatic: a nation could not expand its money supply unless it acquired more gold. For this reason, advocates view the gold standard as a guardrail against political manipulation of money and a deterrent to reckless borrowing. 

Promotion of International Trade 

A universal gold anchor simplified exchange and reduced transaction costs. With stable exchange rates, traders and investors faced fewer risks, and international settlements could be made in a currency recognized everywhere. 

Protection Against Manipulation 

Unlike modern systems, in which central banks can devalue currencies or engage in “quantitative easing,” the gold standard made competitive devaluations and “currency wars” far more difficult. Its rules constrained the temptation to seek economic advantage through monetary distortion. 

Encouragement of Saving and Investment 

Stable prices preserved the purchasing power of money, fostering an environment in which long-term planning, capital accumulation, and thrift were rewarded. Investors could rely on real returns rather than on nominal gains eroded by inflation.

To the gold standard’s defenders, these traits explain why the classical gold standard coincided with rapid industrialization, robust trade expansion, and rising living standards across much of the world. 

Alleged Disadvantages of the Gold Standard: A Balanced Examination

Critics of the gold standard see those same features — discipline and rigidity — as liabilities. But many alleged flaws reflect implementation failures or modern misinterpretations, rather than inherent defects. 

Inflexibility and Limited Policy Response 

Opponents argue that tying money to gold prevents governments and central banks from acting decisively during crises. Under the gold standard, expanding the money supply or lowering interest rates risked losing gold reserves. Supporters counter that this discipline prevented the political misuse of money and forced governments to confront fiscal realities instead of masking them with currency inflation. 

Deflationary Tendencies 

Because gold supplies grow slowly, economies under the standard could face mild deflation during periods of rapid productivity growth. Critics warn that falling prices increase debt burdens and discourage investment. Much of this “deflation,” however, was of the benign kind — reflecting efficiency gains rather than collapsing demand — and often coincided with strong economic growth. 

Vulnerability to Gold Supply Shocks 

The discovery of new gold deposits could modestly increase money supplies, while scarcity could constrain growth. Still, such changes were gradual and predictable (about one percent per year) compared with the abrupt inflationary shocks that fiat regimes can unleash through policy error or political expediency. 

Constraints on Growth 

Some economists claim that a gold-based system limits credit creation. Historically, however, banking systems developed fractional-reserve practices that allowed credit to expand well beyond physical gold holdings, so long as public confidence remained intact. The industrial revolutions of Britain, Germany, and the United States unfolded entirely under gold-linked regimes. 

Difficult International Coordination 

The interwar period demonstrated how uneven adherence to gold rules could destabilize the system. Yet the problem lay in inconsistent policies — overvalued currencies, protectionist trade barriers, and poor coordination — rather than in gold itself. 

Exposure to Crises 

Some have claimed that the gold standard worsened bank runs by restricting emergency liquidity. But under the classical system, private clearinghouses often filled that role effectively by issuing temporary certificates and policing member banks. Such crises also occur under fiat systems; their frequency since 1971 suggests that discretion is no panacea. 

Historical Instability 

The Great Depression is often cited as proof that the gold standard was fatally flawed. In fact, many economists — including Barry Eichengreen and Milton Friedman — acknowledge that poor policy choices, such as Britain’s overvalued return to pre-war parity and the Federal Reserve’s inaction in 1931-33, deepened the downturn. Nations that left gold earlier — like Britain in 1931 — recovered faster than those that clung rigidly to it. The failure was less about gold itself than about governments’ unwillingness to adapt intelligently. 

In short, while the gold standard imposed constraints, many of its supposed defects stemmed from mismanagement or misunderstanding. Every monetary system involves trade-offs; gold’s discipline may appear harsh, but it also forestalled the chronic inflation and debt accumulation that define modern economies. 

Rise of the Gold Standard

Gold has served as money for millennia because of its scarcity, divisibility, and durability. Ancient civilizations used gold coins as units of account and stores of value, but the formal linkage between gold and national currencies developed gradually with the rise of modern banking.

In early modern Europe, goldsmiths issued paper receipts for stored metal, which began circulating as money. The realization that not all depositors redeemed their gold simultaneously led to fractional-reserve banking — a key innovation that allowed credit expansion beyond physical reserves. 

Britain was the first major nation to codify a gold standard, officially adopting it in 1821 after years of wartime inflation. Its global influence ensured that others followed: Germany in 1871, the United States in 1879, France and Japan soon thereafter. By the 1870s, the classical gold standard had become the backbone of international finance. Currencies were freely convertible into gold, exchange rates were fixed, and trade imbalances were corrected through automatic gold flows. 

This system coincided with rapid globalization. Capital moved freely, shipping and communication costs fell, and international investment flourished. The gold standard’s credibility helped unify the world economy in a way unmatched until late in the twentieth century. 

Collapse of the Gold Standard

The end of the gold standard came not from economic theory, but from the pressures of war, depression, and political expedience. 

World War I (1914) 

The classical gold standard’s first collapse came when belligerent nations suspended convertibility to finance massive military spending. Paper money flooded economies, and inflation followed. By the war’s end, the system was in tatters. 

The Interwar Gold Exchange (or “Managed”) Standard (1919–1933) 

After the war, several nations tried to restore the pre-war order. Britain returned to gold in 1925 at its old parity, overvaluing the pound and triggering deflation. Other countries followed with similar missteps, attempting to maintain gold convertibility without the fiscal discipline that had once supported it. The result was a fragile and uncoordinated system that collapsed under the strain of the Great Depression. Britain abandoned gold in 1931; the United States followed in 1933 for domestic use, though it maintained limited international convertibility. 

The Bretton Woods System (1944–1971) 

In the wake of World War II, nations sought a more flexible gold-based order. The Bretton Woods agreement pegged other currencies to the US dollar, while the dollar itself was convertible into gold at $35 per ounce. For two decades, the system promoted stability and growth. Yet in its success were the seeds of its downfall. As global trade expanded, the supply of dollars grew far faster than US gold reserves. Massive spending on the military in Vietnam and on expansive social programs at home fueled deficits and inflation. Confidence in the dollar waned. 

In August 1971, President Richard Nixon suspended the dollar’s convertibility into gold — a moment known as the Nixon Shock. Within two years, the world’s major economies had shifted to floating exchange rates. By 1973, the gold standard, in all its forms, had come to an end. 

Conclusion

The gold standard shaped global economic history for nearly two centuries. It imposed a clear, transparent rule linking money to a tangible asset, thereby restraining inflation and curbing political manipulation. That very discipline, however, proved incompatible with the fiscal demands of modern warfare, welfare states, and activist monetary policy. 

The shift to fiat money systems brought flexibility to spend more but also chronic inflation, recurring financial crises, and rising public debt. Today, few economists advocate a full return to gold, recognizing that the scale and complexity of global finance make it impractical. But the gold standard remains a touchstone in debates over monetary integrity, symbolizing a time when money was anchored in something real — and when the value of currency depended less on trust in the discretion of governments than on the weight of a metal measured in ounces. 

Even if the world never returns to a gold-based system, understanding how it worked — and why it failed — offers enduring lessons. Stability and discipline come at a cost, but so does the freedom to create money without constraint. The long arc of monetary history suggests that neither extreme provides a permanent answer, yet the gold standard endures as a benchmark against which every modern experiment is, in some sense, still judged.

References

Bordo, M. D., & Schwartz, A. J. (Eds.). (1984). A Retrospective on the Classical Gold Standard, 1821–1931. University of Chicago Press. 

Bordo, M. D. (1981). The classical gold standard: Some lessons for today. Federal Reserve Bank of St. Louis Review, 63(5), 2–17. 

Eichengreen, B. (1996). Globalizing Capital: A History of the International Monetary System (2nd ed.). Princeton University Press. 

Eichengreen, B., & Sachs, J. (1985). Exchange rates and economic recovery in the 1930s. Journal of Economic History, 45(4), 925–946. 

Friedman, M., & Schwartz, A. J. (1963). A Monetary History of the United States, 1867–1960. Princeton University Press. 

Luther, W. J., & Earle, P. C. (Eds.). (2021). The Gold Standard: Retrospect and Prospect. American Institute for Economic Research. 

Menger, C. (1892). On the origin of money. Economic Journal, 2(6), 239–255. 

Officer, L. H. (2008). The price of gold and the exchange rate since 1791. Journal of Economic Perspectives, 22(1), 115–134. 

Rockoff, H. (1984). Drastic Measures: A History of Wage and Price Controls in the United States. Cambridge University Press. 

Smith, V. (1990). The Rationale of Central Banking and the Free Banking Alternative (L. H. White, Ed.). Liberty Fund. (Original work published 1936)

On Capitol Hill this week, five Democratic senators accused the Trump administration of “sweetheart deals with Big Tech” that have “driven up power bills for ordinary Americans.” 

Their letter, addressed to the White House, faulted the administration for allowing data-center operators to consume “massive new volumes of electricity without sufficient safeguards for consumers or the climate.”

But the senators’ complaint points to a deeper reality neither party can ignore: artificial intelligence is changing America’s energy economy faster than policy can adapt. Every conversation with ChatGPT, every AI-generated image, every search query now runs through vast new physical infrastructure — data centers — that consume more electricity than some nations. 

The world’s appetite for digital intelligence is colliding with its appetite for cheap, reliable power. 

A New Industrial Landscape 

The anonymous-looking gray boxes—bigger than football fields—rising across Virginia, Texas, and the Arizona desert look like nothing special from the highway. Inside, however, they house the machinery of the new economy: tens of thousands of high-end processors performing trillions of calculations per second. These are the “intelligence factories,” where neural networks are trained, deployed, and refined — and where America’s energy system is pushed to its limits and beyond. 

“People talk about the cloud as if it were ethereal,” energy analyst Jason Bordoff said recently. “But it’s as physical as a steel mill — and it runs on megawatts.” 

According to the Pew Research Center, US data centers consumed about 183 terawatt-hours (TWh) of electricity in 2024 — some 4 percent of total US power use, and about the same as Pakistan. By 2030, that figure could exceed 426 TWh, more than double today’s level. The International Energy Agency (IEA) warns that, worldwide, data-center electricity demand will double again by 2026, growing four times faster than total global power demand. 

The driver is artificial intelligence. Training and running large language models (LLMs) such as ChatGPT requires enormous computing clusters powered by specialized chips — notably Nvidia’s graphics processing units (GPUs). Each new generation of AI systems multiplies power requirements. OpenAI’s GPT-4 reportedly demanded tens of millions of dollars’ worth of electricity just to train. Multiply that by hundreds of companies now racing to build their own AI models, and the implications for the grid are staggering. 

Where the Power Is Going 

The American and global epicenter (for now) of this new build-out remains Loudoun County, Virginia — nicknamed “Data Center Alley” — where nearly 30 percent of the county’s electricity now flows to data facilities. Virginia’s utilities estimate that data centers consume more than a quarter of the state’s total generation.

Elsewhere in America, the story is similar. Microsoft’s burgeoning data center complex near Des Moines has forced MidAmerican Energy to accelerate new natural-gas generation. Arizona Public Service now plans to build new substations near Phoenix to serve a cluster of AI facilities; Texas grid operator ERCOT says data centers will add 3 gigawatts of demand by 2027. 

And the trend, by the way, isn’t limited to electricity. Most facilities require water for cooling. A single “hyperscale” campus can use billions of gallons per year, prompting local backlash in drought-prone regions.

The Political Blame Game 

Soaring demand has begun to translate into electric-rate filings. US utilities asked for $29 billion in rate increases in the first half of 2025, nearly double the total for the same period last year. Executives cite “data-center growth and grid reinforcement” as drivers. 

And so, we get the letter from Senate Democrats — among them Elizabeth Warren and Sheldon Whitehouse — urging the Department of Energy to impose “efficiency standards” and “consumer protections” before authorizing new power contracts for AI operators. “We cannot allow Silicon Valley’s hunger for compute to be fed by higher bills in the heartland,” they wrote. 

The Trump administration shot back. Press Secretary Karoline Leavitt said, “The president will not let bureaucrats throttle America’s leadership in AI or its supply of affordable energy. If the choice is between progress and paralysis, he chooses progress.” 

That framing, “progress versus paralysis,” captures the larger divide. The administration has prioritized energy abundance, reopening leasing on federal lands, greenlighting LNG export terminals, rolling back environmental restrictions of all kinds, and signaling renewed support for coal and nuclear power. Democrats, fixated on climate commitments, have continued to oppose expanded drilling in Alaska’s Arctic and new offshore projects, while pressing for data centers to run on renewables. 

Powering the AI Boom 

Without continuous electricity, the AI boom falters. Nvidia, Microsoft, and OpenAI are already pushing the limits of available capacity. In April, Microsoft confirmed it will buy power from the planned restart of the Three Mile Island Unit 1 reactor — mothballed since 2019 — to feed its growing data-center fleet in Pennsylvania. “We’re essentially connecting a small city’s worth of demand to the grid,” said an energy executive involved in the project. “Data centers are an order of magnitude larger than anything we’ve built for before.” 

That “small city” reference is not an exaggeration. A single hyperscale facility can draw 100 megawatts — roughly the load of 80,000 households. Dozens of such projects are under construction. 
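The arithmetic behind that comparison checks out, on the back-of-the-envelope assumption (ours, not the article’s) that an average US household uses roughly 10,500 kWh of electricity per year:

\[
\frac{100{,}000\ \text{kW}}{80{,}000\ \text{households}} = 1.25\ \text{kW per household} \approx 10{,}950\ \text{kWh per year},
\]

which is in line with typical US residential consumption, provided the facility actually draws its full 100 megawatts around the clock, as AI training loads tend to do.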

And while the industry’s largest players are also buying wind and solar power contracts, they admit that renewables alone cannot meet the 24-hour load. “When the model is training, you can’t tell it to pause because the sun set,” one data-center engineer quipped. 

The Economics of Constraint 

From an economic perspective, what matters is not only rising demand but constrained supply. Regulations restricting oil, gas, and pipeline development keep marginal electricity generation expensive. Permitting delays for transmission lines slow the build-out of new capacity. At the same time, federal subsidies distort investment toward intermittent sources that require backup generation — often natural gas — to stabilize the grid. 

A perfect storm of policy contradictions may be brewing: a government that wants both a carbon-neutral grid and dominance in energy-hungry AI. 

“The irony is that the very politicians demanding AI leadership are the ones making it harder to power,” said economist Stephen Moore. “You can’t have artificial intelligence without real energy.” 

In a free market, higher demand would spur rapid expansion of supply. Investors would drill, build, and innovate to capture new profit opportunities. Instead, production and permitting are politically constrained, so prices must rise until demand is choked off. That is the dynamic now visible in electricity bills — and in the Senate’s sudden search for someone to blame. 

The Global Race 

Complicating it all, to say the least, is the geopolitical dimension. China, the European Union, and the Gulf states are racing to build their own AI infrastructure. Beijing’s Ministry of Industry announced plans for 50 new “intelligent computing centers” by 2027, powered largely by coal. In the Middle East, sovereign wealth funds are backing data-center projects co-located with gas fields to guarantee cheap electricity. 

If the US restricts its own energy production, it risks ceding the field. “Energy is now the limiting reagent for AI,” venture capitalist Marc Andreessen wrote this summer. “Whichever country solves cheap, abundant power wins the century.”

That insight revives old debates about industrial policy. Should Washington subsidize domestic chip foundries and their power plants, or should it clear the regulatory thicket that deters private capital from building both? Innovation thrives on liberty, not micromanagement. 

The New Factories 

Are data centers so different from factories of the industrial age? They convert raw inputs like electricity, silicon, cooling water, and capital into valuable outputs: trained models and real-time AI services. But unlike the factories of the past, they employ few workers directly. A billion-dollar hyperscale facility may have fewer than 200 staff. That does not sit well with the communities in which the vast data centers are located. The wealth is created upstream and downstream: in chip design, software, and the cascade of productivity gains AI enables. 

Still, the indirect productivity is vast. AI-driven logistics shave fuel costs, AI-assisted medicine accelerates diagnosis, and AI-powered coding tools raise output per worker. But all of it depends on those humming, appallingly noisy, heat-filled halls of servers. As OpenAI’s Sam Altman remarked last year, “A lot of the world gets covered in data centers over time.” 

If true, America’s next great industrial geography will not be steel towns or tech corridors, but the power corridor: regions where electricity is plentiful, cheap, and politically welcome. 

Already, states like Texas and Georgia are advertising low-cost energy as a lure for AI investment. 

Markets Versus Mandates 

From a free-market perspective, the lesson is straightforward. Economic growth follows energy freedom. When government treats energy as a controlled substance — rationed through regulation, taxed for vice, or distorted by subsidies — innovation slows. When markets are allowed to meet demand naturally, abundance results. 

In the early industrial age, the United States became the world’s workshop because it embraced abundance: of coal, oil, and later electricity. Every new machine and factory depended on those resources, and entrepreneurs supplied them without central direction. Today’s equivalent is the AI data center. Its prosperity depends on letting energy producers compete, invest, and innovate without political interference. 

Politics Ahead 

Over the next year, expect the power issue to dominate AI politics. Democrats will press for efficiency mandates and carbon targets; Republicans will frame energy freedom as essential to national strength. Federal officials already are discussing a kind of “clean AI” certification system tied to renewable sourcing — critics say that could amount to a de facto quota on computing power. 

Meanwhile, utilities are rethinking grid design for a world where data centers behave like factories that never sleep. The market is responding: small, modular nuclear reactors, advanced gas turbines, and geothermal projects are attracting venture funding as potential baseload sources for AI campuses. 

For policymakers, the challenge is to resist the urge to micromanage. As AIER’s scholarship often finds, spontaneous order, not centralized control, produces both efficiency and resilience. Allowing prices to signal scarcity and opportunity will attract the investment necessary to balance America’s energy equation.

The Freedom to Compute 

In the end, the debate over data centers and electricity bills is really about the freedom to compute. The same economic laws that governed the Industrial Revolution still apply: productivity rises when entrepreneurs can transform energy into work — whether mechanical or digital. 

Artificial intelligence may be virtual, but its foundations are unmistakably physical. To sustain the AI boom without bankrupting ratepayers, the United States must choose policies that unleash energy production rather than constrict it. 

The “cloud” will always have a power bill. The question is whether that bill becomes a burden of regulation or a dividend of freedom.

Central planners just can’t help themselves. They feel obligated to solve the world’s problems. Consider this year’s Orwellian-named “Conference of the Parties” (COP-30), the thirtieth annual climate conference sponsored by the United Nations Framework Convention on Climate Change (UNFCCC).

Thousands of government officials and members of international non-government organizations (which, of course, get millions of dollars from governments) have descended on Belém, Brazil for their annual COP-30 gathering. The past two gatherings, COP-29 in Azerbaijan and COP-28 in the United Arab Emirates, were rough, to say the least. 

They failed to get big commitments from wealthy countries, they didn’t win over many less-developed countries, and increasingly the conference looked co-opted by fossil fuel interests. That low bar means this year’s conference may hit a brighter note – although less than halfway through the conference, protestors tried to force their way in and injured several security personnel. So maybe we’ll see a new low in the war to “save” the planet.

For an advocate of freedom, these annual COP meetings can be disheartening. They are an annual reminder of the immense resources and machinery for coercion and central planning that keep grinding relentlessly on (with taxpayer funds) no matter how unhappy the people of the world are with their “benevolent” planning. As with most centralized solutions, the proposals put forward at COP-30 will do little to help or to empower those who are really at risk.

The Amazon rainforest figures prominently in the climate agenda this year. In fact, the indigenous protestors at this year’s climate conference live in and near that rainforest. COP attendees busily work away at hammering out agreements for wealthy countries to compensate poorer countries for “environmental damages.” These billions of dollars that attendees hope to extort from guilt-ridden governments and wealthy corporate managers will enrich the attendees and their cronies by funding more travel, more conferences, and more studies. The money will also grease the palms of government officials in recipient countries. And then whatever crumbs are left might make their way into the hands of those most harmed by environmental problems.

But that’s not what these protestors want. As one protestor said, “We can’t eat money…. We want our lands free from agribusiness, oil exploration, illegal miners and illegal loggers.” Brazil’s left-leaning President Luiz Inácio Lula da Silva (known as “Lula”) has said that this year’s conference and the government of Brazil are committed to working with indigenous communities. Unfortunately for Lula and the COP-30 attendees, paternalism does not seem to be what these indigenous communities want.

There is a much better and simpler solution: protect the rights of the vulnerable. The indigenous peoples in Brazil don’t want a handout orchestrated by the global elite; they should have the property rights to their land defined and protected. Ironically, a better regime of property rights and contract will benefit the rainforest more than carbon offset schemes.

Clear cutting and deforestation happen because of deficiencies in property rights and the rule of law. People rush in to get as much wood as they can because if they don’t, someone else will. Logging companies that own their own land, on the other hand, rarely engage in clear-cutting because it will reduce the value of their land. Instead, they manage their land with an eye towards the future. They plant new trees. They protect their trees from fire and other calamities. 

There is no reason to think the indigenous peoples in and around the Amazon rainforest will act differently if they are given real ownership of their land. They will have the best knowledge of how to manage the rainforest – how to protect it, how to harvest its resources, and how to maintain its value. 

They should not live under the whims or the hubris of lawyers in Brussels or The Hague, do-gooder corporate management, or small armies of government bureaucrats around the world. Instead, they should have autonomy and the right to determine their own future – to decide what is best for themselves and their families – not to become clients in a perverse environmental patronage system.

And this lesson from indigenous peoples in the Amazon rainforest applies in dozens of other ways to people around the globe. The UN/Davos/NGO elites want to regulate every part of our lives in their quixotic quest to prevent the planet from warming “too much.” We will lose the right to decide how to ship goods, grow crops, generate electricity, travel, and otherwise govern ourselves. 

Unfortunately, the global environmental machinery will continue grinding unless those in power are replaced by champions of freedom and innovation.

Do you sell cupcakes, run a home photography studio, or tutor kids in your living room? If so, you might be breaking the law.

In the US, zoning ordinances often treat modest home enterprises as threats to the neighborhood. If you’re just running an online business, local governments generally won’t bother you, but if clients are coming to your home, local officials will try to limit the visibility and impact of your business. Have these regulations gone too far? Should state governments tell local governments to leave home-based businesses alone, within certain limits?

Major companies have gotten their start in someone’s garage. At this point, it’s almost a mandatory back-story for any Silicon Valley company. Apple, Hewlett-Packard, Google, and Amazon all boast garage-based origins.

And it’s not just fledgling tech startups. In residential neighborhoods across America, home-based businesses are the hidden sinews of resilient local economies: the mom who takes care of several neighborhood kids while their parents work, the independent tax accountant hanging out his shingle, the Russian immigrant baking and selling honey cakes to those in the know, and the group of families that started a micro-school during the pandemic.

The rise of remote work makes home-based businesses more viable, because the potential customer base for small neighborhood retail is growing. More Americans could now benefit from the convenience of doing business close to home. Already, about 50 percent of all small businesses and 69 percent of startups are home-based.

New Hampshire is the only state with a complete dataset of local zoning regulations on home-based businesses, based on a survey of laws conducted by the Initiative for Housing Policy and Practice at Saint Anselm College. About 20 percent of the state’s independent zoning authorities outright ban home-based businesses and occupations in at least one residential district. Twenty towns ban the Hewlett-Packard origin story — a business in a detached garage — outright. Twenty-six more require a special permit. Other finicky regulations include requirements to build more parking (56 towns), prohibitions on building more parking (14 towns), site plan review, which comes with a public hearing and potentially costly tests and studies (89 towns), and strict limits on the square footage a resident may use for running a business (22 towns set limits at or below 600 square feet).

You can bet that if “Live Free or Die” New Hampshire has these regulations, other states’ rules are at least as onerous. In 2021, Florida legalized home-based businesses in all residential districts but otherwise allows local governments to regulate their operation. New Hampshire, California, Washington, Oregon, and Colorado have legalized home-based childcare statewide.

What would happen if we relaxed the rules on home-based businesses? 

Japan’s experience may offer a glimpse. American tourists return entranced by the tiny shops and offices that people run out of their homes. In Japan’s “exclusively residential” zoning category, “[s]mall shops, dental clinics, hair salons, and day cares are all permitted.” The widespread availability of small shops makes life more convenient if you live in a small house. You no longer have to store all the daily necessities of life within your residence. Jane Jacobs made the safety case for mixed-use neighborhoods in The Death and Life of Great American Cities. With more “eyes on the street,” someone is more likely to see anything bad that happens and take action.

Now, there are good reasons for Americans not to adopt Japanese zoning wholesale. Japan’s homicide rate is roughly one twenty-fifth of America’s. Americans are less likely to tolerate strangers roaming around their neighborhoods. And while Jacobs’ “eyes on the street” thesis explains why mixed-use neighborhoods are safer than downtowns that empty out after work hours, bringing bars and perhaps some other commercial uses into residential neighborhoods appears to increase crime (though higher residential density reduces it).

Still, much of the US is extremely safe, and it just isn’t plausible that making it easy to start a home daycare is going to spawn a neighborhood crime wave. Reasonable limits make sense for home businesses that bring in lots of vehicle traffic, create lots of noise, or attract an undesirable clientele, but if we set aside these problematic uses, why not let people offer more services and sell more stuff out of their homes?

Opponents might point to the virtues of “local control.” State legislators, they say, have less knowledge of local neighborhoods and potentially incompatible uses than local officials do.

Indeed, local governments are more competent at regulating commercial uses than homebuilding. Local governments over-regulate housing because they capture only part of the benefits of new local housing while paying the full costs. New property tax revenues and more up-to-date housing stock do benefit a locality, but the benefits of lower housing costs, lower homelessness, and stronger business conditions from new supply accrue to a larger region.

By contrast, localities reap the lion’s share of the benefit of allowing new business, including more employment opportunities, higher property tax revenues, and more convenience and amenities for residents. So we should expect local governments to treat commercial uses better than they do dense residential uses, and that’s generally what they do.

Even so, localities often err on the side of overregulation. Local officials lack the information, the incentives, and the flexibility that market prices provide. They can’t know what services people really want, public hearings amplify anti-change voices, and getting variances is too costly for the average homeowner.

Thus, state legislatures can play a useful role, applying their legal and administrative expertise to carve out safe harbors for low-impact uses that local officials may not have even considered.

One example of this opportunity is in-home childcare. Many towns regulate in-home childcare like any other home-based business. But childcare is a low-impact use, and there’s no reason to treat it the same way we would, for example, auto repair. The lack of affordable childcare is also a major national economic issue. Thus, it’s no surprise that states have started to set aside local zoning regulations that ban or strictly regulate home childcare.

Could states also require or encourage localities to allow entrepreneurs to do business at home in other ways? States have the expertise and capacity to help local governments figure out opportunities to ease the burdens on small-scale entrepreneurship, without turning residential neighborhoods into central business districts. Just ask Steve Jobs or Jeff Bezos. America’s garages could be launching pads for global enterprise if we have the vision to let them.

The modern American right could stand to gain from the insight of Richard M. Weaver. Weaver, a twentieth-century conservative of the Southern tradition, perceived the dangers of radical ideologies as well as the extent to which American thinking offered a viable alternative. Amid the disagreements and controversies of our present moment, today’s various libertarians, conservatives, classical liberals, and others are in need of clear thinking about our own ideas as well as those of our opponents. As such, we might learn from Weaver’s powerful dissections of authoritarianism. 

A key component of Weaver’s philosophy was a recognition of the natural distinctions of individuals within society, what might also be termed “social bond” individualism. As the revolutionary movements of the twentieth century demonstrated through both communism and fascism, the overruling of this human basis was a harbinger of immense danger to the freedom of individuals and the natural order of their civilization. However, Weaver explained how, according to its largely Jeffersonian principles, the American South perceived the threat of these destructive movements sooner and more substantively than other regions.  

In a 1944 essay entitled “The South and the Revolution of Nihilism,” Weaver laid out the reasons why fascism was (and still is) fundamentally opposed to the genuine traditions of American thought and society. He argued that fascism was at its core a revolutionary break with the Enlightenment ideas that had themselves transformed much of the Western world, especially since the French Revolution. Since, in his view, the South never fully entered the French Revolutionary schema of breaking down all social distinctions and “deep-rooted traditions,” fascism was rightly perceived not as a restoration of lost principles, but as societal upheaval.  

Weaver contended that the American South, never having wholly embraced the leveling forces of the Enlightenment, stood rooted in its own history, which it had “learned the hard way.” He depicted the region and its society as being composed of individuals who operated within a unique sense of spontaneous customs and social bonds to one another. Fascism, by contrast, was understood as a movement destructive of society’s natural structure and instead tended towards the “substitution of the formless mass manipulated by a group of Machiavellians.” This distinction meant that fascism was a malignant and incompatible force not to be trifled with or appeased, despite the wishful efforts of many in the West. 

What was really at issue during the Second World War, according to Weaver, was a foundational conflict between traditional arrangements developed from the bottom-up versus regimented structures imposed upon society from the top-down. Centralization meant an alliance between the “mass” and a single dictatorial leader, a stark contrast to the decentralized approach with its roots in local authority and individualism. Seeing fascism as the “extreme proletarian nihilism” that it was, Weaver perceived “that the promise of fascism to restore the ancient virtues is counteracted by this process, and that the denial of an ethical basis for the state means the loss of freedom and humanity.” Despite the fascists’ claims of returning to lost traditions, Weaver and other Southerners understood that the heavily centralized nature of fascist regimes negated the spontaneous orders people develop within society. In essence, fascism may give lip service to traditional social arrangements, but it is at its core revolutionary because it seeks to impose an order, rather than being born out of a pre-existing order. 

Having described fascism as the authoritarian concoction that it was, Weaver likewise held no illusions about the other revolutionary system of the twentieth century, communism. In his excellent 1957 article, “Life Without Prejudice,” Weaver skewered the Marxist tactic of sowing seeds for a Utopia that never blooms. He noted how communists recognized that to implement their own dogmatic vision of the world, they must first clear away the existing society, one pillar at a time. Whether playing upon public resentment about the “existence of rich men,” or “the right to acquire and use property privately,” or some other issue, communists seek to “vilify this as founded upon ‘prejudice.’” It was, in effect, a nihilistic strategy for implementing their own prejudices.  

While the term “prejudice” is used less often in this sense today than in Weaver’s time, it is not difficult to see the same strategy at play in modern discourse. There are numerous examples in recent years of people being pilloried as “racist,” “sexist,” “homophobic,” “antisemitic,” or guilty of various other “prejudices” that supposedly negate argument and justify cancellation or worse. As Weaver made clear, this is the communist deconstruction tactic at work once more. This strategy is crucial for wannabe tyrants, who must first defeat the existing society before ushering in their own manufactured one, imposed from above, much like fascism. They must inspire skepticism about the current order by oversimplifying everything as arising from malicious “prejudices” held by their opponents.

Instead, Weaver noted the natural role of prejudice, rightly understood, in individual thinking and personality. He explained that not every aspect of an individual’s thoughts and actions could be verified by a mountain of facts or logic. In contrast to the radicals who claimed objective certainty about what’s best for everyone, “The man who frankly confesses to his prejudices is usually more human and more humane. He adjusts amicably to the idea of his limitations. A limitation once admitted is a kind of monition not to try acting like something superhuman. The person who admits his prejudices, which is to say his unreasoned judgments, has a perspective on himself.” This perception is a meaningful counter to the moral framework of communism because it elevates humility above ideological presumption; it is an endorsement of genuine principles over presuppositions.  

Ultimately, Richard Weaver presented insightful arguments for rejecting the devastating radicalisms of his era. It would stand to reason, then, that in our own uneasy era we too could gain by understanding the alternative he championed. 

To reject the upheavals offered by communism and fascism, the American right must instead reinforce its principles by embracing its vast intellectual tradition. We can reaffirm our commitments to liberty and order while so many others give way to the siren songs of centralized collectivism, whether fascist, communist, or otherwise. The stringencies of ideology ultimately impair our sense of humanity and can justify disastrous outcomes, as the history of the twentieth century attests. As Weaver put it, we must recognize that schemes for “a life without prejudice” are as inhuman and destructive as the life pursued strictly for the “satisfaction of physical man.” 

What kind of goods and experiences comprise a “normal life”? Writing in the late nineteenth century, Henry George thought millionaires lived abnormally because they had telephones in their bedrooms. Looking back, it’s remarkable how quickly the abnormal becomes ordinary. Today, even the poorest people — not only in rich countries but also in developing ones — carry a phone (which does much more than ring) in their pocket.

From Luxuries to Necessities 

That’s one of the miracles of the free market. French sociologist Gabriel Tarde noticed that forks and spoons were once luxuries reserved for the elite, but by his time had become universal. Ludwig von Mises drew inspiration from Tarde’s insight, calling it one of capitalism’s greatest virtues: the transformation of luxuries into necessities. “What was once a luxury becomes in the course of time a necessity,” he wrote. In Mises’s view, this is the inherent tendency of capitalism — to shorten that time lag and make the luxurious accessible to the masses. One might add that in socialist economies, the opposite happens: necessities become luxuries.

But this transformation is only possible through freedom — the freedom of consumers to experiment with new products, and of producers to innovate and take risks. On the supply side, the liberty of entrepreneurs and capitalists to test new methods of production — even when those methods appear “unjust” or “wasteful” at first — opens the door for millions to enjoy the fruits of innovation. As F. A. Hayek put it, capitalism enables “experimentation with a style of living that will eventually be available to many.”

Yet supply is only half the story. Consumers play an equally vital role. Mises called capitalism the sovereignty of the consumers. And yet, in recent years, a “war on consumers” has emerged from both the left and the right.

The War on Consumers

Five months ago, Donald Trump, defending his trade war with China, remarked, “Maybe the children will have two dolls instead of thirty dolls.” On the other side, Bernie Sanders has declared that we “don’t need 23 choices of deodorant or 18 choices of sneakers when kids are going hungry.” 

In both cases, ordinary consumers — those walking through Walmart comparing groceries or choosing between brands — are portrayed as the problem. “Why do you need thirty dolls?” they ask. “Why twenty-three deodorants?”

This disdain for consumer choice has deep intellectual roots — not just in populist rhetoric but in academia. From Thorstein Veblen’s theory of conspicuous consumption to John Kenneth Galbraith’s The Affluent Society, many thinkers have looked down on consumer tastes. Galbraith once dismissed American cars as “big, ungainly, [and] unfunctional.” But ungainly by whose standard? Unfunctional according to what measure? The essence of the free market is that consumers decide for themselves — and the normative defense of this system is straightforward: individuals know their own interests better than any politician or professor.

Critics — from Veblen to Marxists who claim capitalists “manufacture” desires — forget what liberal economists understood well: that consumption in a modern economy is not merely about survival, but experience. We don’t just buy things to use them; we buy them to experience them. Marketing, far from being pure manipulation, is part of that experience. Buying a perfume endorsed by your favorite celebrity is not just about smelling pleasant — it’s about identity, aspiration, and emotion. Because preferences are subjective, it’s meaningless to draw a hard line between “needs” and “wants.” Who could have predicted that humanity “needed” airplanes or automobiles before they existed?

Through trial and error, consumers discover what they value. There is no objective measure of “need.” In fact, the unpredictability of human desire is itself a defense of the free market: we need its discovery process to learn what tomorrow’s needs will be. What looks like frivolous consumption today often becomes the gateway for widespread prosperity tomorrow.

Critics of marketing also ignore basic business logic. Which is easier for a firm: to spend vast sums inventing a new “need” and then developing a product for it, or simply to observe what people already want and produce accordingly? The latter is common sense. Marketing’s informative function is often overlooked; if it were purely deceptive, businesses would have little incentive to rely on it. Real profits come from loyal, long-term customers — something deception cannot buy.

As the economist Stanley Lebergott once wrote, “It is an unacknowledged excellence of modern economics that its foundations are pitched on the sands of human desire.” Modern economies achieve miracles not through the commands of kings or planners, but through individuals pursuing their own interests — and that is a virtue, not a sin. 

This “unacknowledged excellence” is the moral beauty of the liberal market order: where consumers are free to choose, society has no forced mission — and yet it prospers precisely because of that freedom.

For more than a century, America’s stock listings have run through New York: first Wall Street’s New York Stock Exchange, later joined by Nasdaq’s MarketSite in Times Square. That may soon change. On September 30, 2025, the US Securities and Exchange Commission approved the Texas Stock Exchange (TXSE) to operate as a national securities exchange.

Headquartered in Dallas and backed by major financial institutions, TXSE plans to begin trading in early 2026 — marking the first serious challenge in decades to the entrenched exchange duopoly and opening a new chapter for American capital markets.

Texas offers both symbolism and substance for such an endeavor. With roughly $2.7 trillion in annual economic output, the state represents about one-tenth of the entire US economy. It is home to more than a tenth of the nation’s publicly listed companies, and its mix of rapid growth, favorable taxes, and business-friendly regulation makes it a natural candidate for a financial hub. The creation of a new national exchange in Dallas isn’t just a regional milestone — it’s a sign that financial innovation is no longer bound to Manhattan’s geography or culture.

The Texas Stock Exchange aims to reintroduce competition into a sector that has grown listless and increasingly consolidated. It’s undoubtedly true that the existing exchanges have played a crucial role in maintaining transparency and corporate accountability; their listing standards have strengthened governance and investor protections. Yet those same regulatory frameworks have also drifted into areas far removed from financial performance. In recent years, both the NYSE and Nasdaq have woven social and political priorities — what critics describe as wokeness — into disclosure and board-composition rules. These mandates are costly distractions from capital formation. The TXSE proposes a more neutral approach: maintaining high financial and ethical standards while allowing firms to focus on profitability, innovation, and shareholder value.

What distinguishes the TXSE is not a break from federal oversight — the SEC will supervise it under the same 1934 Exchange Act framework — but a fresh philosophy of exchange governance. Its listing rules, approved by the SEC in late 2025, emphasize issuer friendliness without relaxing quantitative standards. Companies may request confidential pre-application eligibility reviews at no cost, an innovation that can save months of uncertainty and advisory fees. The exchange also plans lower recurring costs and streamlined compliance obligations, designed to appeal especially to midsize and emerging-growth firms that find New York’s red tape prohibitive. For issuers, the advantages are procedural rather than ideological: less bureaucracy, clearer guidance, and faster time to market — all within the same legal protections that govern other national exchanges.

Importantly, the TXSE is not creating a parallel arbitration or mediation framework distinct from existing US securities law. Disputes will remain under conventional regulatory and judicial channels. What TXSE offers instead is predictability and professional competence — a governance regime grounded in financial expertise rather than social activism or politicized mandates. Texas’s recent corporate-law reforms, offering expanded safe harbors for directors and officers of Texas-based or TXSE-listed corporations, further reinforce that business-friendly environment.

Publicly traded companies are not abstract entities — they are the backbone of the US economy. Collectively, they employ roughly 28 million Americans, investing hundreds of billions each year in facilities, equipment, research, and expansion. Publicly traded paper also allows firms that might not be cash-rich to acquire or merge with others, achieving efficiencies of scale, spreading innovation faster, and delivering better and more affordable products and services to consumers. When those firms can operate and raise capital efficiently, the benefits ripple widely through communities and households alike.

If successful, the TXSE’s impact may reach far beyond the companies it lists. A dynamic marketplace disciplines incumbents: the very existence of a new exchange could push legacy venues to innovate, lower costs, and revisit how they define “best practices.” As competition increases, issuers may find not only a cheaper but also a fairer playing field — one where governance expectations are tied to financial prudence rather than fashionable politics.

Building an exchange is no small task. To achieve price discovery, a critical mass of liquidity is necessary, and accumulating that liquidity depends on both performance and confidence. The NYSE, Nasdaq, and other market centers have deep, long-established pools of trading activity that reinforce their dominance. For TXSE to thrive, it must persuade a broad array of market participants — from investment banks and hedge funds to retail brokerages and pension funds — that Dallas can host a market as vibrant and reliable as New York’s. (JP Morgan has already asserted its view on that matter.) That will require trust, technological strength, and seamless integration with the national trading network.

Yet Texas’s position is unusually strong. Its economy is vast and diversified; its infrastructure modern; its talent base deep in both finance and energy technology. A more geographically diverse system of exchanges spreads operational risk, encourages regional specialization, and gives investors and entrepreneurs alternatives to the cultural and regulatory monolith that New York has become. TXSE’s lower listing costs, emphasis on issuer engagement, and alignment with Texas’s pro-business climate make it the most credible new exchange entrant in generations.

To the uninitiated observer, another stock exchange might sound redundant or, more cynically, like another gaming venue for the wealthy. The current US President, in fact, once expressed the view that the New York Stock Exchange is “the biggest casino in the world.”

In truth, exchanges are the plumbing of capitalism — the place where savings become investment and new industries find their footing. By one account, Ludwig von Mises once commented that stock exchanges are ultimately the dividing line between market and collectivist economic systems. Murray Rothbard recounted the conversation in Making Economic Sense:

One time I asked Professor von Mises, the great expert on the economics of socialism, at what point on this spectrum of statism would he designate a country as “socialist” or not. At that time, I wasn’t sure that any definite criterion existed to make that sort of clear-cut judgment. And so I was pleasantly surprised at the clarity and decisiveness of Mises’s answer. “A stock market,” he answered promptly. A stock market is crucial to the existence of capitalism and private property. For it means that there is a functioning market in the exchange of private titles to the means of production. There can be no genuine private ownership of capital without a stock market: there can be no true socialism if such a market is allowed to exist.

A more competitive and decentralized exchange system strengthens that foundation and keeps commercial blood flowing through the country’s economic arteries.

Despite socialistic structural rigidities, changes are coming to US financial markets, albeit slowly. With its regulatory green light secured, and trading expected to begin in early 2026, the Texas Stock Exchange represents more than a new address for American capital markets. It is a bet on openness, competition, and the belief that — just as in the marketplace for ideas — the market for capital works best when it is competitive and free.