At one time, the rich could generally count on the Republican Party not begrudging them financial success, even of the outlying variety. That’s no longer the case. Arguments that such elites may be bad for America, and maybe just bad, period, now come from both sides of the political spectrum. Some even propose class genocide.

“Billionaires should not exist,” said Vermont Senator Bernie Sanders when introducing a plan for a new wealth tax.

In Why Democracy Needs the Rich, John O. McGinnis, a law professor at Northwestern University, offers a different opinion.

He didn’t title the book Why Our Economy Needs the Rich. McGinnis does include the standard case for the wealthy, that through hard work, risk taking and foresight, they make our shared economy more productive. Without Elon Musk, for example, Tesla wouldn’t be what it is. If its customers, employees, and the IRS all benefit from that, why shouldn’t Musk be rewarded?

In McGinnis’ book, that line of reasoning is an afterthought. His main concern is whether the wealthy, especially the very wealthy, make our democracy better than it would be without them.

It’s an important question because if being rich is wrong, then the US is wrong. As McGinnis notes, we are both the richest nation in the world and the richest per capita of any with a population over 20 million.

And while each person in our democracy has one vote, to expect that everyone will have equal influence on political outcomes is naïve. Some work harder at it. They form political action committees, knock on doors for a candidate, or run for office. Others have exceptional speaking skills or large social media platforms for promoting policies.

As McGinnis puts it, “elite influence in democracy is not only inevitable but often beneficial, channeling expertise and coherence into public debate.” Consequently, the political realm has its own “one percent” whose influence exceeds their numbers.

He identifies these elites as those holding influential positions in special interest groups, the government bureaucracy, and the clerisy — the latter including prominent celebrities, academics, journalists, and other members of what is sometimes called the cultural elite. The problem, McGinnis argues, is that these groups tend to skew left politically.

He offers data to support this claim. Among federal bureaucrats, 95 percent of donations in the 2016 presidential election went to Hillary Clinton. In journalism, a 2004 Pew survey found that liberals outnumbered conservatives five to one. In academia, McGinnis estimates the ratio of liberal to conservative professors at top universities today is likely twenty to one. Most strikingly, in the film industry, a study of political contributions from 996 leading actors, directors, producers, and writers found they supported Democrats over Republicans by a 115-to-1 ratio.

Such dominance is maintained, McGinnis believes, through gatekeeping that favors the training and hiring of, for instance, new academics and journalists who think like their superiors. And this is where the wealthy, who also possess outsized political influence, can improve things by being a democratic counterweight to entrenched left-leaning power.

There are many routes to acquiring wealth. “Unlike the intelligentsia,” McGinnis writes, “the wealthy cannot easily exclude individuals with unorthodox views from joining their ranks.” For that reason, the rich arrive at their positions from a variety of backgrounds, beliefs, and political leanings. For every George Soros, there is a Peter Thiel. For every Bill Gates, there is a Miriam Adelson. In other words, the wealthy look like America, ideologically speaking.

And contrary to popular belief, the rich are also a dynamic and constantly churning class, especially at the highest levels. McGinnis notes that almost 60 percent of those on the current Forbes 400 list were not on it twelve years earlier. And 90 percent of the grandchildren of the wealthiest one percent drop out of that lofty tier. Recently, the dynamism of the wealthy may even be on the rise. In 1982, 60 percent of the Forbes 400 came from wealthy backgrounds. That is only 32 percent today. McGinnis even questions the received wisdom that the rich are getting richer in relative terms. He notes that in 1937, John D. Rockefeller’s net worth was 1.5 percent of US GDP, almost the same as Elon Musk’s 1.6 percent share in 2025.

In making these points, McGinnis never decries the right of left-leaning elites to have outsized influence on our political process. He only claims that the wealthy serve as an important counterweight to them. “A democracy, like a tree, flourishes with many roots,” he writes. In a nation founded on freedom of thought and a never-ending contest of ideas, a fuller representation of national perspectives promotes better political outcomes.

It’s a nuanced argument, which McGinnis bolsters by noting that the financially successful tend to have a more pragmatic worldview than other elites, as their wealth invests them in the economic success of the nation while also insulating them from worry about disapproval.

The wealthy’s activities also spread benefits across the political spectrum, McGinnis argues. The rich are traditionally leading supporters of the arts and charity. The first hospital in the United States appeared in 1751 thanks to a group of successful merchants that included Benjamin Franklin. More recently, rich alumni helped Harvard University weather the storm of President Trump cutting off its federal funding.

It seems everyone hates corporations these days, but that is nothing new. For more than a century, Americans have swung between denouncing large firms as predatory Leviathans and attempting to conscript them for nonbusiness ends. That process may now be entering a new phase — one with broader implications for whether America remains a free country.

In the Progressive Era, corporations were portrayed as extractive engines of class power, tolerated only if constrained by supposedly impartial regulators and administrative oversight. Beginning with the New Deal, many of those same critics shifted ground, arguing that corporations could be harnessed to advance environmental goals, collect taxes, deliver health insurance, impose maximum working hours, and pursue public priorities that legislatures had avoided, delayed, or even rejected.

Now the New Right has mounted its own indictment, charging corporate America with “woke” cultural coercion, economic disloyalty, and an unhealthy intimacy with left-wing regulators and the administrative state. The result is a curious consensus of hostility, in which corporations are cast either as tyrants or as sycophants, rather than as what they are in a free society: organizations that coordinate capital and labor to produce goods, services, and prosperity within the rule of law.

The Progressive Era attacks on corporations grew out of the massive expansion in economic activity following the Industrial Revolution. Local markets merged into a national economy, and firms scaled up in response. The federal government began regulating at the national level under the Constitution’s Commerce Clause, with measures like the Interstate Commerce Act and the Sherman Antitrust Act asserting authority over what was seen as harmful corporate conduct.

The perceived harm took many forms: corporate profit was equated with exploitation, and corporate power was viewed as an instrument of entrenched wealth and class division, sometimes even a tool of political corruption. Over time, new corporate sins were added — manipulation of consumers, suppression of workers’ rights, and eventually the perpetuation of inequality and environmental degradation. In effect, the American left developed a theory of corporate vice, holding that corporate incentives are inherently misaligned with the public good.

The application of this theory of vice led to several purported remedies. Foremost was regulation and the entire apparatus of three-letter agencies that today intrude into almost every area of life, in the name of democratic control. Equally important, if less conspicuous, was a growing suspicion of shareholder primacy and the emergence of the idea that markets are morally insufficient. The ultimate result of this theory gaining dominance was the New Deal, which not only restricted virtually every area of corporate activity but also attempted to use corporations directly to serve public ends.

The theory of vice eventually hit its limits. Courts and Congress applied some restraint, and thinkers like Milton Friedman persuaded many that ordinary corporate activity was not inherently suspect. By the late twentieth century, the American left had developed a new framework — a theory of corporate virtue.

This new theory held that corporations were not only morally redeemable but could advance broader social, economic, and political goals. It built on a key premise of the earlier theory of vice: that firms should be managed not solely for owners and investors but for all stakeholders, including society at large. Initially framed around corporate social responsibility, it evolved in the twenty-first century into ESG (environmental, social, and governance) and its subset, DEI (diversity, equity, and inclusion), which spread rapidly across corporate America.

As this theory took hold, corporations became vehicles for a wide range of initiatives. Diversity mandates reshaped hiring, climate priorities filtered through supply chains, and platform moderation influenced acceptable speech. Corporate activity itself became a form of political signaling. These efforts were reinforced by new internal structures — vice presidents of sustainability, proxy advisers, and external scoring systems.

By the time of COVID and the Black Lives Matter movement, much of corporate America and its surrounding institutions had embraced this framework. The older regulatory superstructure reinforced it. Firms that resisted could face political pressure, lawsuits, or penalties. Corporations became political actors not because markets demanded it, but because political incentives pushed them in that direction.

The political right has since mounted its own response, developing a rival theory of corporate vice. Much of it mirrors the left’s earlier critique. Corporations are now cast as coercive actors imposing social change that cannot win at the ballot box. Where regulation was once justified as democratic control, opposition to ESG reflects the same impulse in reverse — using state power to counter corporate influence.

This new critique also revives older themes. Claims that profit-seeking drove outsourcing echo long-standing labor arguments. Concerns about immigration — both low-skilled and high-skilled — reprise earlier critiques of corporate labor practices. These strands converge in the charge that corporations have “hollowed out” American communities and abandoned local ties.

The regulatory machinery built in the Progressive Era is now being redeployed in the opposite direction — against ESG and DEI. What regulators once encouraged, they now discourage through familiar tools: pressure, litigation, and penalties. The result is political whiplash. As administrations alternate, compliance burdens shift with the electoral cycle, and firms adjust accordingly.

This dynamic is predictable. Corporations respond to incentives, including political ones. When alignment with political power offers advantages, firms will adapt. As public choice economics suggests, political actors have incentives to expand their influence, not limit it.

There are signs, however, that the New Right is also developing its own theory of corporate virtue. In principle, such a theory could be constructive — emphasizing political neutrality, wealth creation, and a focus on core business functions within the rule of law. That approach would align with a traditional conservative view of enterprise.

In practice, the emerging version points elsewhere: toward protectionism, industrial policy, closer ties between firms and the state, and reliance on political patronage. This is a different form of corporate entanglement — less ideological, perhaps, but no less political.

The consequences are similar. When firms prioritize political alignment over serving customers and investors, resources are misallocated and incentives distorted. It is a formula not for growth, but for stagnation.

A better path is a classical liberal theory of corporate virtue: firms exist to coordinate labor and capital for productive ends; their social contribution is wealth creation within the rule of law; profit signals value creation rather than moral failure; and business and politics should remain separate. Regulators should set stable, predictable rules — not direct outcomes — and market discipline should guide behavior.

The choice should be clear, and a return to mission-focused enterprise depends on making it. Free enterprise, not political enterprise, built America — and it remains the only path to sustaining it.

“I hunted for, and stole, a source of fire … and it has shown itself to be mortals’ great resource and their teacher of every skill.”

So says Prometheus, the Titan of Greek mythology, in Aeschylus’s Prometheus Bound, explaining why he suffers in chains. For giving fire to mankind, he was condemned to eternal torment, bound to a rock while an eagle fed upon him each day. Fire was not merely warmth. It was power, independence, production, protection, and the first great escape from literal and figurative darkness. The human story began to change not when mankind learned restraint, but when it learned mastery. Civilization began not with renunciation, but with defiance.

Atop this civilization rests an odd, yet revealing modern ritual: Earth Hour. Today, Saturday, March 28, 2026, at 8:30 p.m. local time, people around the world will again be asked to switch off their non-essential lights for one hour. Organized by the World Wildlife Fund, or WWF, which was founded in 1961, Earth Hour is meant to dramatize concern for nature and the conservation of the planet’s resources. The campaign now marks 20 years and includes landmarks such as Christ the Redeemer in Rio de Janeiro, the Sydney Opera House, and the Empire State Building in New York City.

Originally a grassroots movement, Earth Hour now presents itself as “a symbol of hope for nature and climate.” Lofty appeals to help nature and wildlife recover, reduce deforestation, and protect future generations now accompany the annual ritual of switching off the lights on an otherwise unremarkable Saturday in March. Yet even on its own terms, the story is less straightforward than the rhetoric suggests. As Song et al. wrote in a 2018 Nature study, “contrary to the prevailing view that forest area has declined globally, tree cover has increased by 2.24 million km² (+7.1% relative to the 1982 level).”

The point is not that every environmental problem has vanished, but that global improvement does not always depend on a mass movement of symbolic austerity. Earth Hour’s gesture remains simple enough: dim the world briefly to express concern for the planet. But that symbolism points, perhaps unintentionally, to a deeper truth. Turning the lights off is easy. The true achievement of civilization was learning how to turn them on in the first place. If future generations are to inherit a better world, they will need more than rituals of restraint. They will need the abundance, safety, and human progress that only widespread access to energy can provide.

That is where rugged individualism shines most brightly in history. Thomas Edison and Nikola Tesla were not men of managed consensus, but they both belonged to the same civilizational current: the transformation of electricity from scientific possibility into mass reality. Their fierce competition in the 19th century sparked invention after invention. Edison’s incandescent lamp patent, US Patent No. 223,898, was issued on January 27, 1880; two years later, his Pearl Street Station began selling electricity in lower Manhattan. Tesla’s great leap came in 1888, when George Westinghouse purchased the rights to his polyphase alternating-current system, helping launch the battle of the currents and laying the groundwork for long-distance power transmission.

The true genius of capitalism was not merely to generate power, but to conduct it outward until light, warmth, and safety ceased to be luxuries for the few and became ordinary facts of life for the many. More than a century later, we still live inside the world that this rivalry charged into existence.

Electricity did not merely give cities more light. It gave them more order. In New York City, added street lighting has been associated with significant reductions in nighttime crime, including assaults, homicides, and weapons offenses. It also gave them greater protection from the elements. New York City’s Health Department reports that more than 500 New Yorkers die prematurely each year because of hot weather, with lack of air conditioning being the clearest risk factor for heat-stress death. Furthermore, electricity made cities more productive, not less. Research on US manufacturing shows that electrification raised labor productivity by reorganizing production around more efficient machinery and factory layouts. Light, warmth, safety, and output: these were the real gifts of electrification.

It is precisely this history that makes today’s sneers at rugged individualism sound so hollow, especially in New York City. For example, in his inaugural address on January 1, 2026, Mayor Zohran Mamdani promised to replace “the frigidity of rugged individualism with the warmth of collectivism.” But in the very city where Edison’s Pearl Street Station began selling electricity in 1882, that line reverses cause and effect. After a winter that brought one of New York City’s longest freezing stretches since 1963, the real source of warmth was not collectivist poetry, but the electric infrastructure that competition, capital, and invention made possible. If collectivism had accomplished even half of what competition did, New Yorkers might still be warming themselves by candlelight while calling it moral progress. 

For one hour each year, Earth Hour asks the world to rehearse darkness. But from Prometheus onward, the human story has been one of escaping it. Fire, then electricity, enlarged human freedom. The achievement worth honoring is not symbolic dimness, but the civilizational brilliance that made light ordinary.

In the United States, cloud seeding has long been a subject of controversy. The process involves releasing small quantities of compounds such as silver iodide (AgI) into the atmosphere, prompting clouds to produce rain or snow. Critics call it “weather modification,” but cloud seeding is a moderate and cost-effective effort to enhance rainfall that can benefit the water-strapped Southwest by fortifying its water supply.

Although cloud seeding is used regionally, it has faced significant backlash. Skeptics point to health risks, flooding, and ethical concerns, worries magnified by conspiracy theories rather than scientific evidence. Yet research shows that the chemical concentrations used in cloud seeding are below dangerous thresholds, and there is no credible evidence linking it to floods.

An increasing number of states are working on legislation to restrict or outright ban this form of “geoengineering,” including a bill circulating in Arizona. Nine western states currently use cloud seeding to supplement their water portfolios, benefiting farmers and communities drawing from dwindling reservoirs and shrinking aquifers.

Rather than banning innovation in water management, states should encourage it. Cloud seeding offers a high return on investment at a fraction of the cost of permanent water infrastructure. It is most effective when driven by local and private investment and, when implemented correctly, can deliver meaningful results. 

By contrast, large infrastructure projects promise long-term water supply but require years of permitting and construction, massive upfront capital, and costly operations. Dismissing cloud seeding in an era of billion-dollar water proposals is both imprudent and wasteful.

Desalination starkly illustrates these trade-offs: heavily regulated, capital-intensive, and slow to deploy. California’s Carlsbad plant, one of the largest in the U.S., faced years of regulatory delays and cost roughly $1 billion to build. The plant’s energy-intensive water processing has led to an annual operating cost of up to $59 million.

In contrast, cloud seeding is a cost-effective, flexible alternative, with annual costs ranging from $5 million to $7 million and adjustable by season.

Research from North Dakota State University shows that cloud seeding can boost rainfall by five to ten percent at just 40 cents per planted acre. It benefits southwestern agriculture — especially water-intensive alfalfa — without draining overstressed groundwater or requiring costly infrastructure projects.

Like many economic issues, water management faces a knowledge problem. While bans on cloud seeding are imprudent, statewide mandates are also flawed because they fail to consider local water conditions. Private and local investors are better positioned to assess water needs. Large western states with diverse environments experience regional variances in precipitation patterns.

For example, Hiouchi, California, averages 79.31 inches of rain annually, while Stovepipe Wells receives only two inches. These differences in rainfall make fixed targets ineffective. Locally informed approaches enable communities and private businesses to adapt to weather conditions, rather than relying on fixed goals.

Privately and locally funded cloud seeding programs date back to the early pioneers of the industry. North American Weather Consultants (NAWC) has operated since the 1950s, providing services to water districts, municipalities, universities, and private companies. Ski resorts in Colorado and Utah also use cloud seeding to boost snowfall for recreational needs.

The long history of small-scale, decentralized programs demonstrates that local operations can meet water needs effectively without statewide mandates. State governments should regulate with caution rather than stifle another tool for strengthening local water supplies.

Private investment has also driven innovation in weather modification, making research and development more impactful. Public funding, by contrast, often slows progress with regulatory red tape, appropriation limits, and political constraints. When federal support for cloud seeding was sharply reduced in the 1980s, private, local, and state funding became essential to sustain technological advances.

Even traditional water infrastructure faces political hurdles. In 2022, the California Coastal Commission rejected the proposed Huntington Beach desalination plant despite years of planning. By contrast, private cloud seeding operations have long enjoyed the autonomy to experiment and refine their methods — without leaving taxpayers responsible for uncertain outcomes.

Private firms such as North American Weather Consultants and Weather Modification Inc. have driven innovation for decades, incorporating radar-guided weather tracking, modeling, hybrid ground-and-air deployment, and aircraft to improve timing, make operations more efficient, and monitor results. 

Cutting-edge startups like Rainmaker have introduced autonomous drones for dispensing precipitation-enhancing chemicals.

It was private companies incentivized by performance and market demand, not federal grants or fickle political priorities, that made these innovations a reality. If companies are free to respond to the market, little federal involvement is needed.

Cloud seeding might be shrouded in controversy, but state governments shouldn’t ban it; they should embrace it. Cloud seeding is cost-effective, easily adaptable to regional water needs, and can be successful if it isn’t crushed by overbearing regulation. 

In an age of water scarcity, limiting effective solutions is costly — especially for arid, landlocked western states that would benefit from an additional source of water.

For over two decades, gold’s role as a staple investment has grown more pronounced in the global financial system. Since 2000, the commodity has outperformed all major US stock indices. It has preserved purchasing power, protected investors during crises, and hedged against policy shifts.

The forces propelling gold higher today extend beyond its safe-haven status. A mix of technological change and geopolitical restructuring is reshaping how investors view gold. The result is a powerful combination of structural demand and constrained supply. These conditions help explain gold’s strong performance and why many believe its appeal is far from over. 

Below are thirteen major forces shaping the modern gold market.

1. Safe Haven in a Crisis 

Gold is a store of value. When currencies depreciate and governments falter, gold is the primary place of refuge for concerned investors. That reputation drives demand and pushes capital flows into gold during uncertain times.

2. Geopolitical Concerns 

Global tensions remain a powerful catalyst. Conflicts in Eastern Europe, instability in the Middle East, and shifting power dynamics in Asia have increased demand for assets that exist outside political control. Gold has been a major beneficiary of this environment.

3. Preservation of Purchasing Power 

History offers a striking comparison: roughly 200 ounces of gold bought an average home decades ago, and roughly the same amount still does today. While prices in dollars have changed dramatically, gold has preserved long-term real value. This property continues to attract investors seeking protection from currency debasement.

4. Central Bank Accumulation 

Some of the biggest buyers of gold are central banks and governments. Many of them are diversifying their holdings from currencies to hard assets. This shift reflects concerns about debt levels, currency risks, and geopolitical tensions. Central bank purchases have become a significant source of demand in the market.

5. Expanding Sovereign Debt 

Public debt has increased substantially around the world. Relative to GDP, the US now carries significantly more debt than in previous decades, and other large economies are under similar pressures. This could reduce confidence in long-term currency stability, making gold an attractive store of value.

6. Structural Policy Divides Across the World 

Differences in trade, regulation, energy, and industry policies have divided the world economically. With each economic bloc pursuing its own priorities, uncertainty in financial markets rises. Gold performs well in such an environment, where coordination is low and perceptions of risk are high.

7. Lower Growth and Structural Economic Changes

In some developed countries, productivity growth has slowed while regulation has grown more complex. Some investors see this environment as less supportive of capital growth and profitability. Lower growth expectations have, in turn, increased allocations to defensive assets.

8. Strong Relative Performance 

Gold has outperformed various large U.S. equity market indices from 2000 through the mid-2020s, beaten inflation, and grown at a rate that far surpassed economic growth. Even in times when markets experienced strong rallies, gold performed well.

9. Global Reserve Rebalancing and Dedollarization

New economic blocs, such as the BRICS countries (Brazil, Russia, India, China, South Africa), have increased their gold reserves as part of their reserve diversification policies. The US dollar is still the leading currency, but its share of global reserves has been gradually falling in recent years. The share of gold reserves has been rising correspondingly.

10. Technological and Industrial Demand 

Gold is a financial asset and an industrial metal. It is highly conductive and corrosion resistant. It is an essential component in the electronics industry, supercomputing infrastructure, and manufacturing. As technology advances, industrial demand places structural pressure on supply.

11. Digital Assets 

Digital asset markets are beginning to use gold as collateral. Many stablecoin issuers now hold substantial gold reserves alongside traditional securities. Stablecoin adoption has driven capital flows that support the underlying commodities to which they are pegged.

12. Portfolio Diversification and Low Correlation 

Gold has always been known for its low correlation with stocks and bonds. When stocks fall sharply, gold often moves in a different direction. Consequently, institutional investors are increasingly recognizing the role of gold as a diversifier and not as a speculative asset. 

13. Demand Continues to Outpace Supply 

Worldwide demand has been at record levels in the past few years. The rate of growth in mining production is low, and new discoveries are few. As demand grows at a rate that exceeds supply, prices are likely to move higher. 

The Bigger Picture

Investors often turn to gold for wealth preservation and long-term appreciation. But recent price action suggests there is more to gold than meets the eye.

Gold has been rising even when equity markets perform well, real interest rates increase, and inflation remains moderate. This suggests gold is being driven by forces beyond traditional crisis-related demand.

Gold now sits at the crossroads of monetary policy, geopolitics, technology, and broader changes in the global financial system.

Gold’s Expanding Role

Gold’s rise reflects more than fear or inflation. It reflects a world in transition. Governments are managing higher debt. Financial systems are evolving. Technology is expanding industrial demand. Reserve strategies are shifting.

Investors continue to seek assets that hold value outside political and monetary systems. Unless these underlying forces reverse in a meaningful way, gold’s role in global finance is likely to remain strong.

America’s fiscal and monetary problems look like two separate crises. They aren’t. Runaway government spending and an unruly Federal Reserve are two sides of the same coin. When Congress spends beyond its means, it creates pressure on the central bank to print money and paper over the debt. When the Fed operates without clear rules, it becomes the silent enabler of fiscal recklessness. Fix one without fixing the other and you haven’t solved anything. That is where we find ourselves today.

As I argued in my first book, the Fed has a rule problem: It doesn’t have one. For decades, monetary policymakers have operated under broad discretionary authority, adjusting interest rates and the money supply based on their judgment about what the economy needs. The results have been disappointing.

The case against discretionary monetary policy runs along two tracks: one about competence and one about legitimacy.

Start with competence. Central bankers face serious information problems. The economy is vast and complex, and the signals it sends are noisy. Policymakers receive data that is incomplete, revised, and often contradictory. By the time the Fed diagnoses a problem and adjusts policy, the underlying conditions may have already changed. Discretion sounds like flexibility. In practice, it often means groping in the dark.

But information problems are only half the story. Incentive problems compound them. Bureaucracies develop institutional interests of their own. The Fed, like any government agency, responds to political pressures, professional norms, and the priorities of its leadership. Monetary economists — the experts who advise the Fed and evaluate its performance — constitute their own interest group. They have professional stakes in a powerful, discretionary central bank. And then there’s perhaps the biggest incentive problem of all: the looming threat of fiscal dominance. It’s time to stop thinking about monetary policy in a vacuum.

There is a deeper question here, as was recognized almost 50 years ago by economists Thomas Sargent and Neil Wallace: are fiscal policymakers or monetary policymakers in the driver’s seat? When Congress and the Treasury spend freely and accumulate debt, they create pressure on the central bank to monetize that debt. If the fiscal authority “moves first” and the Fed “follows,” then monetary policy becomes an instrument of fiscal control, not an independent check on inflation. That is precisely what happened after 2020. The government spent at wartime levels even as the emergency receded, and the Fed soon accommodated. Inflation naturally followed.

So the problem is not simply that the Fed made mistakes. It is that the institutional structure invites those mistakes. A discretionary Fed embedded in a debt-heavy fiscal environment will tend to prioritize the short-term over the long-term, accommodation over restraint, and political convenience over monetary discipline.

The solution is a Fed regime change. We need actual legislation to change the central bank’s mandate. Administrations change. Personnel change. But laws can become, as James Buchanan put it, “relatively absolute absolutes.” If Congress replaces the Fed’s current mandate, which includes employment and interest rate targets alongside price stability, with a single, clear mandate for price stability, the Fed can credibly commit to refrain from underwriting future deficit spending. Congress can’t count on the Fed bailing it out if the Fed’s price level target limits the printing press.

The goal is not to make the Fed powerless but to make its power legible and therefore predictable. A rule-bound Fed, focused solely on price stability, empowers planning by businesses and households. It rewards saving. It discourages the kind of speculative boom-and-bust cycles that discretionary policy tends to produce. And it will force fiscal policymakers to get their mismanaged affairs in order.

Other proposed solutions won’t work. First, we should reject presidential control over monetary policy. Giving the executive branch direct authority over interest rates would politicize money even further. Second, simply appointing more “conservative” central bankers offers no durable fix. Hawkish Fed chairs come and go; without a reformed mandate, the institutional logic reasserts itself.

Inflation has cooled from its recent peaks and deficits are not as high now as during the COVID period, yet the underlying institutional dysfunction remains. The Fed is still improvising, still subject to fiscal pressure, still operating without the kind of clear rules that would make its behavior predictable and its decisions defensible. Monetary policy by bureaucratic fiat is not good enough. To prevent money mischief and fiscal folly, only the discipline of rules will do. The solution is a single mandate: price stability alone.

America has spent more than $20 trillion on fighting poverty since the introduction of President Johnson’s Great Society program in 1964. Sixty years later, how are we doing?

That depends, as it turns out, on how you measure it.

Last month, Senator Kennedy (R-LA) introduced a bill that would require the Census Bureau to report a new poverty metric as an alternative to the Official Poverty Measure (OPM) by including both cash and non-cash welfare benefits in its calculations.

As Kennedy points out, this is a much-needed fix. The OPM’s methodological weaknesses are well documented. Most notably, it ignores the hundreds of billions of dollars the government spends each year to assist low-income families through tax credits like the Earned Income Tax Credit and in-kind transfers such as Medicaid, food stamps, and housing subsidies. It also overstates inflation and relies on outdated assumptions about food spending. In short, the OPM paints an egregiously inaccurate picture of material poverty in America.

When one includes taxes and transfers, as economists Richard Burkhauser and Kevin Corinth did in a recent National Bureau of Economic Research working paper, the “full-income” poverty measure sat at just 3.7 percent in 2023 — 1.6 percent after including employer-provided health insurance — a far more optimistic picture than the OPM’s 11.1 percent for the same year.

That sounds like a triumph. But Burkhauser and Corinth take it one step further and use their “full-income” measure to track changes in the poverty rate dating back to 1939. 

Contrary to popular belief, they find that the greatest era of poverty reduction happened before Johnson declared war on it.

From 1939 to 1963, absolute full-income poverty plummeted by 29 percentage points, from 48.5 percent to 19.5 percent. Then, despite the government pouring trillions of taxpayer dollars into combating poverty, poverty fell by only 15.7 percentage points from 1963 to 2023. Barely half the progress in more than twice the time.

But the stagnating decline is only half the story. The more consequential difference is what drove it. 

Before 1964, the main engine of poverty reduction was increases in market income — a measurement that includes wages, salaries, and other forms of income from employment. From 1939 to 1959, market income poverty fell by 26.1 percentage points, nearly all of the 27.3 percentage point decline in full-income poverty among working-age adults over the same period. In short, before the rapid expansion of the welfare state, most people were earning their way out of poverty.

After 1964, that engine stalled. Market income poverty fell by just 3.9 percentage points from 1967 to 2023, while post-tax, post-transfer poverty fell by 10 percentage points. Even though poverty has continued to decline over the past six decades, most of that was due to the ever-expanding generosity of government transfers.

While low-income Americans were benefiting from the biggest poverty reduction in the country’s history, the percentage of working-age adults relying on government transfers for more than half their income decreased from 2.9 percent in 1939 to 2.7 percent in 1959.

By 2023, this number had nearly tripled to 7.6 percent, even reaching as high as 15 percent in some years.

As Mercatus scholar Jack Salmon put it: “The War on Poverty changed the how of poverty reduction, but it didn’t accelerate the how much.” 

If anything, by changing the former, it may have blunted the latter. A 76 percent increase in real median income, paired with rising employment and higher productivity, all driven by rapid postwar economic expansion, pulled more people out of poverty in 24 years than trillions of dollars in government-imposed wealth redistribution have done in 60.

Some may argue that this trend is to be expected. After all, reducing poverty from 48 percent to 20 percent is arithmetically easier than reducing it further because there are simply fewer people left below the poverty line, and those who remain tend to face the most entrenched barriers to self-sufficiency.

Fair enough. But as Burkhauser and Corinth point out, full-income poverty largely stagnated starting in the 1970s — right as welfare spending was ramping up dramatically. In short, taxpayers have been paying for a multitrillion-dollar boondoggle that has yielded increasingly diminishing marginal returns. 

So, what was the main driver behind the pre-1964 miracle? Simple: Economic growth.

The pre-1964 record, along with centuries of evidence, suggests that nothing has worked better than economic growth in helping individuals, especially those at the bottom of the income ladder, to achieve a higher quality of life. Across the world, economic growth driven by liberalization helped pull almost one billion people out of extreme poverty from 1990 to 2010.

Here at home, the pattern still holds. The Fraser Institute’s research shows that North American states with higher and increasing levels of economic freedom tend to have higher income growth and employment, more income mobility, especially among low-income households, higher economic growth, less homelessness, and lower levels of food insecurity.

The fruits of economic growth are visible in ways that poverty statistics fail to capture, especially for America’s poor. As Joseph Heath points out, 95 percent of American households below the poverty line have electricity, indoor plumbing, a refrigerator, a stove, and a color television. More than 80 percent have an air conditioner and a cell phone, and two-thirds own a washing machine and dryer. Economic growth, not government programs, is what made these goods, once luxuries unavailable even to many wealthy households, accessible to nearly everyone. It continues to bear fruit today — wages for typical American workers are at all-time highs.

The most powerful anti-poverty program had no enrollment forms, caseworkers, or spending bills. It was a growing economy that helped millions of people earn their way to a better life. As such, subsequent efforts should focus on removing government-created barriers to economic growth, occupational opportunities, and job market entry rather than adding another layer of expensive, inefficient wealth transfers.

Senator Kennedy is right to say we need a more accurate measure of poverty. When analyzing the best ways to combat poverty, policymakers should reflect on whether the welfare state was ever the right tool for the job.

The extended partial government shutdown has led to long lines of frustrated passengers at airports nationwide as unpaid Transportation Security Administration (TSA) agents walk out. Officials even warn that small airports may shut down due to the absences. If we for a moment disregard the Washington Monument syndrome likely also at play, the lesson to be learned here is not the importance of funding government services — but the exact opposite.

The TSA has a long history of failing to such a degree that it could never survive had it not been run by and within the government. Costing taxpayers and travelers $10 billion annually, not counting the inconvenience and time lost, the agency fails even on its own terms. Its failure rate in internal tests was over 90 percent in 2015, and the same in 2017. If these data seem dated, it is because they are. Instead of fixing the problems, the results of the agency’s internal testing were classified. In the absence of data, the only reasonable interpretation is that the agency remains a catastrophic failure to this day.

The recent airport chaos underscores how the security theater has become an unbearable bottleneck. It also shows how dysfunctional government services become problematic beyond the waste of resources and the inconveniences they cause. The difference between government services and market solutions offered by businesses is stark. A private business that fails to deliver loses customers, and therefore both revenue and market share. Its failure is its own problem, which is a strong incentive to fix it.

As a government agency, the TSA’s failure is not its problem but is instead shifted onto travelers (their “customers,” as it were), who are, in some cases, left waiting six hours in line to get through the security checkpoint. In fact, this failure can easily be construed as a benefit for the TSA, which now — because the government requires all passengers to pass through its bottleneck — has leverage to demand more funding. As a result, the destruction wrought by dysfunctional government becomes an argument for more of it, and taxpayers are left with the bill.

The arguably zero value added by the TSA’s security theater thus becomes a self-reinforcing bloating of the bureaucracy, making the agency an ever-expanding jobs program that burdens taxpayers while harassing travelers.

Imagine if security had instead been the responsibility of airlines. Rather than cause constant delays and inconvenience, it would be in the airlines’ interest to streamline the process and make it as unobtrusive as possible. A failure to staff security functions would not be travelers’ (customers’) problem but the airlines’, which benefit only when we fly — and remain liable to keep travelers safe. The TSA has no such responsibility.

But a government service is worse than what can be explained by destructive operative incentives. We often fail to realize that what exists in the present is a result of developments in the past and that the future too will be different. In other words, the market economy as well as society overall are processes in constant flux, not a static state. Privately provided security would, just like any other service offered in the market, be subject to constant innovations — creative destruction, as economist Joseph Schumpeter called it. 

Creative destruction is the power of disruptive entrepreneurship to cause leaps of improvement. As entrepreneurs introduce innovations that bring great benefit, consumers abandon the solutions they previously chose to use. For example, when Henry Ford introduced the Model T, people flocked to the affordable automobile — the greater value — and stopped relying on horses and carriages. The automobile became the new, higher standard for transportation. Automobile manufacturing and gas stations replaced horse breeders and stables. 

We would thus see continuously improved security measures provided at lower cost — taking less time and being more convenient for travelers. The value to airlines is that it benefits their customers. It’s a competitive advantage and a value-add.

The very opposite is true for government services such as the TSA. They gain nothing from providing the service they are tasked with effectively and efficiently. If anything, the incentives run the other way: if the TSA found ways of reducing costs, the agency’s budget would likely be cut in response. It would effectively be penalized for improving.

And therein lies the crux: government agencies have little or no incentive to serve the users of their service. But private businesses stand and fall by providing customers with value. It is no surprise, therefore, that airport security is a hassle and inconvenience — and that it is expensive. The TSA is a bureaucracy and a jobs program that does not keep us safe. 

Recognizing this fact helps us understand the chaos at airports. More funding would do more harm than good.

A good few years before the AI craze, my Oxford lecturer gave a presentation on the shifting nature of work. An economic historian by trade, Judy Stephenson traced the arc of compensation from the labor markets of early modern London and wove a full-circle tale: workers paid by the piece (output) in the nineteenth century, then by the hour (input) for most of the twentieth century, and then by output again in the twenty-first-century gig economy.

Delivery and ride-sharing services were the major concerns of the intelligentsia during the 2010s. You were paid not for your time but for the output you quite often physically delivered, with resulting debates over unions, safety, and wages.

Stephenson accounted for the changes in very Coasean terms: In the assembly-line work of a century ago, it wasn’t worth the transaction costs of figuring out exactly whose contribution was worth how much, so you just roughly averaged out everyone’s hours with some extra perks for responsibility or long service. And compared to the at-home weavers of the century before, it was much less clear who was responsible for the exact value-add. Put differently, the loss of efficiency associated with time-based pay (free-riding, monitoring, slacking off, or shirking work) might have been less than the costly effort necessary to constantly re-establish rates for specific tasks.

Economic textbooks, heavy on the modeling, might imply that performance pay is more efficient since it aligns incentives and minimizes free-riding. Enter computers and digital markets matching supply and demand, plus standalone gig workers entirely responsible for their own output. Those institutional changes shifted the bargaining power and the Coasean transaction costs involved — making the real world so much more like the sketched model of an economics textbook. 

Easily Replicated Abundance Meets the Economics of Infinite Content

There’s an obvious self-selection in the current labor-related worries coming our way: the loudest alarms come precisely from those of us who have invested most in the credentialist commentariat, who have sacrificed our lives and oriented our identities around the very cognitive and generative skills that LLMs now so effortlessly replicate.

It’s no longer that hard to have ChatGPT write like me (just train it on my past writing), Claude code like a programmer with a decade of experience, or a combined AI effort produce a beautiful, two-minute, period-piece ad spot for $100.

In The Great Harvest, a recent and ironically mostly AI-generated book, Adam Livingston captures the white-collar workplace revolution underway: It’s “not that your career will vanish overnight but that it was always just a fragile assemblage of solvable problems, [… your job] was actually a collection of separate functions waiting to be identified, isolated, and optimized away.”

The music industry and the economic value of songs were early indicators here, with supply and production far surpassing any feasible consumption on the other side. Even though the threat originally stemmed from piracy rather than machine-generated material, the marginal value of recorded music unavoidably fell to around zero. While Taylor Swift rakes in royalties from streams and other artificially scarce legal arrangements, she generates more economic value from concerts and merch. Her physical being becomes the ultimate, rivalrous, nonreplicable luxury good.

With the marginal cost of producing videos, images, music, or words going to zero, we should have expected infinite content and next-to-no meaning — see YouTube, TikTok, or Twitter. 

With the rest of the arts and the white-collar knowledge industry up next, it’s a little bit of an economist’s puzzle why prices (i.e., wages) haven’t dramatically fallen yet to reflect the now much more abundant supply — stories of social anchoring or nominal contract rigidities, no doubt. So far, we’re much more likely to see quantities adjust, meaning fewer workers or worse labor-market conditions for programmers, journalists, accountants, and other white-collar jobs.

Where’s the Value? Humans as Tastemakers

A lot of digital ink has been spilled on trying to identify where we go from here. In a world of informational abundance and adequately generated text at everyone’s fingertips, where’s the economic value?

“Brainpower is now a commodity that is going cheap,” Andrew Yang reflected this month. Perhaps the best thing we can say about his UBI-infused presidential bid in 2020 is that it was premature. 

“We have a love-hate relationship with working for a living,” Tim Harford observed for the Financial Times; the pain and hardship of working is heavily bound up with meaning and identity. Fred Krueger and Ben Sigman, in another recent book, observe that the “labor theory of value collapses when machines do all the labor,” and that “scarcity pricing becomes meaningless when AI makes many things abundant.” As intelligence becomes infinite, they conclude, the finite becomes infinitely valuable. 

These reflections might as easily have been titled “The Return of the Labor Theory of Value,” not because the LTV was a particularly revolutionary economic theory, but because of what it indicates about our infinitely replicable information and knowledge system going forward. If everything from music to code, words, and video can be created at the press of a button, the only scarce thing left besides the physical world is our human attention. The things we choose to do, choose to look at, choose to labor on.

Fiction writers, faced with the nearly infinite onslaught of storylines and millions of predominantly self-published titles each year, have realized this: Their words or imagined characters may not be scarce, but the very fact that they labored intensively over them is what other humans recognize as worthwhile and impressive. (We might ultimately decide to pay a premium for human connection, attention, or presence.)

Book sales, while pretty stagnant in nominal and real terms, might be monetary votes of appreciation more than actual desire or follow-through to consume the work. 

In the past, these industries had an overhang of gatekeepers and tastemakers deciding what was good music, good art, good writing, or good journalism. In recent decades, it might have been liberating to have the gatekeepers shoved aside via technological means, but it’s only now that they’re gone that we’re starting to miss them. The artificial scarcity they imposed conferred excess economic value on songs and books and movies that can now be generated and duplicated by the millions.

One way out, then, is to recreate the gatekeeping — not in production, that ship has sailed, but in attention and awareness. We might look to respectable minds, like we once did respectable labels or studios or outlets, not for reporting what is, in the journalists’ style, but for telling us what matters. Trusting in their vision of what matters, using their long and somewhat obsolete experience as a filtering mechanism against the information overload we’re otherwise doomed to.

Muscle lost its economic dominance long ago; we all know that story. Now that machines are coming for the brains, we have a similar story of scarcity, abundance, and obsolete skills to contend with. What remains scarce — attention, trust, physicality, judgment, embodied presence — will command the rent. 

Two years after the European Union (EU)’s Digital Markets Act (DMA) took effect, the results have been mixed to negative. Promises about certainty, lower enforcement costs, and a more innovative and competitive digital ecosystem haven’t materialized.

Rather than learn from Europe’s mistakes, Californian policymakers and federal proponents of the American Innovation and Choice Online Act (AICOA), introduced by Sen. Amy Klobuchar (D-MN), would import similar ideas to ostensibly help small businesses and hold tech giants accountable. The EU’s experience shows that DMA-style proposals aren’t just unlikely to achieve these goals. They’re also likely to harm consumers, competition, and innovation.

The DMA was intended to support “fairness” and “market contestability” for small businesses that rely on large digital “gatekeeper” platforms, like Amazon, Google, and Meta, to reach customers. The “gatekeepers” are mainly American tech giants. The DMA bans them from engaging in certain business practices, even if those practices benefit consumers or competition.

For instance, the DMA prevents Google from integrating its Maps, Flights, and Hotel Ads tools into search results, as this would be “self-preferencing” over third-party booking sites. Evidence shows that this ban has degraded the user experience by increasing the number of clicks required to see prices and make bookings, leading to reduced hotel bookings. Similarly, Apple is restricted from excluding third-party apps and app stores from its App Store and iOS, even though this has degraded security features, IP protections, and trustworthiness in Apple’s products by increasing the proliferation of pirated, less secure, and pornographic apps.

These mandates help some businesses but harm others, including developers of apps aimed at children who rely on parental trust in the highly curated App Store, and hotels that benefited from traffic directed through Google’s tools. Rather than upholding competitive markets, they let governments “pick winners” and undermine digital platforms’ ability to differentiate themselves or experiment to better meet consumer and business needs. This goes against American antitrust law’s focus on consumer welfare over punishing firms for size and success or shielding businesses from competition, an ethos that has let the US produce leading tech firms that have eclipsed would-be European peers.

Like the DMA, AICOA bans large digital platforms from self-preferencing and from using third-party seller and service provider data to refine their own offerings or better serve consumers—even though such practices are routine in non-digital industries, like grocery stores. The bill also claims to provide legal certainty for businesses, yet its language is vague and grants regulators broad discretion. For example, it uses amorphous phrases like “materially harm,” which courts must interpret without precedent, and allows the FTC to define what constitutes an anti-competitive practice through guidelines.

In Europe, the DMA’s ambiguity about the conditions and costs a platform can impose on third-party services—intended to maintain security and ensure fair value—has led regulators to impose heavy, retrospective fines on Apple without providing clear instructions for compliance, all while soliciting feedback from competing app stores and developers on what Apple should do. This uncertainty has delayed the rollout of new features, including AI tools, for European Apple and Google users.

AI development depends on deploying new technology at scale to gather data, refine foundation models, and solicit user feedback. Rules like the DMA, which create legal uncertainty and impose arbitrary limits, can discourage AI infrastructure and software investments, stifle innovation, and undermine U.S. tech leadership, as well as the ability of small businesses that rely on AI-integrated platforms to compete.

Unlike AICOA and the DMA, recent California Law Revision Commission (CLRC) recommendations that could be adopted by that state’s legislature apply even to non-digital businesses and would dramatically lower evidentiary thresholds for market power. The reforms penalize broad swathes of conduct by firms deemed to hold “significant market power,” including self-preferencing, without requiring a showing of likely or actual consumer harm or any weighing of pro- and anticompetitive effects. By banning “predatory pricing” without requiring a showing that alleged offenders would likely recoup their losses by raising prices later, the reforms discourage businesses from legitimately competing on price. The CLRC’s proposals radically pivot antitrust law from protecting consumers to protecting competitor businesses and stakeholders such as “trading partners.”

Such restrictions arbitrarily favor some businesses over others, leaving the competitive process at the mercy of government diktats instead of consumer demand.

Existing US federal and state antitrust laws already punish tech giants and platforms for anti-competitive behavior on a case-by-case basis, which also allows judges to limit inadvertent restrictions on competition or harm to consumers that could result from legal fixes, as recent rulings against Google and Apple show. Existing laws can and should be strengthened only if there is a strong rationale supported by economic evidence. Importing flawed foreign competition policies would only empower government officials and some competitors at the expense of consumers, innovation, and America’s global competitiveness.