“I hunted for, and stole, a source of fire … and it has shown itself to be mortals’ great resource and their teacher of every skill.”

So says Prometheus, the Titan of Greek mythology, in Aeschylus’s Prometheus Bound, explaining why he suffers in chains. For giving fire to mankind, he was condemned to eternal torment, bound to a rock while an eagle fed upon him each day. Fire was not merely warmth. It was power, independence, production, protection, and the first great escape from literal and figurative darkness. The human story began to change not when mankind learned restraint, but when it learned mastery. Civilization began not with renunciation, but with defiance.

Atop this civilization rests an odd, yet revealing modern ritual: Earth Hour. Today, Saturday, March 28, 2026, at 8:30 p.m. local time, people around the world will again be asked to switch off their non-essential lights for one hour. Organized by the World Wildlife Fund (WWF), founded in 1961, Earth Hour is meant to dramatize concern for nature and the conservation of the planet’s resources. The campaign now marks 20 years and includes landmarks such as Christ the Redeemer in Rio de Janeiro, the Sydney Opera House, and the Empire State Building in New York City.

Originally a grassroots movement, Earth Hour now presents itself as “a symbol of hope for nature and climate.” Lofty appeals to help nature and wildlife recover, reduce deforestation, and protect future generations now accompany the annual ritual of switching off the lights on an otherwise unremarkable Saturday in March. Yet even on its own terms, the story is less straightforward than the rhetoric suggests. As Song et al. wrote in a 2018 Nature study, “contrary to the prevailing view that forest area has declined globally—tree cover has increased by 2.24 million km2 (+7.1% relative to the 1982 level).”

The point is not that every environmental problem has vanished, but that global improvement does not always depend on a mass movement of symbolic austerity. Earth Hour’s gesture remains simple enough: dim the world briefly to express concern for the planet. But that symbolism points, perhaps unintentionally, to a deeper truth. Turning the lights off is easy. The true achievement of civilization was learning how to turn them on in the first place. If future generations are to inherit a better world, they will need more than rituals of restraint. They will need the abundance, safety, and human progress that only widespread access to energy can provide.

That is where rugged individualism shines most brightly in history. Thomas Edison and Nikola Tesla were not men of managed consensus, yet they both belonged to the same civilizational current: the transformation of electricity from scientific possibility into mass reality. Their fierce competition in the late 19th century sparked invention after invention. Edison’s incandescent lamp patent, US Patent No. 223,898, was issued on January 27, 1880; two years later, his Pearl Street Station began selling electricity in lower Manhattan. Tesla’s great leap came in 1888, when George Westinghouse purchased the rights to his polyphase alternating-current system, helping launch the battle of the currents and laying the groundwork for long-distance power transmission.

The true genius of capitalism was not merely to generate power, but to conduct it outward until light, warmth, and safety ceased to be luxuries for the few and became ordinary facts of life for the many. More than a century later, we still live inside the world that this rivalry charged into existence.

Electricity did not merely give cities more light. It gave them more order. In New York City, added street lighting has been associated with significant reductions in nighttime crime, including assaults, homicides, and weapons offenses. It also gave them greater protection from the elements. The Health Department reports that more than 500 New Yorkers die prematurely each year because of hot weather, with lack of air conditioning being the clearest risk factor for heat-stress death. Furthermore, electricity made cities more productive, not less. Research on US manufacturing shows that electrification raised labor productivity by reorganizing production around more efficient machinery and factory layouts. Light, warmth, safety, and output: these were the real gifts of electrification.

It is precisely this history that makes today’s sneers at rugged individualism sound so hollow, especially in New York City. For example, in his inaugural address on January 1, 2026, Mayor Zohran Mamdani promised to replace “the frigidity of rugged individualism with the warmth of collectivism.” But in the very city where Edison’s Pearl Street Station began selling electricity in 1882, that line reverses cause and effect. After a winter that brought one of New York City’s longest freezing stretches since 1963, the real source of warmth was not collectivist poetry, but the electric infrastructure that competition, capital, and invention made possible. If collectivism had accomplished even half of what competition did, New Yorkers might still be warming themselves by candlelight while calling it moral progress. 

For one hour each year, Earth Hour asks the world to rehearse darkness. But from Prometheus onward, the human story has been one of escaping it. Fire, then electricity, enlarged human freedom. The achievement worth honoring is not symbolic dimness, but the civilizational brilliance that made light ordinary.

In the United States, cloud seeding has long been a subject of controversy. The process involves releasing small quantities of compounds such as silver iodide (AgI) into the atmosphere, encouraging clouds to produce rain or snow. Critics call it “weather modification,” but cloud seeding is a moderate, cost-effective effort to enhance rainfall that can benefit the water-strapped Southwest by fortifying its water supply.

Although cloud seeding is used regionally, it has faced significant backlash. Skeptics point to health risks, flooding, and ethical objections that are magnified by conspiracy theories rather than grounded in scientific evidence. Yet research shows that the chemical concentrations used in cloud seeding are below dangerous thresholds, and there is no credible evidence linking it to floods.

An increasing number of states are working on legislation to restrict or outright ban this form of “geoengineering,” including a bill circulating in Arizona. Nine western states currently use cloud seeding to supplement their water portfolios, benefiting farmers and communities drawing from dwindling reservoirs and shrinking aquifers.

Rather than banning innovation in water management, states should encourage it. Cloud seeding offers a high return on investment at a fraction of the cost of permanent water infrastructure. It is most effective when driven by local and private investment and, when implemented correctly, can deliver meaningful results. 

By contrast, large infrastructure projects promise long-term water supply but require years of permitting and construction, massive upfront capital, and costly operations. Dismissing cloud seeding in an era of billion-dollar water proposals is both imprudent and wasteful.

Desalination starkly illustrates these trade-offs: heavily regulated, capital-intensive, and slow to deploy. California’s Carlsbad plant, one of the largest in the U.S., faced years of regulatory delays and cost roughly $1 billion to build. The plant’s energy-intensive water processing has led to an annual operating cost of up to $59 million.

In contrast, cloud seeding is a cost-effective, flexible alternative, with annual costs ranging from $5 million to $7 million and adjustable by season.

Research from North Dakota State University shows that cloud seeding can boost rainfall by five to ten percent at just 40 cents per planted acre. It benefits southwestern agriculture — especially water-intensive alfalfa — without draining overstressed groundwater or requiring costly infrastructure projects.
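To make the cost gap concrete, here is a back-of-envelope comparison using only the dollar figures cited in this piece. The 30-year amortization period for Carlsbad’s construction cost is my own assumption, added purely for illustration; it ignores financing costs and differences in water yield, so this is a rough sketch rather than a full cost-benefit analysis.

```python
# Rough annualized cost comparison using figures cited in the article.
# Assumption (not from the article): Carlsbad's ~$1B capital cost is
# amortized straight-line over 30 years, ignoring financing costs.

carlsbad_capital = 1_000_000_000   # ~$1B construction cost
carlsbad_opex = 59_000_000         # up to ~$59M per year in operating costs
amortization_years = 30            # assumed for illustration only

desal_annual = carlsbad_capital / amortization_years + carlsbad_opex

# Cloud seeding program costs cited above: $5M-$7M per year
seeding_annual_low, seeding_annual_high = 5_000_000, 7_000_000

print(f"Desalination, annualized: ${desal_annual / 1e6:.0f}M per year")
print(f"Cloud seeding: ${seeding_annual_low / 1e6:.0f}M-"
      f"${seeding_annual_high / 1e6:.0f}M per year")
print(f"Rough ratio: {desal_annual / seeding_annual_high:.0f}x "
      f"even at the high end of seeding costs")
```

Even under these generous assumptions for desalination, the annualized gap is more than an order of magnitude, which is the intuition behind calling cloud seeding a high-return complement rather than a replacement for permanent infrastructure.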

Like many economic issues, water management faces a knowledge problem. While bans on cloud seeding are imprudent, statewide mandates are also flawed because they fail to consider local water conditions. Private and local investment would better assess water needs. Large western states with diverse environments experience significant regional variation in precipitation patterns.

For example, Hiouchi, California, averages 79.31 inches of rain annually, while Stovepipe Wells receives only about two inches. These differences in rainfall make fixed targets ineffective. Locally informed approaches enable communities and private businesses to adapt to weather conditions, rather than relying on fixed goals.

Privately and locally funded cloud seeding programs date back to the early pioneers of the industry. North American Weather Consultants (NAWC) has operated since the 1950s, providing services to water districts, municipalities, universities, and private companies. Ski resorts in Colorado and Utah also use cloud seeding to boost snowfall for recreational needs.

The long history of small-scale, decentralized programs demonstrates that local operations can meet water needs effectively without statewide mandates. State governments should regulate cautiously rather than stifle yet another tool for strengthening local water supplies.

Private investment has also driven innovation in weather modification, making research and development more impactful. Public funding, by contrast, often slows progress with regulatory red tape, appropriation limits, and political constraints. When federal support for cloud seeding was sharply reduced in the 1980s, private, local, and state funding became essential to sustain technological advances.

Even traditional water infrastructure faces political hurdles. In 2022, the California Coastal Commission rejected the proposed Huntington Beach desalination plant despite years of planning. By contrast, private cloud seeding operations have long enjoyed the autonomy to experiment and refine their methods — without leaving taxpayers responsible for uncertain outcomes.

Private firms such as North American Weather Consultants and Weather Modification Inc. have driven innovation for decades, incorporating radar-guided weather tracking, atmospheric modeling, and hybrid ground-and-aircraft deployment to improve timing, make operations more efficient, and monitor results.

Cutting-edge startups like Rainmaker have introduced autonomous drones for dispensing precipitation-enhancing chemicals.

It was private companies incentivized by performance and market demand, not federal grants or fickle political priorities, that made these innovations a reality. If companies are free to respond to the market, little federal involvement is needed.

Cloud seeding might be shrouded in controversy, but state governments shouldn’t ban it; they should embrace it. Cloud seeding is cost-effective, easily adaptable to regional water needs, and can be successful if it isn’t crushed by overbearing regulation. 

In an age of water scarcity, limiting effective solutions is costly — especially for arid, landlocked western states that would benefit from an additional source of water.

For over two decades, gold’s role as a staple investment has grown more pronounced in the global financial system. Since 2000, the commodity has outperformed all major US stock indices. It has preserved purchasing power, protected investors during crises, and hedged against policy shifts.

The forces propelling gold higher today extend beyond its safe-haven status. A mix of technological change and geopolitical restructuring is reshaping how investors view gold. The result is a powerful combination of structural demand and constrained supply. These conditions help explain gold’s strong performance and why many believe its appeal is far from over. 

Below are thirteen major forces shaping the modern gold market.

1. Safe Haven in a Crisis 

Gold is a store of value. When currencies depreciate and governments falter, gold is the primary place of refuge for concerned investors. That reputation drives demand and pushes capital flows into gold during uncertain times.

2. Geopolitical Concerns 

Global tensions remain a powerful catalyst. Conflicts in Eastern Europe, instability in the Middle East, and shifting power dynamics in Asia have increased demand for assets that exist outside political control. Gold has been a major beneficiary of this environment.

3. Preservation of Purchasing Power 

History offers a striking comparison: roughly 200 ounces of gold bought an average home decades ago, and roughly the same amount still does today. While prices in dollars have changed dramatically, gold has preserved long-term real value. This property continues to attract investors seeking protection from currency debasement.

4. Central Bank Accumulation 

Some of the biggest buyers of gold are central banks and governments. Many of them are diversifying their holdings from currencies to hard assets. This shift reflects concerns about debt levels, currency risks, and geopolitical tensions. Central bank purchases have become a significant source of demand in the market.

5. Expanding Sovereign Debt 

Public debt has increased substantially across the globe. Relative to GDP, the US now carries significantly more debt than in previous decades, and other large economies face similar pressures. This could reduce confidence in long-term currency stability, making gold an attractive store of value.

6. Structural Policy Divides Across the World 

Differences in trade, regulation, energy, and industry policies have divided the world economically. With each economic bloc pursuing its own priorities, uncertainty in financial markets rises. Gold performs well in such an environment, where coordination is low and perceptions of risk are high.

7. Lower Growth and Structural Economic Changes

In some developed countries, productivity growth has slowed while regulatory complexity has increased. Some investors see this environment as less supportive of capital growth and profitability. Lower growth expectations have, in turn, increased allocations to defensive assets.

8. Strong Relative Performance 

Gold has outperformed various large U.S. equity market indices from 2000 through the mid-2020s, beaten inflation, and grown at a rate that well surpassed economic growth. Even in times when markets experienced strong rallies, gold performed well.

9. Global Reserve Rebalancing and Dedollarization

The rise of new economic blocs, such as the BRICS countries (Brazil, Russia, India, China, South Africa) and others, has been accompanied by growing gold reserves as part of their reserve diversification policies. The US dollar is still the leading reserve currency, but its share of global reserves has been gradually falling in recent years, while the share held in gold has risen correspondingly.

10. Technological and Industrial Demand 

Gold is a financial asset and an industrial metal. It is highly conductive and corrosion resistant. It is an essential component in the electronics industry, supercomputing infrastructure, and manufacturing. As technology advances, industrial demand places structural pressure on supply.

11. Digital Assets 

Digital asset markets are beginning to use gold as collateral. Many stablecoin issuers now hold substantial gold reserves alongside traditional securities. Stablecoin adoption has driven capital flows that support the underlying commodities to which they are pegged.

12. Portfolio Diversification and Low Correlation 

Gold has always been known for its low correlation with stocks and bonds. When stocks fall sharply, gold often moves in a different direction. Consequently, institutional investors are increasingly recognizing the role of gold as a diversifier and not as a speculative asset. 

13. Demand Continues to Outpace Supply 

Worldwide demand has been at record levels in the past few years. The rate of growth in mining production is low, and new discoveries are few. As demand grows at a rate that exceeds supply, prices are likely to move higher. 

The Bigger Picture

Investors often turn to gold for wealth preservation and long-term appreciation. But recent price action suggests there is more to gold than meets the eye.

Gold has been rising even when equity markets perform well, real interest rates increase, and inflation remains moderate. This suggests gold is being driven by forces beyond traditional crisis-related demand.

Gold now sits at the crossroads of monetary policy, geopolitics, technology, and broader changes in the global financial system.

Gold’s Expanding Role

Gold’s rise reflects more than fear or inflation. It reflects a world in transition. Governments are managing higher debt. Financial systems are evolving. Technology is expanding industrial demand. Reserve strategies are shifting.

Investors continue to seek assets that hold value outside political and monetary systems. Unless these underlying forces reverse in a meaningful way, gold’s role in global finance is likely to remain strong.

America’s fiscal and monetary problems look like two separate crises. They aren’t. Runaway government spending and an unruly Federal Reserve are two sides of the same coin. When Congress spends beyond its means, it creates pressure on the central bank to print money and paper over the debt. When the Fed operates without clear rules, it becomes the silent enabler of fiscal recklessness. Fix one without fixing the other and you haven’t solved anything. That is where we find ourselves today.

As I argued in my first book, the Fed has a rule problem: It doesn’t have one. For decades, monetary policymakers have operated under broad discretionary authority, adjusting interest rates and the money supply based on their judgment about what the economy needs. The results have been disappointing.

The case against discretionary monetary policy runs along two tracks: one about competence and one about legitimacy.

Start with competence. Central bankers face serious information problems. The economy is vast and complex, and the signals it sends are noisy. Policymakers receive data that is incomplete, revised, and often contradictory. By the time the Fed diagnoses a problem and adjusts policy, the underlying conditions may have already changed. Discretion sounds like flexibility. In practice, it often means groping in the dark.

But information problems are only half the story. Incentive problems compound them. Bureaucracies develop institutional interests of their own. The Fed, like any government agency, responds to political pressures, professional norms, and the priorities of its leadership. Monetary economists — the experts who advise the Fed and evaluate its performance — constitute their own interest group. They have professional stakes in a powerful, discretionary central bank. And then there’s perhaps the biggest incentive problem of all: the looming threat of fiscal dominance. It’s time to stop thinking about monetary policy in a vacuum.

There is a deeper question here, as was recognized almost 50 years ago by economists Thomas Sargent and Neil Wallace: are fiscal policymakers or monetary policymakers in the driver’s seat? When Congress and the Treasury spend freely and accumulate debt, they create pressure on the central bank to monetize that debt. If the fiscal authority “moves first” and the Fed “follows,” then monetary policy becomes an instrument of fiscal control, not an independent check on inflation. That is precisely what happened after 2020. The government spent at wartime levels even as the emergency receded, and the Fed soon accommodated. Inflation naturally followed.

So the problem is not simply that the Fed made mistakes. It is that the institutional structure invites those mistakes. A discretionary Fed embedded in a debt-heavy fiscal environment will tend to prioritize the short-term over the long-term, accommodation over restraint, and political convenience over monetary discipline.

The solution is a Fed regime change. We need actual legislation to change the central bank’s mandate. Administrations change. Personnel change. But laws can become, as James Buchanan put it, “relatively absolute absolutes.” If Congress replaces the Fed’s current mandate, which includes employment and interest rate targets alongside price stability, with a single, clear mandate for price stability, the Fed can credibly commit to refrain from underwriting future deficit spending. Congress can’t count on the Fed bailing it out if the Fed’s price level target limits the printing press.

The goal is not to make the Fed powerless but to make its power legible and therefore predictable. A rule-bound Fed, focused solely on price stability, empowers planning by businesses and households. It rewards saving. It discourages the kind of speculative boom-and-bust cycles that discretionary policy tends to produce. And it will force fiscal policymakers to get their mismanaged affairs in order.

Other proposed solutions won’t work. First, we should reject presidential control over monetary policy. Giving the executive branch direct authority over interest rates would politicize money even further. Second, simply appointing more “conservative” central bankers offers no durable fix. Hawkish Fed chairs come and go; without a reformed mandate, the institutional logic reasserts itself.

Inflation has cooled from its recent peaks and deficits are not as high now as during the COVID period, yet the underlying institutional dysfunction remains. The Fed is still improvising, still subject to fiscal pressure, still operating without the kind of clear rules that would make its behavior predictable and its decisions defensible. Monetary policy by bureaucratic fiat is not good enough. To prevent money mischief and fiscal folly, only the discipline of rules will do. The solution is a single mandate: price stability alone.

America has spent more than $20 trillion on fighting poverty since the introduction of President Johnson’s Great Society program in 1964. Sixty years later, how are we doing?

That depends, as it turns out, on how you measure it.

Last month, Senator Kennedy (R-LA) introduced a bill that would require the Census Bureau to report a new poverty metric as an alternative to the Official Poverty Measure (OPM) by including both cash and non-cash welfare benefits in its calculations.

As Kennedy points out, this is a much-needed fix. The OPM’s methodological weaknesses are well documented. Most notably, it ignores the hundreds of billions of dollars the government spends each year to assist low-income families through tax credits like the Earned Income Tax Credit and in-kind transfers such as Medicaid, food stamps, and housing subsidies. It also overstates inflation and relies on outdated assumptions about food spending. In short, the OPM paints an egregiously inaccurate picture of material poverty in America.

When one includes taxes and transfers, as economists Richard Burkhauser and Kevin Corinth did in a recent paper with the National Bureau of Economic Research, the “full-income” poverty measure sat at just 3.7 percent in 2023 — 1.6 percent after including employer-provided health insurance — a far more optimistic look than the OPM’s 11.1 percent from the same year.

That sounds like a triumph. But Burkhauser and Corinth take it one step further and use their “full-income” measure to track changes in the poverty rate dating back to 1939. 

Contrary to popular belief, they find that the greatest era of poverty reduction happened before Johnson declared war on it.

From 1939 to 1963, absolute full-income poverty plummeted by 29 percentage points, from 48.5 percent to 19.5 percent. Then, despite the government pouring trillions of taxpayer dollars into combating poverty, poverty fell by only 15.7 percentage points from 1963 to 2023. Barely half the progress in more than twice the time.
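The “barely half the progress in more than twice the time” claim is easy to verify from the figures above. This quick sketch, using only the numbers cited in this piece, converts both periods into percentage points of poverty reduction per decade:

```python
# Percentage-point declines in full-income poverty, per the Burkhauser
# and Corinth figures cited above. The per-decade conversion is
# illustrative arithmetic, not a figure from their paper.

pre_war_on_poverty = (1939, 1963, 48.5 - 19.5)   # 29.0 points over 24 years
post_war_on_poverty = (1963, 2023, 15.7)         # 15.7 points over 60 years

for start, end, decline in (pre_war_on_poverty, post_war_on_poverty):
    per_decade = decline / (end - start) * 10
    print(f"{start}-{end}: {decline:.1f} points fewer, "
          f"~{per_decade:.1f} points per decade")
```

The per-decade pace works out to roughly 12 points before 1964 versus under 3 points after, a slowdown far larger than the raw totals alone suggest.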

But the stagnating decline is only half the story. The more consequential difference is what drove it. 

Before 1964, the main engine of poverty reduction was increases in market income — a measurement that includes wages, salaries, and other forms of income from employment. From 1939 to 1959, market income poverty fell by 26.1 percentage points, nearly all of the 27.3 percentage-point decline in full-income poverty among working-age adults over the same period. In short, before the rapid expansion of the welfare state, most people were earning their way out of poverty.

After 1964, that engine stalled. Market income poverty fell by just 3.9 percentage points from 1967 to 2023, while post-tax, post-transfer poverty fell by 10 percentage points. Even though poverty has continued to decline over the past six decades, most of that was due to the ever-expanding generosity of government transfers.

While low-income Americans were benefiting from the biggest poverty reduction in the country’s history, the percentage of working-age adults relying on government transfers for more than half their income decreased from 2.9 percent in 1939 to 2.7 percent in 1959.

By 2023, this number had nearly tripled to 7.6 percent, even reaching as high as 15 percent in some years.

As Mercatus scholar Jack Salmon put it: “The War on Poverty changed the how of poverty reduction, but it didn’t accelerate the how much.” 

If anything, by changing the former, it may have blunted the latter. A 76 percent increase in real median income, paired with rising employment and higher productivity, all driven by rapid postwar economic expansion, pulled more people out of poverty in 24 years than trillions of dollars in government-imposed wealth redistribution have done in 60.

Some may argue that this trend is to be expected. After all, reducing poverty from 48 percent to 20 percent is arithmetically easier than reducing it further because there are simply fewer people left below the poverty line, and those who remain tend to face the most entrenched barriers to self-sufficiency.

Fair enough. But as Burkhauser and Corinth point out, full-income poverty largely stagnated starting in the 1970s — right as welfare spending was ramping up dramatically. In short, taxpayers have been paying for a multitrillion-dollar boondoggle that has yielded diminishing marginal returns.

So, what was the main driver behind the pre-1964 miracle? Simple: Economic growth.

The pre-1964 record, along with centuries of evidence, suggests that nothing has worked better than economic growth in helping individuals, especially those at the bottom of the income ladder, to achieve a higher quality of life. Across the world, economic growth driven by liberalization helped pull almost one billion people out of extreme poverty from 1990 to 2010.

Here at home, the pattern still holds. The Fraser Institute’s research shows that North American states with higher and increasing levels of economic freedom tend to have higher income growth and employment, more income mobility, especially among low-income households, higher economic growth, less homelessness, and lower levels of food insecurity.

The fruits of economic growth are visible in ways that poverty statistics fail to capture, especially for America’s poor. As Joseph Heath points out, 95 percent of American households below the poverty line have electricity, indoor plumbing, a refrigerator, a stove, and a color television. More than 80 percent have an air conditioner and a cell phone, and two-thirds own a washing machine and dryer. Economic growth, not government programs, made these once-luxury goods, items formerly out of reach even for many wealthy households, accessible to nearly everyone. It continues to bear fruit today: wages for typical American workers are at all-time highs.

The most powerful anti-poverty program had no enrollment forms, caseworkers, or spending bills. It was a growing economy that helped millions of people earn their way to a better life. As such, subsequent efforts should focus on removing government-created barriers to economic growth, occupational opportunities, and job market entry rather than adding another layer of expensive, inefficient wealth transfers.

Senator Kennedy is right to say we need a more accurate measure of poverty. When analyzing the best ways to combat poverty, policymakers should reflect on whether the welfare state was ever the right tool for the job.

The extended partial government shutdown has led to long lines of frustrated passengers at airports nationwide as unpaid Transportation Security Administration (TSA) agents walk out. Officials even warn that small airports may shut down due to the absences. If we set aside, for a moment, the Washington Monument syndrome likely also at play, the lesson to be learned here is not the importance of funding government services, but the exact opposite.

The TSA has a long history of failing to such a degree that it could never survive had it not been run by and within the government. Costing taxpayers and travelers $10 billion annually, not counting the inconvenience and time lost, the agency fails even on its own terms. The failure rate in 2015 was over 90 percent. The same in 2017. If these data seem dated, it is because they are. Instead of fixing the problems, the results of the agency’s internal testing were classified. In the absence of data, the only reasonable interpretation is that the agency remains a catastrophic failure to this day.

The recent airport chaos underscores how security theater has become an unbearable bottleneck. It also shows how dysfunctional government services become problematic beyond the waste of resources and the inconveniences they cause. The difference between government services and market solutions offered by businesses is stark. A private business that fails to deliver loses customers, and therefore both revenue and market share. Its failure is its own problem, which is a strong incentive to fix it.

As a government agency, the TSA’s failure is not its problem but is instead shifted onto travelers (their “customers,” as it were), who are, in some cases, left waiting six hours in line to get through the security checkpoint. In fact, this failure can easily be construed as a benefit for the TSA, which now — because the government requires all passengers to pass through its bottleneck — has leverage to demand more funding. As a result, the destruction wrought by dysfunctional government becomes an argument for more of it, and taxpayers are left with the bill.

The arguably zero value added by the TSA’s security theater thus becomes a self-reinforcing bloating of the bureaucracy, making the agency an ever-expanding jobs program that burdens taxpayers while harassing travelers.

Imagine if security had instead been the responsibility of airlines. Rather than cause constant delays and inconvenience, it would be in the airlines’ interest to streamline the process and make it as unobtrusive as possible. A failure to staff security functions would not be travelers’ (customers’) problem but the airlines’, who benefit only when we fly — and remain liable to keep travelers safe. The TSA has no such responsibility.

But a government service is worse than destructive operating incentives alone can explain. We often fail to realize that what exists in the present is a result of developments in the past, and that the future, too, will be different. In other words, the market economy, like society overall, is a process in constant flux, not a static state. Privately provided security would, just like any other service offered in the market, be subject to constant innovation, or what economist Joseph Schumpeter called creative destruction.

Creative destruction is the power of disruptive entrepreneurship to cause leaps of improvement. As entrepreneurs introduce innovations that bring great benefit, consumers abandon the solutions they previously chose to use. For example, when Henry Ford introduced the Model T, people flocked to the affordable automobile — the greater value — and stopped relying on horses and carriages. The automobile became the new, higher standard for transportation. Automobile manufacturing and gas stations replaced horse breeders and stables. 

We would thus see continuously improved security measures provided at lower cost — taking less time and being more convenient for travelers. The value to airlines is that it benefits their customers. It’s a competitive advantage and a value-add.

The very opposite is true for government services such as the TSA. They gain nothing from providing the service they are tasked with effectively and efficiently. If anything, the incentives point the other way: if the TSA found ways to reduce costs, the agency’s budget would likely be cut in response. It would effectively be penalized for improving.

And therein lies the crux: government agencies have little or no incentive to serve the users of their service. But private businesses stand and fall by providing customers with value. It is no surprise, therefore, that airport security is a hassle and inconvenience — and that it is expensive. The TSA is a bureaucracy and a jobs program that does not keep us safe. 

Recognizing this fact helps us understand the chaos at airports. More funding would do more harm than good.

A good few years before the AI craze, my Oxford lecturer gave a presentation on the shifting nature of work. An economic historian by trade, Judy Stephenson traced the arc of compensation from labor-market arrangements in early modern London, weaving a full-circle tale: workers were paid by the piece (output) in the nineteenth century, by the hour (input) for most of the twentieth, and by output once again in the twenty-first-century gig economy.

Delivery and ride-sharing services were the major concerns of the intelligentsia during the 2010s. Workers were paid not for their time but for the output they quite often physically delivered, with resulting debates over unions, safety, and wages.

Stephenson accounted for the changes on very Coasean terms: In the assembly-line work of a century ago, it wasn’t worth the transaction costs of figuring out exactly whose contribution was worth how much, so you just roughly averaged out everyone’s hours with some extra perks for responsibility or long service. And compared to the at-home weavers of the century before, it was much less clear who was responsible for the exact value-add. Put differently, the loss of efficiency associated with time-based pay (free-riding, monitoring, slacking off, or shirking work) might have been less than the costly efforts necessary to constantly re-establish rates for specific tasks.

Economics textbooks, heavy on the modeling, might imply that performance pay is more efficient since it aligns incentives and minimizes free-riding. Enter computers and digital markets matching supply and demand, plus standalone gig workers entirely responsible for their own output. Those institutional changes shifted the bargaining power and the Coasean transaction costs involved — making the real world much more like the sketched model of an economics textbook.
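The Coasean trade-off described above can be sketched as a toy calculation: a firm picks whichever pay scheme has the lower total cost, where hourly pay carries an efficiency loss from shirking under imperfect monitoring, and piece rates carry a per-task cost of measuring output and re-establishing rates. All numbers below are illustrative assumptions, not figures from Stephenson’s lecture.

```python
# Toy sketch of the transaction-cost comparison between hourly and piece pay.
# All parameter values are illustrative assumptions.

def cost_of_hourly_pay(hours, wage, shirking_loss_rate):
    """Wage bill plus the output lost to slacking under loose monitoring."""
    wage_bill = hours * wage
    return wage_bill + wage_bill * shirking_loss_rate

def cost_of_piece_pay(units, rate, measurement_cost_per_unit):
    """Piece-rate bill plus the cost of metering and re-pricing each task."""
    return units * rate + units * measurement_cost_per_unit

# Assembly line circa 1920: individual contributions are hard to attribute,
# so per-unit measurement is expensive and hourly pay wins.
factory_hourly = cost_of_hourly_pay(hours=40, wage=1.0, shirking_loss_rate=0.05)
factory_piece = cost_of_piece_pay(units=200, rate=0.2, measurement_cost_per_unit=0.05)

# Gig platform circa 2015: an app meters each delivery almost for free,
# so piece (output) pay wins.
gig_hourly = cost_of_hourly_pay(hours=40, wage=1.0, shirking_loss_rate=0.05)
gig_piece = cost_of_piece_pay(units=200, rate=0.2, measurement_cost_per_unit=0.001)

print(f"factory: hourly {factory_hourly:.1f} vs piece {factory_piece:.1f}")
print(f"gig:     hourly {gig_hourly:.1f} vs piece {gig_piece:.1f}")
```

On these assumed numbers, the factory’s high measurement cost makes hourly pay cheaper, while the gig platform’s near-free metering flips the choice to piece rates, mirroring the historical arc described above.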

Easily Replicated Abundance Meets the Economics of Infinite Content

There’s an obvious self-selection in the current labor-related worries coming our way: the worriers are precisely those of us who have invested most in the credentialist commentariat, who have sacrificed our lives and oriented our identities around the very cognitive and generative skills that LLMs now so effortlessly replicate.

It’s no longer that hard to have ChatGPT write like me (just train it on my past writing), have Claude code like a programmer with a decade of experience, or have a combined AI effort produce a beautiful, two-minute, period-piece ad spot for $100.

In The Great Harvest, a recent and ironically mostly AI-generated book, Adam Livingston captures the white-collar workplace revolution underway: It’s “not that your career will vanish overnight but that it was always just a fragile assemblage of solvable problems, [… your job] was actually a collection of separate functions waiting to be identified, isolated, and optimized away.”

The music industry and the economic value of songs were early indicators here, with supply and production far surpassing any feasible consumption on the other side. Even though the economic threat originally stemmed from piracy rather than machine-generated material, the marginal value unavoidably fell to around zero. While Taylor Swift rakes in royalties from streams and other artificially scarce legal arrangements, she generates more economic value from concerts and merch. Her physical being becomes the ultimate, rivalrous, nonreplicable luxury good.

With the marginal cost of producing videos, images, music, or words going to zero, we should have expected infinite content and next-to-no meaning — see YouTube, TikTok, or Twitter. 

With the rest of the arts and the white-collar knowledge industry up next, it’s a bit of an economist’s puzzle why prices (i.e., wages) haven’t dramatically fallen yet to reflect the now much more abundant supply — stories of social anchoring or nominal contract rigidities, no doubt. So far, we’re much more likely to see quantity adjust, meaning fewer workers or worse labor-market conditions for programmers, journalists, accountants, and other white-collar jobs.

Where’s the Value? Humans as Tastemakers

A lot of digital ink has been spilled on trying to identify where we go from here. In a world of informational abundance and adequately generated text at everyone’s fingertips, where’s the economic value?

“Brainpower is now a commodity that is going cheap,” Andrew Yang reflected this month. Perhaps the best thing we can say about his UBI-infused presidential bid in 2020 is that it was premature. 

“We have a love-hate relationship with working for a living,” Tim Harford observed for the Financial Times; the pain and hardship of working is heavily bound up with meaning and identity. Fred Krueger and Ben Sigman, in another recent book, observe that the “labor theory of value collapses when machines do all the labor,” and that “scarcity pricing becomes meaningless when AI makes many things abundant.” As intelligence becomes infinite, they conclude, the finite becomes infinitely valuable. 

These reflections might as easily have been titled “The Return of the Labor Theory of Value,” not because the LTV was a particularly revolutionary economic theory, but because of what it indicates about our infinitely replicable information and knowledge system going forward. If everything from music to code, words, and video can be created at the press of a button, the only scarce thing left beside the physical world is our human attention. The things we choose to do, choose to look at, choose to labor on.

Fiction writers, faced with the nearly infinite onslaught of storylines and millions of predominantly self-published titles each year, have realized this: Their words or imagined characters may not be scarce, but the very fact that they labored intensively over them is what other humans recognize as worthwhile and impressive. (We might ultimately decide to pay a premium for human connection, attention, or presence.)

Book sales, while pretty stagnant in nominal and real terms, might be monetary votes of appreciation more than actual desire or follow-through to consume the work. 

In the past, these industries had an overhang of gatekeepers and tastemakers deciding what was good music, good art, good writing, or good journalism. In recent decades, it might have felt liberating to have the gatekeepers shoved aside by technological means, but only now that they’re gone are we starting to miss them. The artificial scarcity they imposed conferred excess economic value on songs and books and movies that can now be generated and duplicated by the millions.

One way out, then, is to recreate the gatekeeping — not in production, for that ship has sailed, but in attention and awareness. We might look to respectable minds, as we once did to respectable labels or studios or outlets, not to report what is, in the journalists’ style, but to tell us what matters. We would trust in their vision of what matters, using their long and somewhat obsolete experience as a filtering mechanism against the information overload we’re otherwise doomed to.

Muscle lost its economic dominance long ago; we all know that story. Now that machines are coming for the brains, we have a similar story of scarcity, abundance, and obsolete skills to contend with. What remains scarce — attention, trust, physicality, judgment, embodied presence — will command the rent. 

Two years after the European Union (EU) put its Digital Markets Act (DMA) into effect, the results have been mixed to negative. Promises of certainty, lower enforcement costs, and a more innovative and competitive digital ecosystem haven’t materialized.

Rather than learn from Europe’s mistakes, Californian policymakers and federal proponents of Sen. Amy Klobuchar (D-MN)’s American Innovation and Choice Online Act (AICOA) would import similar ideas to ostensibly help small businesses and hold tech giants accountable. The EU’s experience shows that DMA-style proposals aren’t just unlikely to achieve these goals. They’re also likely to harm consumers, competition, and innovation.

The DMA was intended to support “fairness” and “market contestability” for small businesses that rely on large digital “gatekeeper” platforms, like Amazon, Google, and Meta, to reach customers. The “gatekeepers” are mainly American tech giants. The DMA bans them from engaging in certain business practices, even if those practices benefit consumers or competition.

For instance, the DMA prevents Google from integrating its Maps, Flights, and Hotel Ads tools into search results, as this would constitute “self-preferencing” over third-party booking sites. Evidence shows that this ban has degraded the user experience by increasing the number of clicks required to see prices and make bookings, leading to fewer hotel bookings. Similarly, Apple is barred from excluding third-party apps and app stores from iOS, even though opening the platform has degraded security features, IP protections, and trustworthiness in Apple’s products by increasing the proliferation of pirated, less secure, and pornographic apps.

These mandates help some businesses but harm others, including developers of apps aimed at children, who rely on parental trust in a highly curated app store, and hotels that benefited from traffic directed through Google’s tools. Rather than upholding competitive markets, they let governments “pick winners” and undermine digital platforms’ ability to differentiate themselves or experiment to better meet consumer and business needs. This goes against American antitrust law’s focus on consumer welfare over punishing firms for size and success or shielding businesses from competition, an ethos that has let the US produce leading tech firms that have eclipsed would-be European peers.

Like the DMA, AICOA bans large digital platforms from self-preferencing and from using third-party seller and service provider data to refine their own offerings or better serve consumers—even though such practices are routine in non-digital industries, like grocery stores. The bill also claims to provide legal certainty for businesses, yet its language is vague and grants regulators broad discretion. For example, it uses amorphous phrases like “materially harm,” which courts must interpret without precedent, and allows the FTC to define what constitutes an anti-competitive practice through guidelines.

In Europe, the DMA’s ambiguity about the conditions and costs a platform can impose on third-party services—intended to maintain security and ensure fair value—has led regulators to impose heavy, retrospective fines on Apple without providing clear instructions for compliance, all while soliciting feedback from competing app stores and developers on what Apple should do. This uncertainty has delayed the rollout of new features, including AI tools, for European Apple and Google users.

AI development depends on deploying new technology at scale to gather data, refine foundation models, and solicit user feedback. Rules like the DMA, which create legal uncertainty and impose arbitrary limits, can discourage AI infrastructure and software investments, stifle innovation, and undermine U.S. tech leadership, as well as the ability of small businesses that rely on AI-integrated platforms to compete.

Unlike AICOA and the DMA, recent California Law Revision Commission (CLRC) recommendations, which could be adopted by that state’s legislature, apply even to non-digital businesses and would dramatically lower evidentiary thresholds for market power. The reforms penalize broad swathes of conduct by firms deemed to hold “significant market power,” including self-preferencing, without any need to show likely or actual consumer harm or to weigh pro- and anticompetitive effects. By banning “predatory pricing” without requiring a showing that alleged offenders would likely recoup their losses by raising prices later, the reforms discourage businesses from legitimately competing on price. The CLRC’s proposals radically pivot antitrust law from protecting consumers to protecting competitor businesses and stakeholders such as “trading partners.”

Such restrictions arbitrarily favor some businesses over others, leaving the competitive process at the mercy of government diktats instead of consumer demand.

Existing US federal and state antitrust laws already punish tech giants and platforms for anti-competitive behavior on a case-by-case basis, which also allows judges to limit inadvertent restrictions on competition or harm to consumers that could result from legal fixes, as recent rulings against Google and Apple show. Existing laws can and should be strengthened only if there is a strong rationale supported by economic evidence. Importing flawed foreign competition policies would only empower government officials and some competitors at the expense of consumers, innovation, and America’s global competitiveness.

The strains emerging in the roughly $3 trillion private credit market are no longer isolated anecdotes; they are coalescing into a coherent signal of tightening financial conditions at precisely the wrong moment for the broader economy. 

As discussed previously, a growing list of developments has unsettled investors. Now, on top of the markdowns at Blue Owl Capital, Apollo Global Management, and Morgan Stanley’s North Haven Private Income Fund, JPMorgan Chase has begun marking down private credit loans, and concerns have gone international. These are not systemic failures, but they do mark the transition of private credit from a benign, yield-enhancing allocation into a market experiencing its first meaningful credit cycle. The sector, which expanded rapidly after the Global Financial Crisis as banks retreated from riskier lending, now faces the test of higher rates, weaker borrower fundamentals, and more discerning capital.

It is critical to mention (or reiterate) that this is not shaping up as a 2008-style solvency crisis. The private credit market is small, leverage is generally lower, and there is little evidence of the kind of widespread fraud or securitization opacity that defined the subprime mortgage crisis. But that comparison risks missing a more relevant dynamic: private credit is a tightening mechanism. Its problems do not need to trigger bank failures to matter. Instead, they transmit stress through funding channels, into refinancing constraints, and ultimately into valuation pressure. Banks’ exposure — variously estimated from under $100 billion to potentially near $1 trillion globally when commitments are included — creates a feedback loop whereby losses, or even perceived risks, in private credit lead to tighter lending standards broadly. That tightening does not remain contained; it ripples outward to middle-market firms, consumer borrowing, and, ultimately, aggregate demand.

The mechanics of that tightening are already visible. Higher yields increase borrowing costs directly, but they also operate indirectly by raising discount rates, lowering asset valuations, and making refinancing more difficult. Private credit funds, often reliant on bank revolvers and leverage to enhance returns, become more fragile as funding costs rise. Borrowers — especially highly leveraged, floating-rate borrowers such as software firms — face a double bind of rising debt service burdens and deteriorating business prospects, particularly in sectors now facing disruption from generative AI. 

Estimates that 15 percent to 25 percent of private credit portfolios are exposed to such firms underscore the vulnerability, with some projections suggesting default rates could approach eight percent in stressed scenarios. Even absent widespread defaults, the marginal borrower is already being shut out, and that is the entire point: credit availability is shrinking.
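The exposure and default figures just quoted can be turned into a back-of-the-envelope expected-loss range. The 15–25 percent exposure share and roughly 8 percent stressed default rate come from the text, and the $3 trillion market size appears earlier in the piece; the 40 percent loss-given-default is purely an illustrative assumption.

```python
# Back-of-the-envelope expected-loss arithmetic for the figures quoted above.
# Exposure shares (15-25%) and the ~8% stressed default rate are from the
# article; the 40% loss-given-default is an assumption for illustration.

MARKET_SIZE_USD_TN = 3.0      # approximate size of the private credit market
STRESSED_DEFAULT_RATE = 0.08  # stressed-scenario default rate cited above
LOSS_GIVEN_DEFAULT = 0.40     # assumed severity for a defaulted loan

def expected_loss_usd_bn(exposed_share):
    """Expected loss, in $bn, on the slice exposed to AI-disrupted borrowers."""
    exposed_usd_tn = MARKET_SIZE_USD_TN * exposed_share
    return exposed_usd_tn * STRESSED_DEFAULT_RATE * LOSS_GIVEN_DEFAULT * 1000

low = expected_loss_usd_bn(0.15)   # 15% of portfolios exposed
high = expected_loss_usd_bn(0.25)  # 25% of portfolios exposed
print(f"stressed expected loss: ${low:.0f}bn to ${high:.0f}bn")
```

On these assumptions, the stressed expected loss is roughly $14 billion to $24 billion — small relative to the banking system, which is consistent with the argument that the danger here is tightening credit conditions rather than solvency.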

[Chart: Bank of America Private Credit Proxy (white), VettaFi Private Credit Index (blue), and Indxx Private Credit Index (orange), 2018–present. Source: Bloomberg Finance, LP]

This tightening is unfolding against an increasingly unfavorable macro backdrop. Energy prices are rising, renewing inflationary pressure into an environment where disinflation had only recently begun to take hold. At the same time, yields across the curve have been moving higher, reflecting both inflation concerns and increased term premia. Rate futures markets, which had priced a steady path of easing, are now assigning a small but meaningful probability that policy rates could end the year higher rather than lower. That shift matters disproportionately for private credit, where floating rate structures and short-duration funding expose both lenders and borrowers to immediate changes in financing conditions.

The result is a reinforcing cycle. Higher energy prices push inflation expectations upward, keeping central banks cautious. That sustains higher yields, which tighten financial conditions directly and through channels like private credit. As private credit funds pull back, mark down assets, or restrict redemptions, confidence weakens and liquidity becomes more selective. This, in turn, constrains investment and hiring, not only at other companies but, in some cases, at the very firms that have come to depend on private lending. It is a quieter, more diffuse form of stress than in 2008, but consequential nevertheless.

Two factors make the current moment particularly delicate. The first is that pressures are converging rather than offsetting. In prior cycles, falling energy prices or easing yields might have cushioned a credit-tightening episode. Today, the opposite is occurring: energy, rates, and credit conditions are all moving in a direction that arrests growth. Private credit is not the epicenter of a crisis, but it is an increasingly important transmission channel through which macro tightening is being amplified.

The second is how much remains unknown. There is no centralized reporting, and visibility into indirect exposures is limited. In fact, there is no consistent definition of what the concept of private credit as an asset class ultimately encompasses. Also unclear is where the risks ultimately reside: would losses stay within private credit vehicles, migrate onto bank balance sheets, or into retail portfolios, pensions, and insurance structures that may not fully disclose their exposure? While the situation does not threaten a “Lehman moment” in scale or leverage, the lack of transparency means policymakers and analysts cannot confidently assess whether stresses will remain contained or propagate through tightening credit conditions, making the key risk not what is visible, but what remains hidden.

The emerging strains in private credit should be understood less as a harbinger of systemic collapse and more as an early indicator of economic deceleration. The asset class is doing what credit markets ultimately do in late-cycle conditions: becoming more selective, much more expensive, and far less forgiving. While far from inevitable, that process, especially when synchronized with rising input costs and a shifting rate outlook, is unlikely to be benign.

Paul Ehrlich, famed biologist, died last week at age 93. Ehrlich rose to fame in the 1960s as the author of a book that resonated powerfully with the public, The Population Bomb, and became a recurring guest on late-night talk shows and a frequent subject of discussion in all the major newspapers. The even more famous Johnny Carson, interviewing him in 1980 — more than a decade after the book’s publication, a sign of its lasting impact — said he generated “more mail than any guest we ever had on the show.” 

All in all, he appeared 25 times on one of history’s most famous talk shows.

The Population Bomb arrived at the right time: economic growth was fast across the world, and so was population growth. Given finite resources, Ehrlich argued, a growing population (3.5 billion people in 1968) would outstrip food production and deplete the stock of key resources (think metals, fossil fuels, farmable land). Eventually, starvation would occur, mass famines would follow, and social collapse would take place. Whatever technological progress could be achieved would only delay the inevitable — and only do so trivially.

To stave off the chain reaction, Ehrlich suggested, economic growth would need to slow down. Overpopulation should be curtailed by discouraging large families, possibly with coercive population control measures. However, Ehrlich did not stop there. He proposed that the Federal Communications Commission should discourage media that portrayed large families positively. He argued for immigration restrictions because allowing the poor of the world to come to America would accelerate their consumption and hasten the collapse. He argued that international aid should be tied to conditions requiring other nations to slow down population growth. All his policy proposals ended up being calls for greater coercion and greater control.

Ultimately, he was proven wrong. We now have more than twice as many humans on this planet as when Ehrlich wrote his doomsday prophecy. We live longer, healthier, wealthier, safer lives on a planet that has, on many dimensions (but not all), grown cleaner. None of the extreme predictions came to pass. Technological innovations were not trivial — they were exceptional. The Green Revolution, improvements in transportation, and gains in energy efficiency all staved off the predicted catastrophe.

Ehrlich’s intellectual nemesis — population economist Julian Simon — had long argued that humans were capable of producing economic growth and reducing environmental impacts, and of creating and innovating our way out of these problems. Humans, in Simon’s view, were The Ultimate Resource. In all the obituaries for Ehrlich, Simon is mentioned for his contrarian optimism (often labeled Cornucopianism) and for having bet on these outcomes against Ehrlich.

But, amid all the commemorations, claims of vindication, and assertions that Ehrlich was merely “premature,” something has been forgotten: Paul Ehrlich lost even within the environmental movement he had helped fuel. His views have been largely, if subtly and not always explicitly, abandoned — in favor of those of Julian Simon.

To see why, think about the explicit premise that Ehrlich held: humans are mouths to feed, polluters, and ultimately trespassers in the ecosystem. In other words, for the biologist that he was, they were a form of parasite. If a population grows too large, correction must come through extinction since the parasite kills the host. Human ingenuity plays little role; at best, it is trivial. After all, a parasite is a parasite. If the parasite innovates, it is to be a better parasite. Humans are not creators or even equal creatures, but burdens upon the ecosystem.

From that premise, it follows naturally that some degree of population control (including coercion) could be justified. Indeed, this view licenses a normative stance that treats some humans as dispensable, or as fit subjects for measures that most people would find, and did find when Ehrlich’s proposals were applied, morally repellent.

In contrast, Simon’s view was that humans are not merely consumers. We are creators. Given the right institutions, we can solve environmental problems through innovation. The real question is not population, but the institutional framework within which people operate. In fact, Simon frequently pointed out that Ehrlich’s prediction could come true because of the policies he proposed. Innovation rarely happens under compulsion. Innovation requires open environments that encourage it. Being a libertarian, he argued that the most extreme environmental disasters occurred in coercive regimes such as the USSR, Communist China, and Castro-led Cuba. That coercion is similar in nature (though not in intent) to what Ehrlich desired. Simon also argued that in uncoerced, free-market economies, improvements and innovations emerge to solve problems as they arise.

In Simon’s view, institutions mattered above all else. The term is broad, to be sure. Classical liberals, conservatives, and libertarians — closer to Simon — tend to emphasize secure property rights, open markets, and free trade as drivers of innovation. Social democrats, centrists, and progressives, by contrast, often use “institutions” to mean a capable state that regulates to solve problems. In their view, markets alone are not sufficient; government intervention, such as pricing pollution, is justified as a way to change behavior and spur innovation by aligning private incentives with social costs. In this sense, “institutions” carries very different meanings across perspectives.

But this is also where it becomes clear that Paul Ehrlich lost the argument. Consider the case of a carbon tax. Its justification rests on the idea that pricing pollution changes behavior and encourages innovation — not that humans are parasites, but that they respond to incentives. The premise is cooperation, not coercion born of scarcity panic.

All of these perspectives share a crucial assumption: humans are capable of solving problems. Environmental outcomes depend on incentives and institutions, not on reducing the number of “mouths.” In that sense, even Ehrlich’s opponents across the ideological spectrum converge on a common conclusion: humans are not parasites, but the ultimate resource.

This was not always the case. Environmental movements from the 1940s through the 1970s were far more receptive to Paul Ehrlich’s view. Many on the left and the right accepted his core premise, and for a time it was dominant. Today, it is not merely contested; it has largely been abandoned, even by those who neither cite nor sympathize with Julian Simon.

This is the real defeat of Ehrlich — even where one could think he had the most support, he lost ground. His core premises have been largely abandoned by all except the most extreme. In a way, Ehrlich died well after his ideas did.

And those ideas were truly horrible for human welfare. I do not rejoice in Ehrlich’s death. I will, however, dance on the tomb of his ideas, and you should too. And when dancing, I will wear my “Julian Simon Fan Club” pin.