There is an old economics adage that says if you want people to buy more of your good or service, you should raise the price. Right?

You would think something so obviously false would never be tried in the real world. Yet Chicago has decided to put this “law” into practice. Yes — the city has chosen to make visiting more expensive in order to attract more visitors. Contradictory as that sounds, it reflects a deeper assumption common in modern economic policy: policymakers believe they can engineer demand through spending, even when the funding for that spending suppresses demand in the first place.

Chicago recently approved an increase in its hotel tax, raising the rate from 17.5 percent to 19 percent in downtown and nearby areas. The explicit goal is to boost tourism by using the revenue to fund city tourism marketing. The city also created a Tourism Improvement District to fund its tourism organization. At 19 percent, the hotel tax is now among the highest in the United States.

The logic seems straightforward: spend more on promotion, attract more visitors, generate more economic activity. And since tourists do not vote in local elections, perhaps this is even politically painless.

But the policy rests on a major assumption — that demand for visiting Chicago does not respond much to price. Without that assumption, the policy works against itself.

All choices are made at the margin, and tourists are no different. Families planning vacations compare destinations. Convention planners weigh bids from multiple cities. Business travelers may extend or shorten stays based on cost. In all cases, price matters.

A hotel tax directly raises the cost of visiting. A few extra dollars per night may seem trivial in isolation, but travelers rarely book for just one person or one night. Consider conventions involving thousands of room nights. Whether for multi-night stays, getaways, or events, small differences can become decisive. Cities already compete aggressively for tourists and conventions through incentives, adjusted pricing, and other cost advantages — and Chicago has now changed that calculus.
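To make the margin concrete, here is a back-of-envelope sketch of what the rate increase does to a convention's hotel bill. The 17.5 and 19 percent rates are from the article; the $250 nightly rate and 4,000 room-nights are hypothetical figures chosen purely for illustration.

```python
# Illustrative only: tax rates are from the article; the nightly rate and
# room-night count are hypothetical convention figures.
nightly_rate = 250.0   # assumed average downtown room rate
room_nights = 4000     # assumed size of a mid-sized convention block

old_cost = nightly_rate * room_nights * 1.175  # at the old 17.5% tax
new_cost = nightly_rate * room_nights * 1.19   # at the new 19% tax

# The 1.5-point increase adds roughly $15,000 to this one booking.
print(round(new_cost - old_cost, 2))
```

Trivial arithmetic, but it shows why "a few extra dollars per night" stops being trivial once stays are multiplied across rooms and nights, which is exactly the margin on which convention planners compare cities.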

The problem runs deeper than simple price sensitivity. It also reflects circular logic.

Tourism relies on visitors choosing a city based on cost and value. A tax raises the cost of visiting — the very thing the city hopes to stimulate with tax-funded promotion. In effect, the city is trying to offset a price increase with more spending.

This might work if demand were inelastic and marketing fully compensated for the higher cost. But neither is likely. Marketing can inform potential visitors, but it cannot eliminate trade-offs. If Chicago is more expensive relative to other cities, marketing cannot erase that disadvantage — it can only try to work around it. This is a common error among policymakers: assuming spending can substitute for underlying value — even when the spending itself comes from higher costs.

But spending is not value. Tourism does not arise from marketing budgets; it comes from perceived value. Visitors choose destinations based on attractions, safety, convenience, and price — among other factors. Marketing can highlight value, but it cannot create it. Demand cannot be produced directly through spending. If costs rise, marketing can at best mask the problem temporarily.

Hotel taxes often fall on outsiders — tourists who cannot vote — so policymakers see them as convenient revenue sources. But these taxes are not free. Higher prices reduce demand, leading to fewer bookings, shorter stays, and lost conventions. Local businesses like restaurants and service providers bear part of the burden, too. The effects ripple through the entire tourism ecosystem.

Chicago might see higher tourism revenue after the tax. The city might fund visible campaigns or secure high-profile events. On paper, the tax might look like a success. But aggregate numbers can be misleading.

Total tourism revenue could rise even as Chicago loses marginal visitors to cheaper alternatives. Large events might still come, often due to subsidies, while smaller, price-sensitive travelers go elsewhere. The composition of visitors changes, even if totals hold. That is not sustainable and runs contrary to the city’s stated goals.

At its core, Chicago’s hotel tax raises a simple question: can you tax something into existence? The answer is no — a lesson governments seem unwilling to learn.

Tourism, like all market activity, relies on voluntary decisions. Visitors compare costs and benefits. Raising the cost of visiting creates a built-in tension that marketing cannot fully resolve. The method matters: you visit a city because it offers better value than alternatives — not because it spent more on promotion. At its root, this is not a marketing problem, but an economic one.

Educators continue to debate a question that sounds philosophical but is actually quite practical: when a student earns a diploma, what exactly have they earned? Is it proof of real, transferable, labor-market-ready skills? Or is it a signal, a flag planted in the employer’s field of view that says this person showed up, tried hard, and turned things in on time?

Most honest observers land somewhere in the middle. Yes, school teaches skills. And yes, the diploma itself also signals something beyond the skills taught. The degree is both product and receipt.

New research throws a wrench into both sides of that supposed balance. Grade inflation, the practice of awarding grades systematically higher than student performance warrants, manages the impressive feat of being bad for learning and bad for credentialing simultaneously. A teacher who bumps up students’ grades by roughly a quarter of a letter grade beyond what they earned costs the average classroom a cumulative $213,872 in lifetime earnings for each year of inflated grading.

With an average class size of roughly 21 students, that’s about $10,000 per student, evaporating into the ether of unearned A-minuses.
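The per-student figure follows directly from the two numbers above; a minimal check, using only the classroom loss and class size stated in the article:

```python
# Back-of-envelope check of the per-student figure using the article's numbers.
classroom_loss = 213_872   # cumulative lifetime-earnings loss per classroom
class_size = 21            # article's stated average class size

per_student = classroom_loss / class_size
print(round(per_student))  # → 10184, i.e. "about $10,000 per student"
```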

But wait, surely higher grades mean better outcomes? Here is where the research gets genuinely counterintuitive. Students taught by grade-inflating teachers are actually less likely to graduate high school within five years. They are less likely to enroll in associate’s or bachelor’s programs in the years that follow high school and are more likely to run up absences and suspensions. When grades stop meaning anything, the incentive to earn them, and even show up, disappears. Students may coast through inflated coursework only to arrive unprepared at high-stakes exams that no single teacher controls. The floor gives way precisely when it matters most.

Critically, this is not just a story about struggling students. The reduction in learning appears across the achievement distribution. High performers are not immune to dulled incentives, and lower-performing students are particularly likely to reduce postsecondary enrollment. When the signal gets noisy, everyone pays.

So why does grade inflation persist? Economics offers a cleaner diagnosis than moral outrage. Consider who actually bears the cost of grade inflation: universities trying to screen applicants and employers trying to hire them. Neither of these groups has any hand in how classroom grades are assigned. The parties who suffer the consequences have zero influence over the output.

Now consider who benefits, at least on the margin. Teachers who inflate grades face fewer complaints, less pushback from students and parents, and reduced pressure from administrators eager to boost school rankings. Students, individually, prefer higher grades for less work — even if, in the long term, they’re being robbed. Administrators face ranking systems that incorporate GPA, creating perverse incentives to inflate the numbers that feed those rankings. Everyone in the school has a small reason to let grades drift upward, and no one inside the building bears much of the cost.

Economists have a name for this predicament: the principal-agent problem. In such scenarios, those tasked with making decisions (teachers and administrators) operate with different incentives and better information than those who ultimately rely on those decisions (universities and employers). This incentive mismatch results in the agents, on the margin, prioritizing their own immediate goals — like reducing conflict, easing pressure, or boosting reported outcomes — over the later participants’ need for reliable signals of ability. This dynamic produces the predictable distortions we see in higher grades, making grade inflation less a moral failure than a structural one baked into misaligned incentives.

On top of the incentive dynamics, the system is stuck in a collective action trap. Imagine a single school decides to get serious about honest grading. Their students’ transcripts suddenly look worse than every competing school’s, not because those students learned less, but because they were graded honestly. The reform-minded school’s graduates would be penalized in admissions and hiring. Real reform requires many schools acting collectively, but no school wants to move first. So everyone keeps inflating.

The deeper lesson here isn’t that teachers are villains or students are lazy. It’s that incentive structures, left unexamined, produce outcomes that no individual actor would consciously choose. Solutions, then, must operate at the level where these incentive problems can actually be addressed, which likely means districts and states, not individual classrooms.

The most promising near-term fix is transparency: require transcripts to list the class average grade alongside each student’s individual grade. A B-plus in a class averaging a B is more meaningful than an A-minus in a class averaging an A-minus. Putting the grade in context can help restore its signal. This is an inexpensive and feasible fix that could be implemented tomorrow at the district level, neutralizing the first-mover problem.

An increased emphasis on standardized assessments, imperfect as they are, can also play a role. When used judiciously, they provide an external benchmark that is harder (though not impossible) to manipulate. Expanding their use as a complement to GPA could help colleges and employers better interpret academic performance.

For schools willing to take bolder action, forced grade distributions (requiring that grades cluster around a target average) remove the social pressure on individual teachers entirely. Many graduate programs already use this mechanism, and it particularly alleviates pressure for teachers to have high grades relative to their peers.

Colleges and universities could move decisively and require their admissions offices to publish their own historical GPA-to-outcome conversion rates by high school, effectively flagging institutions that inflate grades within the admissions market. Employers that track hiring outcomes could apply similar adjustments. Once these implicit discounts are made public, the incentive to inflate grades would begin to disappear.

None of these reforms will be easy. They require coordination across schools, districts, and possibly states. But the alternative is to continue down the current path, where grades become ever less meaningful and education ever less effective.

An inflated currency loses its value, and so do inflated grades. The only question is whether we fix the signal before the market fully stops believing it.

At one time, the rich could generally count on the Republican Party not begrudging them financial success, even of the outlying variety. That’s no longer the case. Arguments that such elites may be bad for America, and maybe just bad, period, now come from both sides of the political spectrum. Some even propose class genocide.

“Billionaires should not exist,” said Vermont Senator Bernie Sanders when introducing a plan for a new wealth tax.

In Why Democracy Needs the Rich, John O. McGinnis, a law professor at Northwestern University, offers a different opinion.

He didn’t title the book Why Our Economy Needs the Rich. McGinnis does include the standard case for the wealthy, that through hard work, risk-taking, and foresight, they make our shared economy more productive. Without Elon Musk, for example, Tesla wouldn’t be what it is. If its customers, employees, and the IRS all benefit from that, why shouldn’t Musk be rewarded?

In McGinnis’ book, that line of reasoning is an afterthought. His main concern is whether the wealthy, especially the very wealthy, make our democracy better than it would be without them.

It’s an important question because if being rich is wrong, then the US is wrong. As McGinnis notes, we are both the richest nation in the world and the richest per capita of any with a population over 20 million.

And while each person in our democracy has one vote, to expect that everyone will have equal influence on political outcomes is naïve. Some work harder at it. They form political action committees, knock on doors for a candidate, or run for office. Others have exceptional speaking skills or large social media platforms for promoting policies.

As McGinnis puts it, “elite influence in democracy is not only inevitable but often beneficial, channeling expertise and coherence into public debate.” Consequently, the political realm has its own “one percent” whose influence exceeds their numbers.

He identifies these elites as those holding influential positions in special interest groups, the government bureaucracy, and the clerisy — the latter including prominent celebrities, academics, journalists, and other members of what is sometimes called the cultural elite. The problem, McGinnis argues, is that these groups tend to skew left politically.

He offers data to support this claim. Among federal bureaucrats, 95 percent of donations in the 2016 presidential election went to Hillary Clinton. In journalism, a 2004 Pew survey found that liberals outnumbered conservatives five to one. In academia, McGinnis estimates the ratio of liberal to conservative professors at top universities today is likely twenty to one. Most strikingly, in the film industry, a study of political contributions from 996 leading actors, directors, producers, and writers found they supported Democrats over Republicans by a 115-to-1 ratio.

Such dominance is maintained, McGinnis believes, through gatekeeping that favors the training and hiring of, for instance, new academics and journalists who think like their superiors. And this is where the wealthy, who also possess outsized political influence, can improve things by being a democratic counterweight to entrenched left-leaning power.

There are many routes to acquiring wealth. “Unlike the intelligentsia,” McGinnis writes, “the wealthy cannot easily exclude individuals with unorthodox views from joining their ranks.” For that reason, the rich arrive at their positions from a variety of backgrounds, beliefs, and political leanings. For every George Soros, there is a Peter Thiel. For every Bill Gates, there is a Miriam Adelson. In other words, the wealthy look like America, ideologically speaking.

And contrary to popular belief, the rich are also a dynamic and constantly churning class, especially at the highest levels. McGinnis notes that almost 60 percent of those on the current Forbes 400 list were not on it twelve years earlier. And 90 percent of the grandchildren of the wealthiest one percent drop out of that lofty tier. Recently, the dynamism of the wealthy may even be on the rise. In 1982, 60 percent of the Forbes 400 came from wealthy backgrounds. That is only 32 percent today. McGinnis even questions the received wisdom that the rich are getting richer in relative terms. He notes that in 1937, John D. Rockefeller’s net worth was 1.5 percent of US GDP, almost the same as Elon Musk’s 1.6 percent share in 2025.

In making these points, McGinnis never decries the right of left-leaning elites to have outsized influence on our political process. He only claims that the wealthy serve as an important counterweight to them. “A democracy, like a tree, flourishes with many roots,” he writes. In a nation founded on freedom of thought and a never-ending contest of ideas, a fuller representation of national perspectives promotes better political outcomes.

It’s a nuanced argument, which McGinnis bolsters by noting that the financially successful tend to have a more pragmatic worldview than other elites, as their wealth invests them in the economic success of the nation while also insulating them from worry about disapproval.

The wealthy’s activities also spread benefits across the political spectrum, McGinnis argues. The rich are traditionally leading supporters of the arts and charity. The first hospital in the United States appeared in 1751 thanks to a group of successful merchants that included Benjamin Franklin. More recently, rich alumni helped Harvard University weather the storm of President Trump cutting off its public funding.

It seems everyone hates corporations these days, but that is nothing new. For more than a century, Americans have swung between denouncing large firms as predatory Leviathans and attempting to conscript them for nonbusiness ends. That process may now be entering a new phase — one with broader implications for whether America remains a free country.

In the Progressive Era, corporations were portrayed as extractive engines of class power, tolerated only if constrained by supposedly impartial regulators and administrative oversight. Since the New Deal, many of those same critics have shifted ground, arguing that corporations could be harnessed to advance environmental goals, collect taxes, deliver health insurance, impose maximum working hours, and pursue public priorities that legislatures had avoided, delayed, or even rejected.

Now the New Right has mounted its own indictment, charging corporate America with “woke” cultural coercion, economic disloyalty, and an unhealthy intimacy with left-wing regulators and the administrative state. The result is a curious consensus of hostility, in which corporations are cast either as tyrants or as sycophants, rather than as what they are in a free society: organizations that coordinate capital and labor to produce goods, services, and prosperity within the rule of law.

The Progressive Era attacks on corporations grew out of the massive expansion in economic activity following the Industrial Revolution. Local markets merged into a national economy, and firms scaled up in response. The federal government began regulating at the national level under the Constitution’s Commerce Clause, with the creation of the Interstate Commerce Commission and the passage of the Sherman Antitrust Act asserting authority over what was seen as harmful corporate conduct.

The perceived harm took many forms: corporate profit was equated with exploitation, and corporate power was viewed as an instrument of entrenched wealth and class division, sometimes even a tool of political corruption. Over time, new corporate sins were added — manipulation of consumers, suppression of workers’ rights, and eventually the perpetuation of inequality and environmental degradation. In effect, the American left developed a theory of corporate vice, holding that corporate incentives are inherently misaligned with the public good.

The application of this theory of vice led to several purported remedies. Foremost was regulation and the entire apparatus of three-letter agencies that today intrude into almost every area of life, in the name of democratic control. Equally important, if less conspicuous, was a growing suspicion of shareholder primacy and the emergence of the idea that markets are morally insufficient. The ultimate result of this theory gaining dominance was the New Deal, with not just restrictions on virtually every area of corporate activity, but also a direct attempt to use corporations to serve public ends.

The theory of vice eventually hit its limits. Courts and Congress applied some restraint, and thinkers like Milton Friedman persuaded many that ordinary corporate activity was not inherently suspect. By the late twentieth century, the American left had developed a new framework — a theory of corporate virtue.

This new theory held that corporations were not only morally redeemable but could advance broader social, economic, and political goals. It built on a key premise of the earlier theory of vice: that firms should be managed not solely for owners and investors but for all stakeholders, including society at large. Initially framed around corporate social responsibility, it evolved in the twenty-first century into ESG (environmental, social, and governance) and its subset, DEI (diversity, equity, and inclusion), which spread rapidly across corporate America.

As this theory took hold, corporations became vehicles for a wide range of initiatives. Diversity mandates reshaped hiring, climate priorities filtered through supply chains, and platform moderation influenced acceptable speech. Corporate activity itself became a form of political signaling. These efforts were reinforced by new internal structures — vice presidents of sustainability, proxy advisers, and external scoring systems.

By the time of COVID and the Black Lives Matter movement, much of corporate America and its surrounding institutions had embraced this framework. The older regulatory superstructure reinforced it. Firms that resisted could face political pressure, lawsuits, or penalties. Corporations became political actors not because markets demanded it, but because political incentives pushed them in that direction.

The political right has since mounted its own response, developing a rival theory of corporate vice. Much of it mirrors the left’s earlier critique. Corporations are now cast as coercive actors imposing social change that cannot win at the ballot box. Where regulation was once justified as democratic control, opposition to ESG reflects the same impulse in reverse — using state power to counter corporate influence.

This new critique also revives older themes. Claims that profit-seeking drove outsourcing echo long-standing labor arguments. Concerns about immigration — both low-skilled and high-skilled — reprise earlier critiques of corporate labor practices. These strands converge in the charge that corporations have “hollowed out” American communities and abandoned local ties.

The regulatory machinery built in the Progressive Era is now being redeployed in the opposite direction — against ESG and DEI. What regulators once encouraged, they now discourage through familiar tools: pressure, litigation, and penalties. The result is political whiplash. As administrations alternate, compliance burdens shift with the electoral cycle, and firms adjust accordingly.

This dynamic is predictable. Corporations respond to incentives, including political ones. When alignment with political power offers advantages, firms will adapt. As public choice economics suggests, political actors have incentives to expand their influence, not limit it.

There are signs, however, that the New Right is also developing its own theory of corporate virtue. In principle, such a theory could be constructive — emphasizing political neutrality, wealth creation, and a focus on core business functions within the rule of law. That approach would align with a traditional conservative view of enterprise.

In practice, the emerging version points elsewhere: toward protectionism, industrial policy, closer ties between firms and the state, and reliance on political patronage. This is a different form of corporate entanglement — less ideological, perhaps, but no less political.

The consequences are similar. When firms prioritize political alignment over serving customers and investors, resources are misallocated and incentives distorted. It is a formula not for growth, but for stagnation.

A better path is a classical liberal theory of corporate virtue: firms exist to coordinate labor and capital for productive ends; their social contribution is wealth creation within the rule of law; profit signals value creation rather than moral failure; and business and politics should remain separate. Regulators should set stable, predictable rules — not direct outcomes — and market discipline should guide behavior.

The choice should be clear, and a return to mission-focused enterprise depends on making it. Free enterprise, not political enterprise, built America, and it remains the only path to sustaining it.

“I hunted for, and stole, a source of fire … and it has shown itself to be mortals’ great resource and their teacher of every skill.”

So says Prometheus, the Titan of Greek mythology, in Aeschylus’s Prometheus Bound, explaining why he suffers in chains. For giving fire to mankind, he was condemned to eternal torment, bound to a rock while an eagle fed upon him each day. Fire was not merely warmth. It was power, independence, production, protection, and the first great escape from literal and figurative darkness. The human story began to change not when mankind learned restraint, but when it learned mastery. Civilization began not with renunciation, but with defiance.

Atop this civilization rests an odd, yet revealing modern ritual: Earth Hour. Today, Saturday, March 28, 2026, at 8:30 p.m. local time, people around the world will again be asked to switch off their non-essential lights for one hour. Organized by the World Wildlife Fund (WWF), founded in 1961, Earth Hour is meant to dramatize concern for nature and the conservation of the planet’s resources. The campaign now marks 20 years and includes landmarks such as Christ the Redeemer in Rio de Janeiro, the Sydney Opera House, and the Empire State Building in New York City.

Originally a grassroots movement, Earth Hour now presents itself as “a symbol of hope for nature and climate.” Lofty appeals to help nature and wildlife recover, reduce deforestation, and protect future generations now accompany the annual ritual of switching off the lights on an otherwise unremarkable Saturday in March. Yet even on its own terms, the story is less straightforward than the rhetoric suggests. As Song et al. wrote in a 2018 Nature study, “contrary to the prevailing view that forest area has declined globally—tree cover has increased by 2.24 million km2 (+7.1% relative to the 1982 level).”

The point is not that every environmental problem has vanished, but that global improvement does not always depend on a mass movement of symbolic austerity. Earth Hour’s gesture remains simple enough: dim the world briefly to express concern for the planet. But that symbolism points, perhaps unintentionally, to a deeper truth. Turning the lights off is easy. The true achievement of civilization was learning how to turn them on in the first place. If future generations are to inherit a better world, they will need more than rituals of restraint. They will need the abundance, safety, and human progress that only widespread access to energy can provide.

That is where rugged individualism shines most brightly in history. Thomas Edison and Nikola Tesla were not men of managed consensus, but both belonged to the same civilizational current: the transformation of electricity from scientific possibility into mass reality. Their fierce competition in the late 19th century sparked invention after invention. Edison’s incandescent lamp patent, US Patent No. 223,898, was issued on January 27, 1880; two years later, his Pearl Street Station began selling electricity in lower Manhattan. Tesla’s great leap came in 1888, when George Westinghouse purchased the rights to his polyphase alternating-current system, helping launch the battle of the currents and laying the groundwork for long-distance power transmission.

The true genius of capitalism was not merely to generate power, but to conduct it outward until light, warmth, and safety ceased to be luxuries for the few and became ordinary facts of life for the many. More than a century later, we still live inside the world that this rivalry charged into existence.

Electricity did not merely give cities more light. It gave them more order. In New York City, added street lighting has been associated with significant reductions in nighttime crime, including assaults, homicides, and weapons offenses. It also gave them greater protection from the elements. The Health Department reports that more than 500 New Yorkers die prematurely each year because of hot weather, with lack of air conditioning being the clearest risk factor for heat-stress death. Furthermore, electricity made cities more productive, not less. Research on US manufacturing shows that electrification raised labor productivity by reorganizing production around more efficient machinery and factory layouts. Light, warmth, safety, and output: these were the real gifts of electrification.

It is precisely this history that makes today’s sneers at rugged individualism sound so hollow, especially in New York City. For example, in his inaugural address on January 1, 2026, Mayor Zohran Mamdani promised to replace “the frigidity of rugged individualism with the warmth of collectivism.” But in the very city where Edison’s Pearl Street Station began selling electricity in 1882, that line reverses cause and effect. After a winter that brought one of New York City’s longest freezing stretches since 1963, the real source of warmth was not collectivist poetry, but the electric infrastructure that competition, capital, and invention made possible. If collectivism had accomplished even half of what competition did, New Yorkers might still be warming themselves by candlelight while calling it moral progress. 

For one hour each year, Earth Hour asks the world to rehearse darkness. But from Prometheus onward, the human story has been one of escaping it. Fire, then electricity, enlarged human freedom. The achievement worth honoring is not symbolic dimness, but the civilizational brilliance that made light ordinary.

In the United States, cloud seeding has long been a subject of controversy. The process involves releasing small quantities of compounds such as silver iodide (AgI) into the atmosphere, causing clouds to produce rain or snow. Critics call it “weather modification,” but cloud seeding is a moderate and cost-effective effort to enhance rainfall that can benefit the water-strapped Southwest by fortifying its water supply.

Although cloud seeding is used regionally, it has faced significant backlash. Skeptics point to health risks, flooding, and ethical objections magnified by conspiracy theories rather than scientific evidence. Yet research shows that the chemical concentrations used in cloud seeding are below dangerous thresholds, and there is no credible evidence linking it to floods.

An increasing number of states are working on legislation to restrict or outright ban this form of “geoengineering,” including a bill circulating in Arizona. Nine western states currently use cloud seeding to supplement their water portfolios, benefiting farmers and communities drawing from dwindling reservoirs and shrinking aquifers.

Rather than banning innovation in water management, states should encourage it. Cloud seeding offers a high return on investment at a fraction of the cost of permanent water infrastructure. It is most effective when driven by local and private investment and, when implemented correctly, can deliver meaningful results. 

By contrast, large infrastructure projects promise long-term water supply but require years of permitting and construction, massive upfront capital, and costly operations. Dismissing cloud seeding in an era of billion-dollar water proposals is both imprudent and wasteful.

Desalination starkly illustrates these trade-offs: heavily regulated, capital-intensive, and slow to deploy. California’s Carlsbad plant, one of the largest in the U.S., faced years of regulatory delays and cost roughly $1 billion to build. The plant’s energy-intensive water processing has led to an annual operating cost of up to $59 million.

In contrast, cloud seeding is a cost-effective, flexible alternative, with annual costs ranging from $5 million to $7 million and adjustable by season.
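
As a back-of-the-envelope check on the figures above, the gap between the Carlsbad plant's cited annual operating cost (up to $59 million) and a cloud seeding program's annual cost ($5 million to $7 million) can be sketched in a few lines. The numbers are taken directly from this article and are illustrative only:

```python
# Illustrative comparison using only the annual-cost figures cited above.
# These are the article's numbers, not independent estimates.
desal_annual_cost = 59_000_000                 # Carlsbad plant, upper-end annual operating cost ($)
seeding_low, seeding_high = 5_000_000, 7_000_000  # cloud seeding program range ($/year)

# Ratio of desalination operating cost to cloud seeding cost
low_ratio = desal_annual_cost / seeding_high   # vs. the $7M high end
high_ratio = desal_annual_cost / seeding_low   # vs. the $5M low end

print(f"Desalination costs roughly {low_ratio:.0f}x to {high_ratio:.0f}x "
      "more per year to operate than a cloud seeding program")
```

Even before counting the roughly $1 billion in construction capital, the operating budget alone runs about an order of magnitude higher.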

Research from North Dakota State University shows that cloud seeding can boost rainfall by five to ten percent at just 40 cents per planted acre. It benefits southwestern agriculture — especially water-intensive alfalfa — without draining overstressed groundwater or requiring costly infrastructure projects.

Like many economic issues, water management faces a knowledge problem. While bans on cloud seeding are imprudent, statewide mandates are also flawed because they fail to consider local water conditions. Private and local investment would better assess water needs. Large western states with diverse environments experience significant regional variation in precipitation patterns.

For example, Hiouchi, California, averages 79.31 inches of rain annually, while Stovepipe Wells receives only two inches. These differences in rainfall make fixed targets ineffective. Locally informed approaches enable communities and private businesses to adapt to weather conditions, rather than relying on fixed goals.

Privately and locally funded cloud seeding programs date back to the early pioneers of the industry. North American Weather Consultants (NAWC) has operated since the 1950s, providing services to water districts, municipalities, universities, and private companies. Ski resorts in Colorado and Utah also use cloud seeding to boost snowfall for recreational needs.

The long history of small-scale, decentralized programs demonstrates that local operations can meet water needs effectively without statewide mandates. State governments should regulate cautiously rather than stifle yet another tool for strengthening local water supplies.

Private investment has also driven innovation in weather modification, making research and development more impactful. Public funding, by contrast, often slows progress with regulatory red tape, appropriation limits, and political constraints. When federal support for cloud seeding was sharply reduced in the 1980s, private, local, and state funding became essential to sustain technological advances.

Even traditional water infrastructure faces political hurdles. In 2022, the California Coastal Commission rejected the proposed Huntington Beach desalination plant despite years of planning. By contrast, private cloud seeding operations have long enjoyed the autonomy to experiment and refine their methods — without leaving taxpayers responsible for uncertain outcomes.

Private firms such as North American Weather Consultants and Weather Modification Inc. have driven innovation for decades, incorporating radar-guided weather tracking, modeling, hybrid ground-and-air deployment, and aircraft to improve timing, make operations more efficient, and monitor results. 

Cutting-edge startups like Rainmaker have introduced autonomous drones for dispensing precipitation-enhancing chemicals.

It was private companies incentivized by performance and market demand, not federal grants or fickle political priorities, that made these innovations a reality. If companies are free to respond to the market, little federal involvement is needed.

Cloud seeding might be shrouded in controversy, but state governments shouldn’t ban it; they should embrace it. Cloud seeding is cost-effective, easily adaptable to regional water needs, and can be successful if it isn’t crushed by overbearing regulation. 

In an age of water scarcity, limiting effective solutions is costly — especially for arid, landlocked western states that would benefit from an additional source of water.

For over two decades, gold’s role as a staple investment has grown more pronounced in the global financial system. Since 2000, the commodity has outperformed all major US stock indices. It has preserved purchasing power, protected investors during crises, and hedged against policy shifts.

The forces propelling gold higher today extend beyond its safe-haven status. A mix of technological change and geopolitical restructuring is reshaping how investors view gold. The result is a powerful combination of structural demand and constrained supply. These conditions help explain gold’s strong performance and why many believe its appeal is far from over. 

Below are thirteen major forces shaping the modern gold market.

1. Safe Haven in a Crisis 

Gold is a store of value. When currencies depreciate and governments falter, gold is the primary place of refuge for concerned investors. That reputation drives demand and pushes capital flows into gold during uncertain times.

2. Geopolitical Concerns 

Global tensions remain a powerful catalyst. Conflicts in Eastern Europe, instability in the Middle East, and shifting power dynamics in Asia have increased demand for assets that exist outside political control. Gold has been a major beneficiary of this environment.

3. Preservation of Purchasing Power 

History offers a striking comparison: roughly 200 ounces of gold bought an average home decades ago, and roughly the same amount still does today. While prices in dollars have changed dramatically, gold has preserved long-term real value. This property continues to attract investors seeking protection from currency debasement.

4. Central Bank Accumulation 

Some of the biggest buyers of gold are central banks and governments. Many of them are diversifying their holdings from currencies to hard assets. This shift reflects concerns about debt levels, currency risks, and geopolitical tensions. Central bank purchases have become a significant source of demand in the market.

5. Expanding Sovereign Debt 

Public debt has risen substantially around the globe. The US now carries a level of debt significantly higher relative to GDP than in previous decades, and other large economies are under similar pressures. This could reduce confidence in long-term currency stability, making gold an attractive store of value.

6. Structural Policy Divides Across the World 

Differences in trade, regulation, energy, and industry policies have divided the world economically. With each economic bloc pursuing its own set of priorities, uncertainty in financial markets rises. Gold performs well in such an environment, where coordination is low and perceptions of risk are high.

7. Lower Growth and Structural Economic Changes

In some developed countries, productivity growth has slowed while regulatory complexity has increased. Some investors see this environment as less supportive of capital growth and profitability. Lower growth expectations have, in turn, driven greater allocations to defensive assets.

8. Strong Relative Performance 

Gold has outperformed the major US equity market indices from 2000 through the mid-2020s, beaten inflation, and grown at a rate that well surpassed economic growth. Even in times when markets experienced strong rallies, gold performed well.

9. Global Reserve Rebalancing and Dedollarization

New economic blocs, such as the BRICS countries (Brazil, Russia, India, China, South Africa), have increased their gold reserves as part of their reserve diversification policies. The US dollar is still the leading reserve currency, but its share of global reserves has been gradually falling in recent years, while the share of gold has risen correspondingly.

10. Technological and Industrial Demand 

Gold is a financial asset and an industrial metal. It is highly conductive and corrosion resistant. It is an essential component in the electronics industry, supercomputing infrastructure, and manufacturing. As technology advances, industrial demand places structural pressure on supply.

11. Digital Assets 

Digital asset markets are beginning to use gold as collateral. Many stablecoin issuers now hold substantial gold reserves alongside traditional securities. Stablecoin adoption has driven capital flows that support the underlying commodities to which they are pegged.

12. Portfolio Diversification and Low Correlation 

Gold has always been known for its low correlation with stocks and bonds. When stocks fall sharply, gold often moves in a different direction. Consequently, institutional investors are increasingly recognizing the role of gold as a diversifier and not as a speculative asset. 
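
The diversification effect described above follows directly from the standard two-asset portfolio variance formula. The volatilities, weights, and correlation below are hypothetical round numbers chosen only to illustrate the mechanics, not estimates of actual market behavior:

```python
import math

def portfolio_vol(w1, sigma1, sigma2, rho):
    """Annualized volatility of a two-asset portfolio (weights sum to 1)."""
    w2 = 1 - w1
    variance = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
        + 2 * w1 * w2 * rho * sigma1 * sigma2
    return math.sqrt(variance)

# Hypothetical inputs: 15% equity volatility, 14% gold volatility.
equities_only = portfolio_vol(1.0, 0.15, 0.14, 0.05)
# An 80/20 equity/gold mix with near-zero correlation between the two assets.
with_gold = portfolio_vol(0.8, 0.15, 0.14, 0.05)

print(f"Equities only: {equities_only:.1%}; with 20% gold: {with_gold:.1%}")
```

Because the cross term is scaled by the correlation, a low-correlation asset reduces total portfolio volatility even when its own volatility is close to that of equities — which is why institutions treat gold as a diversifier rather than a speculative position.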

13. Demand Continues to Outpace Supply 

Worldwide demand has been at record levels in the past few years. The rate of growth in mining production is low, and new discoveries are few. As demand grows at a rate that exceeds supply, prices are likely to move higher. 

The Bigger Picture

Investors often turn to gold for wealth preservation and long-term appreciation. But recent price action suggests there is more to gold than meets the eye.

Gold has been rising even when equity markets perform well, real interest rates increase, and inflation remains moderate. This suggests gold is being driven by forces beyond traditional crisis-related demand.

Gold now sits at the crossroads of monetary policy, geopolitics, technology, and broader changes in the global financial system.

Gold’s Expanding Role

Gold’s rise reflects more than fear or inflation. It reflects a world in transition. Governments are managing higher debt. Financial systems are evolving. Technology is expanding industrial demand. Reserve strategies are shifting.

Investors continue to seek assets that hold value outside political and monetary systems. Unless these underlying forces reverse in a meaningful way, gold’s role in global finance is likely to remain strong.

America’s fiscal and monetary problems look like two separate crises. They aren’t. Runaway government spending and an unruly Federal Reserve are two sides of the same coin. When Congress spends beyond its means, it creates pressure on the central bank to print money and paper over the debt. When the Fed operates without clear rules, it becomes the silent enabler of fiscal recklessness. Fix one without fixing the other and you haven’t solved anything. That is where we find ourselves today.

As I argued in my first book, the Fed has a rule problem: It doesn’t have one. For decades, monetary policymakers have operated under broad discretionary authority, adjusting interest rates and the money supply based on their judgment about what the economy needs. The results have been disappointing.

The case against discretionary monetary policy runs along two tracks: one about competence and one about legitimacy.

Start with competence. Central bankers face serious information problems. The economy is vast and complex, and the signals it sends are noisy. Policymakers receive data that is incomplete, revised, and often contradictory. By the time the Fed diagnoses a problem and adjusts policy, the underlying conditions may have already changed. Discretion sounds like flexibility. In practice, it often means groping in the dark.

But information problems are only half the story. Incentive problems compound them. Bureaucracies develop institutional interests of their own. The Fed, like any government agency, responds to political pressures, professional norms, and the priorities of its leadership. Monetary economists — the experts who advise the Fed and evaluate its performance — constitute their own interest group. They have professional stakes in a powerful, discretionary central bank. And then there’s perhaps the biggest incentive problem of all: the looming threat of fiscal dominance. It’s time to stop thinking about monetary policy in a vacuum.

There is a deeper question here, as was recognized almost 50 years ago by economists Thomas Sargent and Neil Wallace: are fiscal policymakers or monetary policymakers in the driver’s seat? When Congress and the Treasury spend freely and accumulate debt, they create pressure on the central bank to monetize that debt. If the fiscal authority “moves first” and the Fed “follows,” then monetary policy becomes an instrument of fiscal control, not an independent check on inflation. That is precisely what happened after 2020. The government spent at wartime levels even as the emergency receded, and the Fed soon accommodated. Inflation naturally followed.

So the problem is not simply that the Fed made mistakes. It is that the institutional structure invites those mistakes. A discretionary Fed embedded in a debt-heavy fiscal environment will tend to prioritize the short-term over the long-term, accommodation over restraint, and political convenience over monetary discipline.

The solution is a Fed regime change. We need actual legislation to change the central bank’s mandate. Administrations change. Personnel change. But laws can become, as James Buchanan put it, “relatively absolute absolutes.” If Congress replaces the Fed’s current mandate, which includes employment and interest rate targets alongside price stability, with a single, clear mandate for price stability, the Fed can credibly commit to refrain from underwriting future deficit spending. Congress can’t count on the Fed bailing it out if the Fed’s price level target limits the printing press.

The goal is not to make the Fed powerless but to make its power legible and therefore predictable. A rule-bound Fed, focused solely on price stability, empowers planning by businesses and households. It rewards saving. It discourages the kind of speculative boom-and-bust cycles that discretionary policy tends to produce. And it will force fiscal policymakers to get their mismanaged affairs in order.

Other proposed solutions won’t work. First, we should reject presidential control over monetary policy. Giving the executive branch direct authority over interest rates would politicize money even further. Second, simply appointing more “conservative” central bankers offers no durable fix. Hawkish Fed chairs come and go; without a reformed mandate, the institutional logic reasserts itself.

Inflation has cooled from its recent peaks and deficits are not as high now as during the COVID period, yet the underlying institutional dysfunction remains. The Fed is still improvising, still subject to fiscal pressure, still operating without the kind of clear rules that would make its behavior predictable and its decisions defensible. Monetary policy by bureaucratic fiat is not good enough. To prevent money mischief and fiscal folly, only the discipline of rules will do. The solution is a single mandate: price stability alone.

America has spent more than $20 trillion on fighting poverty since the introduction of President Johnson’s Great Society program in 1964. Sixty years later, how are we doing?

That depends, as it turns out, on how you measure it.

Last month, Senator Kennedy (R-LA) introduced a bill that would require the Census Bureau to report a new poverty metric as an alternative to the Official Poverty Measure (OPM) by including both cash and non-cash welfare benefits in its calculations.

As Kennedy points out, this is a much-needed fix. The OPM’s methodological weaknesses are well documented. Most notably, it ignores the hundreds of billions of dollars the government spends each year to assist low-income families through tax credits like the Earned Income Tax Credit and in-kind transfers such as Medicaid, food stamps, and housing subsidies. It also overstates inflation and relies on outdated assumptions about food spending. In short, the OPM paints an egregiously inaccurate picture of material poverty in America.

When one includes taxes and transfers, as economists Richard Burkhauser and Kevin Corinth did in a recent paper for the National Bureau of Economic Research, the “full-income” poverty measure sat at just 3.7 percent in 2023 — 1.6 percent after including employer-provided health insurance — a far more optimistic look than the OPM’s 11.1 percent from the same year.

That sounds like a triumph. But Burkhauser and Corinth take it one step further and use their “full-income” measure to track changes in the poverty rate dating back to 1939. 

Contrary to popular belief, they find that the greatest era of poverty reduction happened before Johnson declared war on it.

From 1939 to 1963, absolute full-income poverty plummeted by 29 percentage points, from 48.5 percent to 19.5 percent. Then, despite the government pouring trillions of taxpayer dollars into combating poverty, poverty fell by only 15.7 percentage points from 1963 to 2023. Barely half the progress in more than twice the time.
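
The “barely half the progress in more than twice the time” claim is simple arithmetic on the percentage-point figures above. A quick sketch, using only the numbers cited in this article:

```python
# Percentage-point declines in full-income poverty, per the Burkhauser
# and Corinth figures as cited in this article.
pre_war_decline = 48.5 - 19.5       # 1939-1963: 29.0 points over 24 years
post_war_decline = 15.7             # 1963-2023: 15.7 points over 60 years

pre_rate = pre_war_decline / (1963 - 1939)    # points per year before the War on Poverty
post_rate = post_war_decline / (2023 - 1963)  # points per year after

print(f"Before 1964: {pre_rate:.2f} points/year; after: {post_rate:.2f} points/year")
```

On an annualized basis, the pre-1964 pace of poverty reduction was more than four times the pace achieved after the War on Poverty began.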

But the stagnating decline is only half the story. The more consequential difference is what drove it. 

Before 1964, the main engine of poverty reduction was increases in market income — a measurement that includes wages, salaries, and other forms of income from employment. From 1939 to 1959, market income poverty fell by 26.1 percentage points, nearly all of the 27.3-percentage-point decline in full-income poverty among working-age adults over the same period. In short, before the rapid expansion of the welfare state, most people were earning their way out of poverty.

After 1964, that engine stalled. Market income poverty fell by just 3.9 percentage points from 1967 to 2023, while post-tax, post-transfer poverty fell by 10 percentage points. Even though poverty has continued to decline over the past six decades, most of that was due to the ever-expanding generosity of government transfers.

While low-income Americans were benefiting from the biggest poverty reduction in the country’s history, the percentage of working-age adults relying on government transfers for more than half their income decreased from 2.9 percent in 1939 to 2.7 percent in 1959.

By 2023, this number had nearly tripled to 7.6 percent, even reaching as high as 15 percent in some years.

As Mercatus scholar Jack Salmon put it: “The War on Poverty changed the how of poverty reduction, but it didn’t accelerate the how much.” 

If anything, by changing the former, it may have blunted the latter. A 76 percent increase in real median income, paired with rising employment and higher productivity, all driven by rapid postwar economic expansion, pulled more people out of poverty in 24 years than trillions of dollars in government-imposed wealth redistribution have done in 60.

Some may argue that this trend is to be expected. After all, reducing poverty from 48 percent to 20 percent is arithmetically easier than reducing it further because there are simply fewer people left below the poverty line, and those who remain tend to face the most entrenched barriers to self-sufficiency.

Fair enough. But as Burkhauser and Corinth point out, full-income poverty largely stagnated starting in the 1970s — right as welfare spending was ramping up dramatically. In short, taxpayers have been paying for a multitrillion-dollar boondoggle that has yielded increasingly diminishing marginal returns. 

So, what was the main driver behind the pre-1964 miracle? Simple: Economic growth.

The pre-1964 record, along with centuries of evidence, suggests that nothing has worked better than economic growth in helping individuals, especially those at the bottom of the income ladder, to achieve a higher quality of life. Across the world, economic growth driven by liberalization helped pull almost one billion people out of extreme poverty from 1990 to 2010.

Here at home, the pattern still holds. The Fraser Institute’s research shows that North American states with higher and increasing levels of economic freedom tend to have higher income growth and employment, more income mobility, especially among low-income households, higher economic growth, less homelessness, and lower levels of food insecurity.

The fruits of economic growth are visible in ways that poverty statistics fail to capture, especially for America’s poor. As Joseph Heath points out, 95 percent of American households below the poverty line have electricity, indoor plumbing, a refrigerator, a stove, and a color television. More than 80 percent have an air conditioner and a cell phone, and two-thirds own a washing machine and dryer. Economic growth, not government programs, is what made these once-luxury goods — previously unavailable even to many wealthy households — accessible to nearly everyone. It continues to bear fruit today: wages for typical American workers are at all-time highs.

The most powerful anti-poverty program had no enrollment forms, caseworkers, or spending bills. It was a growing economy that helped millions of people earn their way to a better life. As such, subsequent efforts should focus on removing government-created barriers to economic growth, occupational opportunities, and job market entry rather than adding another layer of expensive, inefficient wealth transfers.

Senator Kennedy is right to say we need a more accurate measure of poverty. When analyzing the best ways to combat poverty, policymakers should reflect on whether the welfare state was ever the right tool for the job.

The extended partial government shutdown has led to long lines of frustrated passengers at airports nationwide as unpaid Transportation Security Administration (TSA) agents walk out. Officials even warn that small airports may shut down due to the absences. If we for a moment disregard the Washington Monument syndrome likely also at play, the lesson to be learned here is not the importance of funding government services — but the exact opposite.

The TSA has a long history of failing to such a degree that it could never survive had it not been run by and within the government. Costing taxpayers and travelers $10 billion annually, not counting the inconvenience and time lost, the agency fails even on its own terms. The failure rate in 2015 was over 90 percent. The same in 2017. If these data seem dated, it is because they are. Instead of fixing the problems, the results of the agency’s internal testing were classified. In the absence of data, the only reasonable interpretation is that the agency remains a catastrophic failure to this day.

The recent airport chaos stresses how the security theater has become an unbearable bottleneck. It also stresses how dysfunctional government services become problematic beyond the waste of resources and the inconveniences they cause. The difference between government services and market solutions offered by businesses is stark. A private business that fails to deliver loses customers, and therefore both revenue and market share. Its failure is its own problem, which is a strong incentive to fix it.

As a government agency, the TSA’s failure is not its problem but is instead shifted onto travelers (their “customers,” as it were), who are, in some cases, left waiting six hours in line to get through the security checkpoint. In fact, this failure can easily be construed as a benefit for the TSA, which now — because the government requires all passengers to pass through its bottleneck — has leverage to demand more funding. As a result, the destruction wrought by dysfunctional government becomes an argument for more of it, and taxpayers are left with the bill.

The arguably zero value added by the TSA’s security theater thus becomes a self-reinforcing bloating of the bureaucracy, making the agency an ever-expanding jobs program that burdens taxpayers while harassing travelers.

Imagine if security had instead been the responsibility of airlines. Rather than cause constant delays and inconvenience, it would be in the airlines’ interest to streamline the process and make it as unobtrusive as possible. A failure to staff security functions would not be travelers’ (customers’) problem but the airlines’, which benefit only when we fly — and remain liable to keep travelers safe. The TSA has no such responsibility.

But a government service is worse than what can be explained by destructive operative incentives. We often fail to realize that what exists in the present is a result of developments in the past and that the future too will be different. In other words, the market economy as well as society overall are processes in constant flux, not a static state. Privately provided security would, just like any other service offered in the market, be subject to constant innovations — creative destruction, as economist Joseph Schumpeter called it. 

Creative destruction is the power of disruptive entrepreneurship to cause leaps of improvement. As entrepreneurs introduce innovations that bring great benefit, consumers abandon the solutions they previously chose to use. For example, when Henry Ford introduced the Model T, people flocked to the affordable automobile — the greater value — and stopped relying on horses and carriages. The automobile became the new, higher standard for transportation. Automobile manufacturing and gas stations replaced horse breeders and stables. 

We would thus see continuously improved security measures provided at lower cost — taking less time and being more convenient for travelers. The value to airlines is that it benefits their customers. It’s a competitive advantage and a value-add.

The very opposite is true for government services such as the TSA. They gain nothing from providing the service they are tasked with effectively and efficiently. Indeed, if the TSA found ways of reducing costs, its budget would likely be cut in response. The agency would effectively be penalized for improving.

And therein lies the crux: government agencies have little or no incentive to serve the users of their service. But private businesses stand and fall by providing customers with value. It is no surprise, therefore, that airport security is a hassle and inconvenience — and that it is expensive. The TSA is a bureaucracy and a jobs program that does not keep us safe. 

Recognizing this fact helps us understand the chaos at airports. More funding would do more harm than good.