The US seizure of Venezuelan leader Nicolás Maduro is being framed publicly as a counternarcotics and democracy-restoration operation. But it is oil — not cocaine or fentanyl — that sits at the center of events. Venezuela’s vast reserves, its role in gray and black energy markets, and its position within a broader geopolitical contest over oil supply explain far more about the timing and scope of the intervention than narcotics enforcement ever could.

Venezuela is no longer the oil superpower it once was. Production has collapsed from more than three million barrels per day in the late 1990s to under one million today, placing the country outside the top tier of global producers. Still, oil remains the backbone of the Venezuelan economy, accounting for roughly 95 percent of export revenue. In a world where energy markets are increasingly shaped by sanctions, supply fragmentation, and political risk, even marginal barrels matter — especially when they are sold at a discount and routed outside formal channels.

In recent years, Venezuelan oil has flowed largely into opaque markets, particularly to China, often via intermediaries and “ghost ships” that mask origins to evade sanctions. These barrels are not priced at global benchmarks; they are sold cheaply, quietly, and strategically. The result is not simply lost revenue for Caracas, but distorted price signals across the global oil market. Interventions disrupt price discovery. Sanctions do not eliminate supply — they reroute it into less-transparent channels, where prices convey less information and capital allocation becomes more politicized.

The US blockade and seizure of sanctioned tankers, along with the disruption of naphtha imports used to dilute Venezuela’s heavy crude for transport, had already begun constraining production even before the military operation. Storage tanks filled, wells were shut, and exports stalled. Yet global oil prices barely moved. That muted response reflects a market already awash with supply and conditioned to treat Venezuelan output as unreliable. Oil markets have learned to discount politically fragile production, which means that sudden interventions often have less immediate price impact than policymakers expect.

The longer-term implications, however, are more significant. A successful political transition followed by large-scale foreign investment could eventually bring Venezuelan production back toward its pre-collapse levels — perhaps to 2.5 million barrels per day over several years. That would represent a meaningful supply shock, potentially lowering global oil prices by several percentage points over time. Such an outcome would benefit refiners, particularly in the US, that are configured for heavy crude, while putting downward pressure on higher-cost producers elsewhere.

But that optimistic scenario rests on fragile assumptions. Oil production is not simply a matter of drilling holes; it requires institutional stability, secure property rights, skilled labor, functioning infrastructure, and credible contracts. Venezuela’s oil collapse was not caused by geology, but by decades of state control, politicized management, expropriation, and capital flight. Reversing that damage will take time and discipline.

There is also a broader pattern worth noting. Within a single week, the United States has exerted escalating pressure on three oil-producing nations across three continents: Venezuela, Iran, and Nigeria. Whatever the specific justifications in each case, the pattern suggests a strategic shift. A decade ago, Donald J. Trump rose to prominence as an anti-interventionist critic of foreign entanglements. Today, the US is asserting itself as an active enforcer of energy order, using sanctions, seizures, and force to reshape supply flows.

This shift matters, among other reasons, because energy markets thrive on decentralized discovery and suffer under centralized control. When oil becomes an explicit instrument of geopolitical maneuvering, prices reflect power as much as scarcity. Capital flows follow political signals rather than entrepreneurial ones. The result is not necessarily higher prices, but noisier ones: prices that convey less reliable information about underlying supply and demand.

Discounted oil sold into black markets sustains regimes, finances patronage networks, and reshapes global trade patterns. Controlling that flow is economically consequential in a way that narcotics interdiction rarely is. Whether the US intervention ultimately stabilizes Venezuela or entrenches a prolonged foreign presence, its lasting impact will be felt less in Caracas politics than in the structure — and credibility — of global oil markets.

W.E.B. Du Bois was born in Great Barrington, Massachusetts (where AIER is now headquartered), in 1868. Today, this towering figure of the early civil rights movement is remembered as a groundbreaking sociologist, Pan-African socialist, and near-mythical hero to the intellectual left.

“He’s a reformist,” philosopher Cornel West told a classroom of Dartmouth students in a 2017 lecture on Du Bois’ long path to becoming a revolutionary. “But he’s a radical reformist, no doubt.”

But there was once a W.E.B. Du Bois who was radical mainly in the scientific sense. Before drifting into the study of history and sociology, he was an economics student at Harvard. The marginal revolution had just remade the dismal science into a more mathematical and literally “edgy” subject. And Du Bois made original contributions that leveraged insights from the free-market Austrian school and anticipated later developments in neoclassical economic thought, as Daniel Kuehn explains in a recent paper published in the Journal of Economic Perspectives.

Similarly, the young Du Bois’ recommendations for black racial uplift bore surprising similarities to those of the modern-day conservative economist Thomas Sowell. What caused his later radicalization? It was arguably a tragedy of racism.

Du Bois’ maternal great-great-grandfather was born in Africa and enslaved in America. But in the late 1700s he gained his freedom, possibly by fighting in the American Revolution. By the time Du Bois was born in Great Barrington, the town had a small but largely integrated black population. Du Bois’ mother (his father had abandoned the family when Du Bois was a toddler) owned land, and he learned and played at the public school alongside white kids. In 1888, having already studied at historically black Fisk University, he became only the sixth African American student to matriculate at Harvard.

Studying under Frank Taussig, Du Bois wrote a 158-page essay titled A Constructive Critique of Wage Theory. It included a thorough review of Carl Menger, one of the drivers of the marginal revolution, and his insight that the market value of goods and services does not depend on the value of inputs, but rather on the value that consumers place on the last, or marginal, unit of output.

In his essay, Du Bois built on such work and rigorously demonstrated what Kuehn terms “a statement of wages as equal to the marginal revenue product… Du Bois identifies this need to think in terms of what would ultimately be called the marginal revenue product of labor.”

Kuehn goes on to note that Du Bois provides “one of the earliest acknowledgements that a labor-leisure trade-off determines individual labor supply in the marginalist framework.”
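
Stated in modern textbook notation (a restatement under standard marginalist assumptions, not Du Bois’ own symbols), the two results Kuehn highlights amount to the following:

```latex
% Wages equal the marginal revenue product of labor: the revenue gained from
% one more unit of labor is marginal revenue times labor's marginal product.
w = MRP_L = MR \times MP_L

% Labor-leisure trade-off: with utility U(c,\ell) over consumption c and
% leisure \ell, wage w, and time endowment T, the worker chooses hours
% h = T - \ell so that the marginal rate of substitution equals the wage.
\max_{c,\,\ell} \; U(c,\ell) \quad \text{s.t.} \quad c = w\,(T - \ell)
\quad\Longrightarrow\quad
\frac{\partial U / \partial \ell}{\partial U / \partial c} = w
```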

A year later, Du Bois left Harvard for two years of study at what is today the Humboldt University of Berlin. There he was exposed to a more historical approach to economics under scholars such as Adolph Wagner. Du Bois’ interests evolved, and when he returned to Harvard to finish a PhD (the first PhD Harvard would award to an African American), it was in history.

In his autobiography published in 1968, Du Bois would look back and characterize the economics he studied under Taussig as “reactionary” and “dying.” But as a newly minted PhD, Du Bois still had a long way to go to reach that point. His early works, such as The Study of the Negro Problems (1898), The Philadelphia Negro (1899), and The Negro in Business (1899, which he edited), mention family cohesion, productive skills acquisition, and entrepreneurship as keys to black uplift. The required precursor, he believed, was ending racial discrimination.

But having taken a position at Atlanta University in Georgia, Du Bois was immersed in the South’s era of Jim Crow segregation. It was a time when a black man accused of a heinous crime against whites could find himself facing, rather than a court of law, mob action determined to surpass in barbarity the alleged underlying crime. Sam Hose was such a man, alleged to have murdered his white employer in 1899. A mob kidnapped him from a jail in Newnan, Georgia, dismembered him and burned him alive. Another black man was shot to death for “talking too much” about the attack on Hose.

Du Bois later reported in his autobiography that on his way to meet an Atlanta newspaper editor to discuss the lynching, he learned the burnt knuckles of Hose’s hand were on display in a nearby store window. He said the experience “broke in upon my work and eventually disrupted it…one could not be a calm, cool, and detached scientist while Negroes were lynched.”

Was this the final disappearance of the W.E.B. Du Bois who had once made those economic breakthroughs at Harvard? Subsequent years saw him drift to the left. In 1910, Du Bois joined the Socialist Party of America. In 1926, he visited the new Soviet Union, which he saw as a beacon of hope for racial equality. In 1961, he joined the Communist Party USA. By this time, he seemed to believe that, rather than having potential for black uplift, capitalism was an obstacle to it.

The suffering of the Great Depression likely played a role in his views, as it did for some others. But one wonders how much Du Bois’ embrace of socialism had to do with the simple fact that, for all their proven faults, such regimes tend not to be concerned with skin color. They oppress all races the same.

We live in a time when many young people have a similarly friendly view of socialism. They see the historic wealth produced by free markets not as a path to their dreams but as an obstacle to them. And like the evolution of Du Bois’ economic thought, it’s a tragedy.

The shocking capture and extradition of former Venezuelan President Nicolás Maduro and his wife over the weekend is the culmination of months of US pressure on the regime. President Trump and other administration officials have labeled Maduro and his close associates “narco-terrorists,” accusing him of leading a huge criminal organization and profiting by violating US laws, selling large quantities of illegal narcotics that may have killed Americans.

But while the future of the Venezuelan regime is uncertain, it is worth taking a few minutes to understand how Venezuela got to where it is today and what Americans can learn from its descent into a tyrannical, criminal regime.

The time for a warning may be especially appropriate. Zohran Mamdani’s election as New York City’s next mayor and Katie Wilson’s election as mayor of Seattle, both late last year, have people worrying about a surge in socialist sentiment across the US. Both Mamdani and Wilson openly campaigned as democratic socialists who believe “No problem is too big, no issue is too small for the government” and “We will replace the frigidity of rugged individualism with the warmth of collectivism.”

Many with a lick of sense correctly criticize the naivety of these socialist economic policy ideas and collectivist sentiments. But fewer recognize the true horrors that can be unleashed by entitled college graduates voting for massive wealth redistribution.  

The tragedy of Venezuela serves as a cautionary tale.

Socialism plays the major role in the story of Venezuela’s descent into poverty, desperation, and organized crime (Tren de Aragua). David Friedberg, a venture capitalist and a member of the All-In Podcast, recently interviewed Nobel Peace Prize winner María Corina Machado about the fraudulent 2024 national election in Venezuela — highlighting the tragedy of socialism and the resulting tyranny in Venezuela. 

Twenty-five years ago, Venezuela’s GDP was roughly $4,800 per person. In 2014, it was nearly $16,000. But the latest estimates for 2024 and 2025 are about $4,000 per person — roughly 20 percent less than in 2000 and a shocking 75 percent less than in 2014. Poverty rates in Venezuela have skyrocketed from less than a quarter of its population to over half. Yet Venezuela has the largest known oil reserves of any country in the world — an estimated 300 billion barrels — 10 percent more than Saudi Arabia and seven times more than the United States.

[Chart: GDP per capita in Venezuela, 1960-2024. World Bank data.]
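
As a rough arithmetic check on those comparisons, here is a minimal sketch using the article’s own round figures (approximate GDP per person in current US dollars, not fresh data):

```python
# Quick check of the article's comparisons, using its round figures.
gdp_2000 = 4_800   # roughly 25 years ago
gdp_2014 = 16_000  # near the pre-collapse peak
gdp_now = 4_000    # latest estimates for 2024-2025

decline_vs_2000 = 1 - gdp_now / gdp_2000
decline_vs_2014 = 1 - gdp_now / gdp_2014

print(f"Lower than 2000 by: {decline_vs_2000:.0%}")  # ~17%, i.e. roughly a fifth
print(f"Lower than 2014 by: {decline_vs_2014:.0%}")  # 75%
```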

At least seven million Venezuelans have fled the country in the past ten years, most of them college-educated. The Maduro regime was a criminal enterprise. Besides Maduro himself, several of his family members have been arrested for trafficking cocaine. The government stole the property of its people and plundered the country’s natural resources. The regime has also been accused of cooperating with drug trafficking and cartel activity — hence the Trump administration’s focus on Venezuelan gangs and on trafficking it describes as “narco-terrorism.”

Venezuela’s 2024 presidential election showcased remarkable courage and ingenuity on the part of those who opposed the Maduro regime. It was also the clearest expression yet of how utterly criminal and corrupt Maduro was. The main opposition candidate, María Corina Machado, after a resounding victory in the primaries, was prohibited by the government from running.  

Her lesser-known proxy, Edmundo González, still won overwhelmingly. And we know he won because Venezuelans documented their election results in incredible ways and reported those results to the rest of the world. The European Union, the European Parliament, and Human Rights Watch all rejected Maduro’s victory, as did other election watchers, who declared González the winner. 

Yet today, González is in exile, and many of those who worked on the campaign are in prison or worse. Maduro claimed victory, against all evidence, and threw dissidents and those who supported them, or even associated with them, into prison. We see truly Mafia-like behavior in disappearing and blacklisting people simply for doing business with the “opposition.” A United Nations report found “evidence of unlawful executions, enforced disappearances, arbitrary detentions and torture” in Venezuela under the Maduro regime.

The state of things in Venezuela is dire and complicated. Much has been written about the highly tenuous legality of military strikes on Venezuelan drug traffickers. And much more will be written about the apprehension of Maduro and his wife in the dead of night. While the Trump administration should do more to align with constitutional norms and the rule of law, this is not exactly a repeat of the drug war of the 1990s.

The Maduro regime was actively supporting oppressive parties across Latin America as well as strengthening drug cartels that, in many countries, basically constitute paramilitary forces. Those who want to advance freedom, property rights, and prosperity across the western hemisphere should not overlook the geopolitical force of Venezuela. 

It’s tragic how far Venezuela has fallen. From a prosperous, successful, cultured society, it has become destitute, crime-ridden, and ruled by military thugs. But its initial step towards modern serfdom was much more innocent — and should serve as an eerie warning about the collectivist inclinations of the young and entitled.

Hugo Chávez, the architect of Venezuelan socialism and tyranny, paved the way for Nicolas Maduro to rule by military fiat. Chávez, though, was popularly elected and portrayed himself as an outsider and a man of the people — someone who would refuse to go along with the corrupt “neoliberalism” that he claimed had disenfranchised so many.  

Sound familiar? 

There has been a lot of talk about how hard young people have it in the US. Buying a house is more difficult, because homes are more expensive and financing costs are high. Unemployment among 20-24-year-olds is more than double the unemployment rate for the rest of the population. Student debt continues to rise at an alarming rate — both in aggregate and for individual young college graduates.

But the recent interview with María Corina Machado reveals how the young and entitled, and their sympathizers, miss the central justification of a free society. Machado notes that when the young socialists in Venezuela were warned to watch out, they would “always answer, ‘Venezuela is not Cuba. That’s not going to happen to us.’ And at the end, look at the disaster and devastation.”

Socialists have exploited this discontent. In New York City, Mamdani tapped into the frustration with housing, with jobs, with rent, with prices, and with uneven wealth gains in the stock market. Income and wealth inequality frustrate many young people. Declining income mobility frustrates them. They increasingly feel like the deck is stacked against them. 

Although such concerns are real, they hardly justify a socialist impulse — and not just because socialism won’t fix these problems. What these young idealists (or entitled ignoramuses) don’t know is the story of Venezuela and the nearly a dozen other countries that have already trod this path. In Venezuela, they don’t just have an expensive housing problem, or an income mobility problem, or an income and wealth inequality problem.

They have much deeper problems: lack of hope and lack of opportunity. In the United States, even with the challenges mentioned above, people can still find jobs, even if those jobs pay less than they would like. They can usually choose to work more hours if they want to make more money. They can move about freely. They are not beaten or imprisoned for social media posts or for supporting the “wrong” candidates. They can improve their lives. They can build for the future. Even if achieving success has become harder than in the past, that is far different from success not being possible. 

And that’s the real danger, and the real tragedy, of Venezuela. Socialism isn’t just about inefficiency and becoming poorer — though it does cause both those things. Socialism leads to tyranny where the worst rise to the top, civil society is destroyed by political power, and the opportunity to improve one’s life doesn’t just diminish, it is extinguished. 

Although Venezuelans’ future prospects have brightened considerably with the removal of Maduro, we should continue to point out the dangers of socialist regimes with increasing urgency to generations of people who know little about history or global affairs, care even less, and are merrily traipsing down the Road to Serfdom.

Bitcoin and other cryptocurrencies are widely – but wrongly – panned as unregulated casinos or Ponzi schemes that create no real value. For example, US Senator Elizabeth Warren called crypto a “threat to financial stability,” while the UK’s Treasury Select Committee said that cryptocurrency ownership “more closely resembles gambling than a financial service.”

While some cryptocurrencies are mainly speculative, many serve specific business or functional purposes. We can identify some of the value created by cryptocurrencies by breaking them into four general categories: Bitcoin, stablecoins, utility tokens, and meme coins.

1. Bitcoin

Bitcoin (BTC) is the original cryptocurrency. It is the base token of the Bitcoin protocol, a decentralized proof-of-work blockchain based on the 2008 whitepaper by Bitcoin’s anonymous founder Satoshi Nakamoto. The protocol has a limited supply, with an eventual maximum of 21 million bitcoins.
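
A minimal sketch of where that cap comes from, following the protocol’s published issuance schedule (a 50 BTC block subsidy that halves every 210,000 blocks, tracked in whole satoshis):

```python
# Sum the block subsidies over all halving eras: 50 BTC per block for the
# first 210,000 blocks, then 25 BTC, then 12.5 BTC, and so on, with amounts
# tracked in whole satoshis (1 BTC = 100,000,000 satoshis) as in the protocol.
SATOSHIS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000            # blocks between subsidy halvings

total_satoshis = 0
subsidy = 50 * SATOSHIS_PER_BTC       # initial block subsidy
while subsidy > 0:
    total_satoshis += subsidy * HALVING_INTERVAL
    subsidy //= 2                     # integer halving, truncating fractions

print(total_satoshis / SATOSHIS_PER_BTC)  # ~20,999,999.98 -> just under 21 million
```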

Unlike most cryptocurrencies, Bitcoin has only one purpose: to be used as money – or, more specifically, as a system of payment. It has no other features. The Bitcoin network is decentralized, which makes it highly resilient and hard to disrupt, though coin prices can be quite volatile.

With a market capitalization of around $2 trillion, Bitcoin is by far the largest cryptocurrency by market value. No other blockchain has anywhere near its history, its reliability, or its dedicated flock of fans and users. Bitcoiners often say that “Bitcoin is not crypto” because it is so fundamentally different from other blockchains that it deserves a category of its own.

2. Stablecoins

Stablecoins are tokens whose value is tied to a particular asset, most commonly the US dollar. They are widely used in electronic payments since they provide the benefits of blockchain-based payments without Bitcoin’s price volatility. Stablecoin payments are especially prominent in countries with unstable national currencies, whose governments cannot be trusted to maintain the value of their money.

The two most widely used stablecoins, Tether (USDT) and Circle’s USDC, have market capitalizations of about $148 billion and $62 billion, respectively. Both tokens are readily redeemable for US dollars. Circle is regulated as a money transmitter in the United States. Tether is a foreign entity, but is in the process of launching a regulated US subsidiary.

The opposite of gambling, stablecoins are safe, stable assets that serve as an electronic version of US dollars.

3. Utility tokens

Utility tokens are cryptocurrencies created by blockchains that provide some utility or service.

One example is Filecoin (FIL), which offers online storage, like iCloud or Microsoft OneDrive, but on a decentralized public blockchain that provides safe and private file storage. The FIL token is used to pay for storage and is paid to participants who provide storage space on the Filecoin network.

A subset of utility tokens known as Decentralized Physical Infrastructure (DePIN) uses decentralized blockchains as a replacement for government or corporate-based infrastructure. The Helium network (HNT), for example, provides a blockchain-based marketplace for buying, selling, and transmitting WiFi and mobile phone data.

In addition, the decentralized finance (DeFi) industry is building a parallel financial system on blockchain technology that is cheaper and more transparent than traditional exchanges. Larry Fink, CEO of BlackRock, the world’s largest asset manager, has said that the tokenization of traditional assets will be “the next major evolution in market infrastructure.”

Unsurprisingly, utility tokens – those with actual functionality and business purposes – tend to be the category most attractive to major cryptocurrency investment funds and venture capitalists.

4. Meme coins

There is one category of crypto tokens meant purely for speculation: meme coins. These tokens have no functional purpose and no intrinsic value aside from the fun of trading. They are based on “meme” characteristics, such as a symbol or story, that drive their prices. Many use pictures of dogs, frogs, and hats. The most popular meme coin, DOGE, represented by a picture of a Shiba Inu dog and frequently referenced by Elon Musk, has a market cap above $26 billion. Another token, FARTCOIN, is based on, well, fart jokes.

There are political meme coins for candidates like BODEN and TREMP, whose prices bounced around before the 2024 elections as candidates moved in and out of favor, with both eventually crashing. After the election but before taking office, President Trump launched his own meme coin TRUMP, which peaked in late January, then lost 80 percent of its value within a few months.

Most meme coins trade for the fun of participating in a shared joke or the excitement of betting that the price will rise. They are indeed gambling in the truest sense, but despite being among the tokens best known to non-crypto folk, this category represents only a small segment of the crypto market.

While there is certainly much speculation in cryptocurrencies, as in all financial markets, the cryptocurrency industry is more than meme coins. Bitcoiners hope Bitcoin will become the world’s dominant means of payment or at least a common reserve currency. Stablecoins provide an efficient means of payment and a relatively stable store of value, at least to the extent that the US dollar itself is stable. Utility tokens create real value or serve some business function.

Collectively, cryptocurrencies provide a variety of functions and use cases, ranging from specific business purposes to no purpose at all. Users can gamble if they want to, or they can make more informed strategic investments. Crypto is more than just a meme coin casino.

Recently, Minnesota and Governor Tim Walz have come under scrutiny for Medicaid fraud. The debacle received renewed focus on December 1, when Treasury Secretary Scott Bessent posted on X that he had directed the US Treasury to investigate allegations of fraud and that taxpayer dollars were allegedly “diverted to the terrorist organization Al-Shabaab.”

Unfortunately, misuse of Medicaid funds is nothing new. In 2023, the Office of Minnesota Attorney General Keith Ellison charged three individuals as part of a scheme to defraud the Minnesota Medical Assistance (Medicaid) program out of nearly $11 million, the largest Medicaid fraud prosecution in that state’s history. These charges spurred a wider crackdown on Medicaid fraud in the Land of 10,000 Lakes.

What distinguishes the current scandal from background levels of fraud is abundant evidence that “someone was stealing money from the cookie jar and they [state officials] kept refilling it.” This quote, highlighted by economist Michael F. Cannon, comes from one of the defense attorneys in the fraud case. Cannon then reiterated his insight from 2011: “The three most salient characteristics of Medicare and Medicaid fraud are: It’s brazen, it’s ubiquitous, and it’s other people’s money, so nobody cares.”

This comes at the cost of reducing quality of care and access to care for the poorest Americans. The solution lies in getting government out of healthcare, not in enlarging Medicaid’s “cookie jar” or refilling the jar more frequently.

Improper Payments? Fraud? Waste? What’s the Difference?

When federal officials discuss various errors in their program, they choose specific language. Understanding the distinctions in how each term is used helps decipher how a federal program is performing.

In its own findings, the Government Accountability Office (GAO) notes that Medicaid is highly susceptible to “improper payments” with an improper payment rate second only to Medicare. The GAO defines improper payment as “payments that should not have been made or that were made in the incorrect amount; typically they are overpayments.” This is distinct from their definition of fraud, which is “obtaining something of value through willful misrepresentation.” The GAO comments, “While all fraudulent payments are considered improper, not all improper payments are due to fraud.” An improper payment could be an honest mistake on the part of either the citizen receiving Medicaid or the public employees administering the program.

The GAO also distinguishes waste as “when individuals or organizations spend government resources carelessly, extravagantly, or without purpose” and abuse “when someone behaves improperly or unreasonably, or misuses a position or authority.”

Specific allegations or investigations regarding waste or abuse are beyond the scope of this article, but incentives suggest that both are present and widespread among state Medicaid programs.

The Bad News: Medicaid’s Design Makes It Susceptible to Error (Including Fraud)

Medicaid is a joint federal-state program that funds health insurance coverage for America’s poor. The federal government transfers funds to states, which then administer Medicaid programs, with some variations from state to state. 

The income threshold for Medicaid eligibility increased under the expansion of the Affordable Care Act (also known as the ACA or Obamacare). Because ACA expansion enrollees receive more federal dollars than traditional Medicaid enrollees, state policymakers are incentivized to prioritize serving Medicaid expansion enrollees (the slightly less poor) over those in traditional Medicaid (the poorest Americans).

The Centers for Medicare & Medicaid Services (CMS) estimates Medicaid’s improper payments within three categories:

  1. Managed care: Measured errors in payments states make to private insurance companies that are contracted to deliver Medicaid benefits (known as managed care organizations).
  2. Fee-for-service: Measured errors in payments states make directly to providers on behalf of fee-for-service beneficiaries, including payments made to ineligible providers.
  3. Eligibility: Measured errors in state eligibility determinations for both types of Medicaid beneficiaries.

In fiscal year 2024, improper payments in Medicaid were estimated at $31.1 billion — equal to five percent of total Medicaid spending. This highlights a major weakness in the program, whose size and complexity lead to clerical errors and procedural mistakes. Additionally, when states fail to collect the necessary documentation (such as up-to-date income verification), improper payments (including fraud) are more likely to occur.
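
For scale, the spending base implied by those two figures can be backed out with simple division (a sketch using the article’s numbers only):

```python
# If $31.1 billion in improper payments equals five percent of total
# Medicaid spending, the implied spending base is improper / rate.
improper_payments = 31.1e9
improper_rate = 0.05

implied_spending_base = improper_payments / improper_rate
print(f"Implied Medicaid spending base: ${implied_spending_base / 1e9:.0f} billion")
# -> roughly $620 billion
```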

Saul Zimet recently wrote in The Daily Economy:

The government bureaucrats who kept sending hundreds of millions of dollars to the fraudsters year after year had every indication of what they were enabling, but their incentives were to enable rather than prevent the theft.

Unfortunately, Medicaid’s design encourages state policymakers to maximize transfers. In some instances, that may mean lax oversight of where the money goes and who is eligible to enroll in Medicaid. COVID-19 stimulus funding required states to relax eligibility requirements and accelerate approvals to receive Medicaid: the environment was ripe for accidental improper payments as well as waste and fraud.

Since Medicaid’s inception, state policymakers have taken advantage of accounting gimmicks (such as provider taxes) to maximize the amount federal taxpayers shell out into state programs. The motivation for state officials is clear: increase your spending and have federal taxpayers in other states pay for it. Transfers to state and local governments often come with strings attached — the terms and conditions of receiving the transfers — allowing federal policymakers more influence over state and local spending. Whether or not the use of a provider tax loophole represents a misuse of Medicaid’s framework is the subject of debate. Research from the Paragon Institute highlights areas that, at the very least, require substantial investigation and reform to prevent states from shifting costs to federal taxpayers.

The Worse News: Medicaid’s Errors May Be Worse Than Official Government Estimates

From 2015 to 2024, the GAO reported $543 billion in improper Medicaid payments. Unfortunately, that may be lower than the actual total. Research from economists Brian Blase and Rachel Greszler estimates that improper payments during that period actually totaled $1.1 trillion, more than double the GAO’s figure.

The discrepancy comes from Blase and Greszler’s inclusion of eligibility checks in the audits of improper Medicaid payments, which both the Obama and Biden administrations excluded. The halting of Medicaid enrollment audits is especially concerning because during this same period, many states expanded Medicaid under the ACA and Medicaid saw a record number of enrollees during the pandemic. Blase and Greszler comment, “Eligibility errors of this nature are particularly concerning as it can indicate that individuals are allowed to remain enrolled in the program during times in which they do not qualify, potentially diverting limited resources that could otherwise be invested in better serving vulnerable populations.”

Blase and Greszler’s research raises serious concerns about Minnesota. Is the fraud being investigated just the tip of the iceberg?

The Solution: Get Government Out of Healthcare

In addition to the improper payment rates of Medicare and Medicaid (and the disincentive to investigate what becomes of ‘other people’s money’), fraud risks are being investigated in the other portion of the ACA: the premium tax credits paid from the US Treasury to insurance companies to cover enrollees of ACA exchange health insurance plans.

Healthcare is also the single largest category of the federal budget, with about 26 cents of every federal dollar spent going to various healthcare programs; healthcare is likewise the single largest item in most state budgets. It is no accident that healthcare is highly regulated at both the federal and state levels. Federal and state tax codes incentivize working Americans to purchase health insurance through an employer, leaving little room for insurance offered through civil society and voluntary contracting. There’s a lot unknown in healthcare, but one thing is clear: government encroachment is not helping.

Healthcare, nearly twenty percent of the US economy and growing, is in desperate need of reform. Rolling back regulations on insurance offerings, the healthcare profession, and innovation, as well as reforming the tax code and spending to encourage consumer-driven choice, will promote competition, lower costs, and empower patients.

Greater consumer choice — and less reliance on distant federal programs — will help reduce the fraud endemic in government healthcare.

Recently, two Federal Reserve governors delivered speeches with interesting differences. Michael Barr warned against weakening bank supervision, citing “growing pressures to scale back examiner coverage, to dilute ratings systems” that could lead to a crisis. Stephen Miran countered that “regulators went too far after the 2008 financial crisis, creating many rules that raised the cost of credit” and pushed activities into unregulated sectors.

Both governors make valid observations about their respective concerns. Yet neither addresses a more fundamental problem: the regulatory cycle itself may be the primary source of financial instability. Rather than preventing crises, financial regulation tends to shift risks to new areas, setting the stage for different—not fewer—failures.

The Regulatory Ratchet

Barr himself describes the pattern: “time and again, periods of relative financial calm have led to efforts to weaken regulation and supervision…often had dire consequences.” But this observation cuts both ways. Periods of crisis lead to regulatory overreach, which creates unintended consequences, which leads to calls for reform—and the cycle repeats.

The Savings and Loan crisis of the 1980s and early 1990s illustrates this dynamic clearly. Following widespread S&L failures, regulators imposed stricter capital requirements through the 1988 Basel Accord. Financial institutions responded by using securitization to reduce their regulatory capital requirements while maintaining risk exposure—creating the shadow banking system that would later amplify the 2008 crisis. The new regulations didn’t eliminate risk; they relocated it to where regulators couldn’t see it.

After 2008, the pattern repeated. Dodd-Frank increased capital requirements and restricted proprietary trading through the Volcker Rule. As Miran notes, “many traditional banking activities have migrated away from the regulated banking sector” because regulatory costs made these services unprofitable for banks. Credit migrated to private credit funds, collateralized loan obligations, and other non-bank lenders. 

Today, private credit markets exceed $1.5 trillion, largely outside regulatory oversight. When the next crisis arrives, it will likely originate in these sectors—not because markets failed, but because regulation distorted incentives and redirected risk to less efficient channels. “Shadow banking” now accounts for $250 trillion globally, nearly half of the world’s financial assets, with minimal regulatory oversight.

Managing Risk, Not Preventing It

This regulatory cycle reveals a deeper problem with how policymakers think about financial stability. Both prevention-focused regulation (Barr’s preference) and “peeling back regulations” (Miran’s approach) assume regulators can outsmart markets. Neither addresses the knowledge problem at the heart of financial regulation: regulators are always fighting the last war while markets adapt faster than rules can be written.

A more effective approach recognizes that financial risk cannot be eliminated—it can only be managed when it materializes. Financial regulation, if there is going to be any, should focus on crisis resolution rather than crisis prevention. This means three things:

First, establish clear rules about who bears losses when failures occur. Uninsured creditors, not taxpayers, should absorb losses. The FDIC’s resolution authority works precisely because it allows banks to fail in an orderly way, with clear priorities for claims. Extending this principle—making “too big to fail” institutions write “living wills” that detail how they would be unwound—creates market discipline without micromanaging risk-taking.

Second, eliminate implicit guarantees that encourage excessive risk-taking. When creditors believe regulators will intervene to prevent losses, they stop monitoring risk carefully. The 2008 bailouts reinforced expectations of government support, which may explain why risk-taking continued despite stricter regulations. A credible commitment to let failures happen—even of large institutions—would do more to encourage prudent lending than any capital requirement.

Third, simplify the regulatory framework itself. Complex rules create opportunities for regulatory arbitrage and make it harder for market participants to understand their actual risk exposure. Miran identifies one such complexity: leverage ratios that penalize holding safe assets like Treasury securities, creating “contradictory incentives” that distort markets rather than stabilizing them.

Canada’s experience offers a useful contrast. Canadian banks weathered the 2008 crisis better than their American counterparts, despite having less stringent capital requirements and a more concentrated banking sector. The key difference? Canadian regulators focused on ensuring orderly resolution of failures rather than preventing all risk-taking. Banks faced real consequences for poor decisions, which encouraged more conservative behavior than any amount of supervision could mandate. Since 1840, the United States has experienced at least 12 systemic banking crises—Canada has had zero. During 2008, Canadian banks maintained an average leverage ratio of 18:1 compared to over 25:1 for many US banks. The US bailed out hundreds of banks; Canada bailed out zero.
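
To make those leverage figures concrete, here is a rough translation of assets-to-equity ratios into equity cushions (a sketch assuming the simple assets/equity reading of “leverage”; regulatory definitions vary):

```python
# Read "18:1" and "25:1" as assets-to-equity ratios; the equity cushion is
# then 1 / leverage, i.e. the asset-value decline that wipes out equity.
for label, leverage in [("Canadian banks, 2008 average", 18),
                        ("many US banks, 2008", 25)]:
    equity_share = 1 / leverage
    print(f"{label}: ~{equity_share:.1%} of assets funded by equity")
# -> ~5.6% vs ~4.0%: the thinner cushion is erased by a 4% fall in asset values.
```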

Breaking the Cycle

The debate between Barr and Miran represents the latest turn in the regulatory cycle. Both assume their preferred approach will prevent the next crisis. History suggests otherwise. Until policymakers recognize that financial regulation shifts rather than eliminates risk, we will continue cycling between crisis, overreaction, unintended consequences, and the next crisis.

The alternative is clear bankruptcy procedures and eliminating implicit guarantees. Let markets—not regulators—price risk. Let banks—not bureaucrats—manage portfolios. And most importantly, let failures happen to those who take excessive risks, ensuring that profits and losses remain where they belong: with the institutions that make the decisions.

On January 3, 1976 — 50 years ago — the United Nations’ “International Covenant on Economic, Social and Cultural Rights” entered into force with the backing of the Soviet Union and the Cold War “Non-Aligned Movement” (NAM). Intended to secure the “right” to housing, health care, fair wages, paid vacations, and other benefits globally, the International Covenant is a prime example of conflating rights with desires.

Thankfully, this socialist project, advanced under the banner of “human rights,” never became the law of the land in the United States. President Jimmy Carter signed the International Covenant at the UN headquarters in 1977, but it has since awaited ratification in the Senate Foreign Relations Committee. Cold War anxieties about the spread of socialism and communism may have hindered its acceptance among Congress and the public. However, 35 years after the Cold War, socialism is surging in popularity, especially among young Americans, and it’s important to reiterate the dangers of the UN’s International Covenant, lest it make a comeback and the treaty be ratified.

Russell Kirk wrote that two “essential conditions” are attached to all true rights: first, the capacity of individuals to claim and exercise the alleged right; and second, the correspondent duty that is married to every right. The right to practice one’s religion freely involves a duty to respect others’ religious beliefs; the right to private property dovetails with the responsibility to not violate someone else’s possessions. Thus, true rights are mutually beneficial and reinforcing, undergirded by the virtues of justice and prudence.  

What Kirk designated as “true rights” are synonymous with “natural rights” or “negative rights,” which are inherent in our nature and cannot be taken away. The only obligation they impose on others is to not infringe upon them. “Positive rights,” by contrast, require the individual to sacrifice portions of his earnings or potentially his life in the service of others, even against his own conscience and free will. One individual’s “positive right” to free health care, for example, violates another individual’s right to the fruits of his own labor. In short, one person’s desire becomes someone else’s obligation, and the former bears no responsibility while exercising his “right.”

The conflation between rights and desires — or negative rights and positive rights — was explicitly manifest in President Franklin D. Roosevelt’s “Four Freedoms” articulated in his 1941 State of the Union Address. “Freedom of speech” and “freedom of worship” are negative rights that may be exercised by individuals and secured by government, but “freedom from want” and “freedom from fear” are impossible to achieve — for “want” and “fear” are immutable aspects of the human condition. Our perpetual yearning for more than we presently possess, or our anxieties about future uncertainties, can never be entirely satisfied or relieved, even under the most healthy, safe, and prosperous conditions. 

As Edmund Burke wrote, “The great Error of our Nature is, not to know where to stop, not to be satisfied with any reasonable Acquirement; not to compound with our Condition; but to lose all we have gained by an insatiable Pursuit after more.” The “great Error of our Nature” may impel us to demand unbridled resources from government, all in the pursuit of abstract “rights,” and therefore jeopardize the natural rights that are indispensable to a just social contract.

FDR’s “Four Freedoms” inspired the UN’s 1948 “Universal Declaration of Human Rights,” which affirms the “right” to rest and leisure. While these may be human needs and social goods that both public and private entities should respect, they ought not be framed as “rights.” Unlike freedom of speech and freedom of worship, rest and leisure are exercised without adjacent responsibilities and often require the provision of goods, services, or accommodations by others to be meaningful.

The International Covenant drastically expanded the Universal Declaration of Human Rights. No correspondent duties are associated with the “rights” to the free and generous provisions championed by the UN; instead, those provisions require the burden and sacrifice of someone else’s labor and its fruits.

The treaty includes not only the “right” to rest and leisure, but also to an “adequate standard of living” and the “progressive introduction of free education.” It even declares the extremely vague “right” to “enjoy the benefits of scientific progress and its applications.” There is no theoretical reason why such broad and elastic provisions cannot be extended to absurd proportions, where even non-essential consumer goods and fashionable technologies like video game consoles or robot vacuum cleaners are labeled “human rights.”

The International Covenant, with its advocacy of “economic, social and cultural rights,” is contrary to America’s founding principles and the Western tradition of natural law. In accordance with the Declaration of Independence, Americans have a right to the “pursuit of happiness,” which governments are instituted to secure, along with the right to life and liberty. But the “right” to happiness itself cannot be reasonably justified by any appeal to the natural law.

The conflation of rights and desires is an impetus for the expansion of government power, which risks undermining the true rights most vulnerable to usurpation. As Andrew Cowin wrote in a 1993 Heritage Foundation report, the International Covenant “identified rights that were never meant to be granted. For decades, though, it gave Soviet totalitarian governments the cover that justified their accumulation of power and property.” 

While Congress shelved the International Covenant and stopped its provisions from becoming American law, the treaty was ratified by many other countries, including US allies such as Japan, Mexico, France, Germany, and Italy. 

If desires became “rights” in these capitalist democracies, the same could happen in America, which is why — on its 50th anniversary — we must remain vigilant against the UN’s International Covenant on Economic, Social and Cultural Rights.

Elon Musk recently put forth a bold vision: that within two decades, AI will automate virtually all productive activity, work will be optional, and money will lose meaning. Coming from Musk, such pronouncements carry gravitas. And, unsurprisingly, the vision dovetails neatly with Musk’s own admittedly exciting entrepreneurial ventures.

Yet variants of those claims have circulated for years, usually without reference to economic theory, institutional constraints, or political risk. Rigorously examining those assertions is essential to decouple technological optimism from the practical realities that will shape the next two decades.

Will Work Be Optional in 20 Years?

In economic terms, “work optionality” requires three simultaneous conditions: (1) per-capita output high enough that the median person can maintain a high standard of living without paid labor, (2) widespread, reliable distribution mechanisms, and (3) institutional stability that ensures income security over time. None of those conditions are close to existing.

Even if AI substitutes for large swaths of labor, new automation has historically reallocated work rather than eliminated it outright. New goods, new services, and new forms of status competition appear as old ones disappear. Moreover, without explicit redistribution mechanisms — which no major nation has implemented — the owners of AI capital capture the lion’s share of gains. That, if anything, requires the median individual to work more, not less.

When Musk says that “work will be optional,” no different than choosing to grow vegetables in one’s backyard for fun, he is describing a world where production is fully automated and money scarcely matters. But such a future requires astronomically more than clever automation. It demands the total (or near-total) overhaul of the global capital stock, new mechanisms for distributing income in the absence of labor markets, and political systems stable enough to govern a world where everyone receives sustenance without working. Considering $38 trillion in US public debt, the lumpiness of capital flows, wildly uneven innovation and development across nations and continents, and deep cultural disagreements about the value and role of work, fitting that transformation into a twenty-year window would require an extraordinary compression of capital formation, global diffusion, and institutional evolution.

Will Money “Lose Meaning?”

For money to lose meaning, scarcity must disappear. But scarcity is not abolished by robots, however intelligent they are or however cheaply they make goods. Scarcity arises because:

  • Consumer preferences differ.
  • Time matters (now vs later).
  • Land, location, status, political influence, and other constraints make some goods unavoidably rivalrous.

As individuals become wealthier, their consumption tends to shift from goods toward services. Even in a world with ultra-cheap production methods, services, and in particular positional and experiential goods, will dominate utility at high income levels. Access to desirable neighborhoods, exclusive schools, bespoke clubs, rare experiences, or political influence cannot be automated into abundance. Prices will continue to ration those services, and money, even if it changes form, remains the social mechanism that expresses relative value.

What could disappear is wage labor as the primary mechanism for accessing consumption. But that would not constitute an “end of money”; it would instead signify a shift in the income structure of society, one characterized by more capital income, more redistribution, and perhaps more universal transfers. A world where “money loses meaning” is one where the economy violates the core assumptions of microeconomics, macroeconomics, and game theory. Artificial intelligence does not do that.

What Actually Matters: Capital, Diffusion, and Institutions

Musk’s vision treats AI as a profound, exogenous leap. Economic systems change, however, only through capital deepening, technological diffusion, and institutional adaptation.

First, and at the very least, replacing the existing global capital stock with AI-augmented systems is a many-decade project. Upgrading energy grids, fitting logistics networks to the speed of AI, remaking industrial processes and transportation fleets, and all the other necessary upgrades of the global industrial base will take orders of magnitude longer than updating software. They require planning, investment, and training for complementary human skills. Each is also likely to be a target of heavy regulation, slowing the process all the more.

Second, technology diffuses unevenly. Historically, the highest-productivity technologies take decades to spread across countries, sectors, and classes. Under any realistic model of diffusion, work optionality would arrive at vastly different times for different nations and for the internal strata of their societies.

Third, and if that weren’t enough to cast doubt on the idea of work’s future irrelevance, prevailing institutions condition everything. Whether AI abundance produces universal prosperity or vast inequality depends entirely on property rights, competition policy, the rule of law, and stable governance. None of those variables can be automated by graphics processing units (GPUs). AI is indeed likely to be transformative, but productivity shocks are not destiny.

Institutions, formal and informal, ultimately determine whether a society captures, mismanages, or squanders technological abundance.

AI vs. War, Famine, Pandemic, or Totalitarianism

The argument that rapid AI progress will reduce geopolitical danger is beyond economics; it may simply be a category error. Military conflict, agricultural collapse, disease, or authoritarian risk are not functions of the density of automation or its calculative capacity; they are functions of incentives, history, institutional weakness, and human failings.

If anything, AI may foster volatility: consider the long shadow likely to be cast by autonomous weapons, algorithmic political decision-making, and cyber vulnerabilities. Asymmetries in access to the most advanced AI at any given moment could increase international insecurity, not reduce it. Rapid automation may escalate resource competition, foster nationalist resentment, and shift paranoid regimes into overdrive.

Consider the impact of AI on already totalitarian regimes: thus augmented, oppressive states become more capable, not more benign. Surveillance, social-credit systems, censorship, predictive crackdowns, and digital repression scale boundlessly with AI. North Korea, Cuba, and scores of other unfree nations will not suddenly liberalize because machines capture data and deliver higher productivity. Existential political risks are not reduced by AI abundance; they may, on the other hand, be amplified by it.

Must We ‘Get There Faster to Avoid Suffering?’

The view that a faster transition is better is nothing but an assumption. It may reflect a candid attempt to secure regulatory relief or direct government subsidies. In welfare-economic terms, though, the transition to an AI-dominated economy may be more painful than life at the destination itself. Rapid automation can produce unemployment, wealth concentration, social unrest, and political extremism. On top of that, the groups most harmed by the transition have extremely high marginal utility of consumption, which means that transitional losses will weigh heavily upon them.

Optimal diffusion may require gradual adoption precisely because societies need time to adjust: retraining, workplace changes, safety mechanisms, new institutional frameworks, and so on. A reckless sprint to automation might generate more conflict, not less. An adaptive, organic trajectory is far more aligned with empirical economic behavior and political stability. 

But that trajectory cannot be centrally planned. When governments attempt to dictate the timing of transformative technological rollouts — as some are already preparing or attempting to do — they almost always set the pace either too fast — triggering backlash and disruption — or too slow, stifling innovation and growth. Competitive market processes, though imperfect, tend to reveal a more measured rate of adoption than bureaucratically imposed timelines.

A High-Tech Future, Without The Fiction

AI will significantly reshape the global economy. Productivity may rise sharply in certain sectors, returns on certain forms and mixes of capital may spike, labor markets will inevitably reorganize, and while some degree of both structural and frictional unemployment may result, new industries will emerge. But none of this implies an end of work, scarcity, money, or political acrimony. Across the past century, figures from Keynes to Jeremy Rifkin, Martin Ford, and Erik Brynjolfsson have predicted that technological progress, especially automation and artificial intelligence, would ultimately make large portions of human labor unnecessary. Similar post-work visions appear in futurist and science-fiction writing as well as political writing, from Marshall Brain’s Manna to post-scarcity theorists and Silicon Valley leaders like Ray Kurzweil, all of whom see automation pushing societies to rethink income, purpose, and the role of work itself.

AI is likely to magnify prosperity, but it will also magnify risks, and it cannot fundamentally change human nature. A better description of what AI will create is leverage in the economic, political, and military realms. What nations choose to do with that leverage will determine the future. Technology cannot, and does not, erase incentives, undermine money, or invalidate scarcity. It only changes the terrain on which human beings pursue their ends.

The most credible exception to the case for free trade policy is rooted in concerns about national security. If complete freedom of trade jeopardizes our national security, some protectionism arguably is justified because, as even Adam Smith insisted, although free trade is enriching and important, “defence … is of much more importance than opulence.”

As Smith’s statement implies, protectionism pursued for purposes of national defense will reduce the country’s material well-being, but this cost is worth paying if the protectionist measures result in a large enough enhancement of national security. (Caleb Petitt argues, not implausibly, that Smith really didn’t believe that national-security concerns justify a retreat from free trade. But that’s a topic for another time.)

While most free traders today admit the national-security exception, they also warn that it’s very easy to abuse, as shouts of “national security!” are given enormous deference by the public and politicians. Free traders also warn that, even when the national-security exception isn’t intentionally abused, extraordinary care is required to prevent its application from undermining its goal of promoting national security. The surprising practical difficulty of identifying trade-policy measures that are most likely to adequately protect national security is revealed by two recent developments regarding US trade with China.

Semiconductors

The Trump administration lifted controls that restricted Nvidia’s exports of its H200 chips to China. (The administration made this move in exchange for the US government getting 25 percent of Nvidia’s revenues from these sales — an unjustifiable condition, but also a topic for another time.) The Editorial Board of the Wall Street Journal worries that China’s access to these chips will boost that country’s prospects of surpassing the US in AI technology. The reason, as described by the Journal’s Editors, is that advanced chips such as the H200s “are needed to train advanced AI models.” Unable so far to develop their own advanced chips, the Chinese will now use Nvidia’s chips to further “Beijing’s ambitions to dominate biotech, quantum computing and military power.”

Quite possibly, the Trump administration’s lifting of these controls will indeed undermine US national security. But quite possibly not. By serving the bigger market that access to China opens to it, Nvidia can perhaps exploit larger economies of scale that further improve the efficiency with which its advanced chips are designed and produced. And more-efficient production of advanced microchips by a company such as Nvidia will, in turn, strengthen US national security.
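The economies-of-scale point can be made with a simple, admittedly stylized, cost formula. If developing a chip family costs a fixed amount F (research, design, tooling) and each additional unit costs c to produce, average cost is AC(q) = F/q + c, which falls as the quantity sold q rises. On this logic, shutting Nvidia out of the Chinese market shrinks q, raises average cost, and leaves less revenue to fund the next generation of chips. This is a sketch, not a claim about Nvidia’s actual cost structure, but it captures why a larger market can translate into cheaper and faster chip development.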

In addition, Nvidia officials and the White House argue that Chinese dependence on non-Chinese advanced chips diminishes China’s prospects of developing its own advanced chips — an effect that also plausibly promotes US national security by retarding Chinese chip technology. Although acknowledging that this argument has some merit, the Journal believes that it doesn’t carry the day. The Journal worries that the improved computing power that China gains as a result of its access to the H200 chips will further, rather than frustrate, Beijing’s quest for AI dominance.

I have no idea which of these two arguments, to restrict Nvidia’s sales of H200 chips to China or not to restrict them, is correct. Both have merit, and neither seems strong enough to clearly defeat the other. And that’s the point.

Economic arrangements and interdependence today are so enormously complex that the seemingly indisputable validity of simple statements about the need to impose import or export restrictions in the name of national defense often dissolves upon inspection.

Critical Minerals

So-called “rare-earth” minerals present another such conundrum. Rare-earth minerals aren’t rare; they exist all over the globe, including in the United States, where new deposits of such minerals continue to be discovered. China, however, has become the world’s leading producer of these minerals, many of which have military significance. But guess which country is the world’s second-leading producer: the United States.

So why is the White House boasting of its recent deal with the Chinese government, which commits China not to restrict its exports of rare earths? The conventional national-security exception to the case for free trade would have the US government impose restrictions on US imports of rare earths in order to stimulate more domestic production of these critical minerals. Therefore, when the Chinese government imposed restrictions on that country’s exports of rare earths, it did for the US economy precisely what conventional national-security trade policy would have the US government do: protect the US market from foreign supplies of these critical minerals as a means of encouraging more US production.

And yet, even on narrow national-security grounds, the White House might here be correct.

The most obvious defense of the White House’s position is that ramping up US production of rare earths would take too long. Perhaps restricted access now to Chinese-supplied rare earths would weaken US national security in the short run so severely as to outweigh any long-run benefits. This possibility is both real and not remote, yet it is typically ignored by most people who invoke the national-security exception to the case for free trade. Tariff-induced expansion of any industry takes time. This reality surely means that, even for resources and outputs that are indisputably vital for national defense, national security sometimes is better served by continuing, without tariffs, our reliance on foreigners.

There’s a second reason why the White House might be justified in bragging of its rare-earths deal with Beijing (although this reason is unlikely actually to have occurred to today’s White House officials). Were China to continue to severely restrict its exports of rare earths to the US, the resulting expansion of the US rare-earths industry would necessarily entail a shrinkage of some other US industries. Even if the expansion in rare-earths production were fully achieved overnight, the industries that shrink as a result might themselves produce militarily significant outputs, in which case the net effect on US national defense might well be neutral or negative.

This possibility is also one that’s not remote. The specialized knowledge and labor skills that are best used to mine and process rare-earth minerals are likely to be found in disproportionately large numbers in related industries, such as petroleum and ore production, rather than in economically distant industries such as leisure and entertainment. A tariff-induced expansion of US rare-earths production, therefore, might well come at too high a price in terms of the contraction of other militarily important US industries.

I write “might” deliberately. This ambiguity is real, it matters for policymaking, and it should always be taken into account. No one knows whether the national-security benefits of increased domestic production of rare earths will exceed, or be exceeded by, the national-security detriments of reduced domestic production of other outputs.

Even if in any particular case the trade-policy decision proves to weaken rather than strengthen national security, greater recognition of such ambiguity would, over time, result not only in improved trade policy but also in a stronger national defense.

Security Requires Humility

The above observations are offered not to nullify the national-security exception to the case for free trade, but to caution against its overuse. The Trump administration’s recent treatment of exports of American-made advanced microchips and its actions regarding rare-earth minerals each demonstrate, in their own way (if unintentionally), the shallowness of the conventional advice to protect any and all industries that produce outputs judged to be important for national security.

Zohran Mamdani garnered a bit more than 50 percent of the vote in the recent New York City mayoral election. Have voters learned nothing from history? Apparently not. As Nobel laureate F.A. Hayek quipped, “if socialists understood economics, they wouldn’t be socialists.”

In the past six months, I count no fewer than twelve pieces in The Daily Economy that discuss, directly or indirectly, Mamdani’s socialist policies. Rent controls would decrease the quantity and quality of housing. Millionaire taxes would accelerate the exodus to friendlier states, to the great glee of Texas and Florida real estate agents. City-owned grocery stores would end in a bungle of Soviet proportions, expanding food deserts and raising prices. The drop in tax revenue from increased taxes (Laffer Curve, anyone?), combined with increased expenditures, would lead to another debt crisis.
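A quick, purely illustrative sketch of the Laffer logic the parenthetical invokes: suppose the taxable base shrinks as the rate rises because high earners move or earn less, say B(t) = B0(1 - t) at tax rate t. Revenue is then R(t) = t·B0·(1 - t), which peaks at t = 0.5; pushing the rate from 50 percent to 70 percent in this toy model cuts revenue from 0.25·B0 to 0.21·B0. Real-world elasticities are an empirical question, and New York’s base is not literally this mobile, but the mechanism is why higher statutory rates need not deliver higher revenue.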

Things aren’t looking good for the Big Apple. I remember walking through Manhattan with the late, great economist Jim Gwartney, an early mentor who introduced me to Frédéric Bastiat. He said, “You know, Nikolai… I like to visit New York City once a year to remind myself why I don’t live here.” I don’t think I’d like living in New York City either, but I do enjoy visiting a few times a year. The skyline, the commerce, the energy, the Metropolitan Opera, the museums, the restaurants (from posh and exotic to a slice of the world’s best pizza)… and that’s just Manhattan, home to less than 20 percent of the city’s population and, at roughly eight percent of its land area, its smallest borough.

I’m going to scramble to make it to New York one last time before it goes to the dogs. Lag effects being what they are, I figure I have a good six to twelve months before things get bad. It will take a while for food and hotel prices to rise, or for the inevitable debt crisis to arrive. The poorest will feel the pain quickly; my tourist bill can take the hit. But it’s the crime that really worries me.

I’m not old enough to remember the 1970s in New York. But I am old enough to remember the late 1980s, which were still pretty gnarly. My family had just returned from a leafy suburb of Paris to leafy Princeton, New Jersey. I missed the café life and the French wines and cheeses — but Princeton is a quiet and peaceful slice of this green Earth. I still remember my first trip to Manhattan in the winter of 1988. I forget if we first arrived at Penn Station or the Port Authority Bus Terminal. Both were ghastly visions of Third World poverty, with hints of Mad Max. Bums were everywhere, in various states of dress or undress; zonked-out druggies lay on the streets; sidewalks were thick with hobos begging for money. I remember a visceral mix of horror — the memory still makes me reel, even after traveling to 70-plus countries — and pity, as I felt the urge to give something, anything, to every beggar.

Well, I’m sorry to say, despite my inveterate optimism, these sad images are likely to return. We can expect a dramatic uptick in New York City crime rates over the next five to ten years. To be sure, crime in New York has been falling, along with national trends, for the past 30 years. But localized spikes in specific categories demonstrate the fragility of these gains.

Shrinking tax revenues, rising rents, decaying housing, and unemployment from higher minimum wages and other business-punishing policies are likely to raise poverty dramatically. And, while social workers can be a powerful addition to police, Mamdani is already signaling a softer approach to crime. It doesn’t take a Gary Becker to predict a rise in crime.

Lessons from the 1970s

In a recent weekly AIER research meeting, I wondered aloud who might replace Charles Bronson in the next round of inevitable New York vigilante movies.

This led me to return to his 1974 classic, Death Wish. It was a pleasure to revisit the 1970s — the cinematography seems campy now, but it creates a gritty realism. I did not watch the next four movies, or the 2018 Bruce Willis remake. But the 1974 original contained some fascinating nuggets of political philosophy.

The movie opens with a tender, loving, idyllic scene in Hawaii. Middle-aged architect Paul Kersey (Charles Bronson) is on a beach vacation with his wife (Hope Lange). They are in love and happy. Then they go home to New York City. The tropical paradise immediately gives way to traffic jams and graffiti, in the loud concrete and steel jungle, as they make their way home to their Manhattan apartment from the airport.

Back at the office, a colleague complains about the city’s rising crime: “Decent people are going to have to work here and live somewhere else.”

“You mean people who can afford to live somewhere else,” Kersey retorts.

His colleague rolls his eyes. “You’re such a bleeding-heart liberal, Paul,” he says.

“My heart bleeds a little for the underprivileged, yeah,” Kersey replies.

The conversation turns to policing. His colleague suggests the city will need more cops than people to curb rising crime. Kersey is doubtful. “You’ll have to find other options,” he says. “No one could pay the taxes.”

Then comes the crime. Six minutes into the movie, we see our first act of violence. Two minutes later, it’s vandalism, followed by breaking and entering. Then Kersey’s wife is beaten by thugs in her own home, while their adult daughter is sexually assaulted. Kersey’s wife dies from her injuries. His daughter survives but falls into catatonic mutism and ends up institutionalized.

A grieving Kersey finds solace at the office. One night, he tentatively whaps a would-be robber in the head with a sock filled with quarters. Then work calls him to Arizona. There, we learn that he was a conscientious objector in a medical unit during the Korean War, reinforcing his image as a nerdy, mild-mannered, middle-class Manhattan architect. But we also learn he grew up around guns — and he is a crack shot.

The movie then comments on violence and crime. Kersey’s Arizona host argues that New York’s crime wave is no coincidence, given the city’s gun control laws. 

“Muggers operating out here [in Arizona] just plain get their asses blown up,” he tells Kersey. “If you ever get tired of living in that toilet, you’re welcome here.” Then he slips a gift-wrapped case into Kersey’s suitcase: a .32 revolver with ammunition.

Kersey returns to the grim reality of a graffiti-mottled, lawless New York. There is no progress in finding his family’s assailants. The police are overwhelmed, bureaucratic, and uninterested; the two victims are “statistics on a police blotter… and there is nothing we can do to stop it. Nothing but cut and run.”

Shortly thereafter, Kersey shoots and kills a mugger who attempts to rob him. He returns home to vomit — incidentally, much like James Bond in Ian Fleming’s original novels, who didn’t have the casual nonchalance of the Broccoli movies. Minutes later, Kersey opens fire on three muggers, killing two immediately and executing the third as he tries to escape. By the end of the movie, Kersey will have killed a total of ten muggers in self-defense.

From Hollywood to Philosophy

The movie raises a core question: If the police won’t protect us, should we not protect ourselves?

What, really, is “civilized”? Cutting and running, to live among others who have the means to stay safe? Or does a gentleman — or a lady, for that matter — defend himself as an armed citizen, protecting both himself and his community?

It’s notable how the fictionalized NYPD changes its tune once a numbing, overwhelming crime wave sparks vigilante action. Initially, the police are apathetic about the assault on Kersey’s family, assigning only a patrolman to the case. But when it becomes clear that the government’s monopoly on violence is being challenged, the case is bumped up, fast, to an inspector. Think of the leap from a single private to a colonel, complete with a task force of about 20 police officers and detectives.

A crime wave is one thing, but “Murder is no answer to crime in our city; crime is a police responsibility,” complains the Police Commissioner as he pours resources into tracking the vigilante.

I note an important detail. The movie is billed as a vigilante film. I was expecting naqam (biblical vengeance, as taught to me by my late Jesuit mentor). But Kersey doesn’t seek out criminals to execute. In a crime-infested city, it may look like he’s taunting criminals, simply by walking alone after dark. But he is going about his business as anybody would in a functional city, with a functional government and functional police. He just happens to exercise his natural right to self-defense, where the Lockean Commonwealth has failed.

This brings us back to the big question. We’re all uncomfortable with vigilantism. As a good Lockean, I return to Chapter II, Section 13, of the Second Treatise of Government.

To this strange doctrine, viz. That in the state of nature everyone has the executive power of the law of nature, I doubt not but it will be objected, that it is unreasonable for men to be judges in their own cases, that self-love will make men partial to themselves and their friends: and on the other side, that ill nature, passion and revenge will carry them too far in punishing others; and hence nothing but confusion and disorder will follow, and that therefore God hath certainly appointed government to restrain the partiality and violence of men. I easily grant, that civil government is the proper remedy for the inconveniencies of the state of nature, which must certainly be great, where men may be judges in their own case, since it is easy to be imagined, that he who was so unjust as to do his brother an injury, will scarce be so just as to condemn himself for it…

And yet. And yet…

Around the world, long-distance competition, then cellphone competition, replaced state telephone monopolies. Federal Express and UPS have taken much of the package business from an inefficient monopoly postal service. Even Denmark — a country that has traditionally been enamored of state solutions — recently ended the state’s collection and delivery of letters. Bitcoin is increasingly replacing failed fiat currencies. Private, tax-deferred retirement accounts (IRAs) arrived in the US 50 years ago; they have delivered far higher returns than Social Security.

So, why not security?

In America today, private security guards outnumber police two to one. To be sure, the latter enjoy a vast number of monopoly privileges, from use of force and arrest powers to qualified immunity that shields them from liability for actions committed behind the badge.

Returning to political theory, the anarcho-capitalist argues that security can and should be private, as the state can never be neutral and will inevitably serve its own interests. The minarchists and the Hayek/Friedman/Buchanan super-minimalists have crafted strong arguments for the necessity of neutral state enforcement of rights. James M. Buchanan was uncharacteristically blunt: “The libertarian anarchists who dream of markets without states are romantic fools who have read neither Hobbes nor history.”

But what if the state is demonstrably incapable of providing security? This was clearly the case in New York City in the 1970s. Adding insult to injury, the city, at least according to Death Wish, was more interested in protecting its monopoly on force than in providing security.

The half of New Yorkers who didn’t vote for Commissar Mamdani don’t deserve the hell he is about to unleash on them. As to the 50 percent of economically illiterate, naïve, and rapacious New Yorkers who voted for socialism, do they deserve what they asked for? Was H.L. Mencken right when he opined that “democracy is the theory that the common people know what they want, and deserve to get it good and hard”? Or can we hope for forgiveness, for they know not what they do?

I hope I’m wrong. I really do. I wish all the best for New York City. I fervently hope Mayor Mamdani’s policies are squarely thwarted by Albany. But I don’t think I am wrong. And this invites a final question. 

Who will replace Charles Bronson in the next round of Death Wish movies?