A surveillance state is being erected around the American public at an alarming rate. In many urban and suburban settings, anyone traveling on public streets or sidewalks will have his image captured by the ubiquitous surveillance cameras. A leisurely stroll around the neighborhood, as well as any conversation along the way, might be recorded if the city uses surveillance-enabled street lights. Even our own front yards might not be safe from the prying eyes of the state if a neighbor has a “smart” doorbell that shares data with law enforcement.

Rural areas are not exempt from this intrusion. Automatic License Plate Reader (ALPR) cameras, often contracted under the brand name FLOCK, are being placed on rural highways and along county lines in a growing number of areas. Audio and video surveillance now cover remote corners of the Amazon Basin. Satellite technology could ensure that, one day, no square foot of the planet goes unobserved.

The power of the modern surveillance state is without historical precedent. The argument that “there is no expectation of privacy in public” no longer adequately addresses the huge quantities of data that the surveillance apparatus captures, stores, and analyzes.

While civil rights organizations and other niche groups are sounding the alarm about the dangers of Big Brother, critics are surprisingly underrepresented in popular news outlets. When citizen surveillance is covered at all, the technology is often portrayed as a benign solution to a dangerous problem, with the threats to civil liberties receiving a brief nod, if they are mentioned at all.

So why does the average citizen not show greater concern over these intrusions upon his civil liberties, in some cases even championing them? One answer is that these systems are a Trojan Horse. They are dressed up as a gift that will protect society from all that it fears, but it is the gift itself that poses the greatest threat.

The use of fear to gain power is a tale as old as time, but our unprecedented access to information has not made us any less vulnerable to it. Each decade of modern life has brought its own moral panic with an accompanying “solution.” From the Satanic Panic to the War on Drugs, fear has driven the steady relinquishment of our individual rights.

The justification for the modern surveillance state began on September 11, 2001. The fear inspired by those terrible events was the foundation for the unconstitutional provisions of the PATRIOT Act, the advent of real-time crime centers, and the birth of the TSA. Public fear of terrorism enabled the government to impose security measures that would never have been tolerated in the absence of a crisis.

With greater public acceptance of an increasingly Orwellian environment, expanding surveillance from the airport into the streets required only amplifying stories of gang warfare, a problem portrayed as solvable only with the rampant use of cameras. Divisive political rhetoric over illegal immigration has further fanned the flames as border walls are replaced by technological solutions. 

As violations to privacy have become normalized, the powers-that-be are now promoting these technologies as an answer to non-violent crimes such as littering and traffic violations. Government programs also seek to use ALPRs to micromanage the behavior of travelers while also reaping revenue in the name of protecting the planet from climate change. 

While fears of gangs or litterbugs or drunk drivers loom large in our collective imaginations, other legitimate fears are woefully underrepresented in the public discourse. A government wielding these technologies is a threat to the privacy, and indeed the lives, of the citizens living under it. The media may ignore this threat — unprecedented in scale, yet rich in global and historical parallels — but we deny this very real risk at our own peril.

Some of the same technologies being erected in American communities are already used to oppress Chinese citizens through social credit systems and ethnic cleansing. Journalists and political dissidents who expose the corruption of government authorities are denied access to basic necessities, decent housing, and travel. In more authoritarian countries such as Myanmar, opponents of the ruling regime have been tracked down and executed using facial recognition.

One might argue that these countries do not enjoy the constitutional protections afforded to Americans, but it is dangerous to place such extraordinary power in the hands of even limited governments. The track record of abuse and encroachment, such as civil asset forfeiture, is evidence that the surveillance state in America could be abused, and that the courts might well shield that abuse. Even when real crimes are detected by AI, the “evidence” police rely on routinely turns out to be wrong.

Collecting large quantities of data without proper consent or constraint is another danger of widespread surveillance. The comings and goings of average Americans are logged in databases and analyzed with little to no regulation. This poses a threat not only from agents of the state but also from the corporations that hold the data. Individuals risk having their data compromised by security breaches without ever having made an informed decision to provide that data in the first place. And if an individual is targeted, state actors can reconstruct months or years of his life in search of a crime with which to charge him.

It is time for an honest conversation about surveillance. Weighing some kinds of fear heavily has contributed to our loss of liberties, while other legitimate fears, like the consequences of allowing government intrusion into the private lives of every citizen, have been outright ignored.

The most important thing a book on the economic history of the world must emphasize, again and again, is the incredible, unbelievable, unrivaled improvement in the standard of living since preindustrial times. Economics terms like “growth,” “wealth,” or “division of labor” just never seem to do that shift justice.

Up until fairly recently, almost everyone in every society of every civilizational age in every part of the world tilled the earth for a meager, grain-dominated subsistence living. Little variety, poor hygiene, none of the technical and material comforts we take for granted — death and suffering always within terrifying proximity.

Today, most people live in urban areas and make a living assisting their fellow humans — often humans very, very far away. They create “services” instead of growing crops, all without actually neglecting the land. Peruse some of the commodities at Our World in Data’s agricultural production page: almost without exception, they move up and to the right. The story of the modern world is indeed “more.”

Very few innovations or revolutions in the history of mankind — perhaps the internet or the internal combustion engine, or the expanded franchise or the contraceptive pill in the social sphere — can even rival the extreme human revolution, over the relatively short arc of the last 175 years. 

In One from the Many: The Global Economy Since 1850, Christopher Meissner, UC Davis economics professor and longtime scholar of international and financial economics, takes us on a journey through a century and a half of international trade. We’re treated to a slightly different flavor of that exact Great Enrichment point. A few stunning charts reproduced in the opening of the book tell precisely that story from a global trade point of view: nothing-nothing-nothing-hockey-stick-up.

It’s outlined like a textbook (lecture notes?) for a class in international economics: we get chapters on the North Atlantic trade, on the classical gold standard, on the great migrations of the nineteenth century, on the Great Depression, and on the Bretton Woods regime. 

Trade was once restricted to a small number of high-value-to-weight products. But humans have progressively learned to trade even very low-value-to-weight products (e.g., wheat or water). Again, the nineteenth century was a watershed, thanks to new technologies.

Out of many, our global economy became one.

In a sense, it’s an investigation of globalization over the long haul. The author’s thesis is pretty clear: despite recent backlash and wobbles, globalization is inevitable and unstoppable. While it ebbs and flows over the centuries, the Great Enrichment in no small part owes its existence to the global division of labor, and the massive reduction in the cost of trading and shipping goods and services across the world.

With a full chapter on the gold standard, there’s plenty of room for focusing on money and monetary regimes. That’s not surprising coming from a scholar who made his reputation in assessing various gold standards — the classical, the interwar one, the Bretton Woods and European Exchange Rate Mechanism fixed-exchange rates. (It’s also welcome, since my bias, like that of many who study money, is to think monetary conditions rule the roost.)

Given the importance of a gold-based monetary order for most of this timeline, it is if anything a missed opportunity not to write more about the monetary qualities of gold, and about the extent to which that cherished monetary order contributed to the world’s rising standards of living.

While I might quibble with the details here and there — explanations for the Great Depression, central banks as rescuing firefighters rather than arsonists, what makes a gold standard functional — Meissner is excellent on the many critical epochs of the last two centuries. If you knew nothing of these themes, this is as good and accessible an entry as any. 

I much appreciate Meissner rudely discarding common mythical beliefs, e.g., in the Marshall Plan — which “did not rebuild enough infrastructure to matter for the European economy.” The war, he points out, ended in 1945, and by the time that (comparatively meager) Marshall Plan money showed up in 1947, most bridges and infrastructure had already been repaired. The European growth miracle had already started.

In this, as in so many other observations, he’s balanced: he competently invokes research or counterfactuals to assess popular claims in history and historiography alike. He contrasts the acclaimed Golden Age of growth between roughly World War II and the breakdown of Bretton Woods in the early 1970s with the unsatisfactory development of China, India, or most of Latin America — whose below-trend or average growth impressed nobody. It couldn’t, therefore, only have been technological transfer and catch-up. The candidate explanations we’re served are plausible but too vague to investigate: “conscientious policymakers,” political harmony, or growth as a defense against the Soviet temptation.

What I can’t grasp, and what leaves a bad taste after what is otherwise a pretty enjoyable and enriching book, is what happens in the very last few pages: the left-wing, globalist, intellectual biases reassert themselves.

The epilogue is altogether a reminder not to situate your long historical work in the fleeting political moment. Written in the fall of 2023, some six months before the book’s publication, it reads (from the point of view of early 2025) as straight-up comical. Meissner praises international institutions, celebrates the ousting of Donald Trump, and welcomes the rebuilding of the “long-standing alliances” and “global engagement” he torpedoed.

Oops. One takeaway is the bittersweet humility that the wailing, intellectual classes still haven’t embraced. Meissner, like so many academics and intellectuals, defaulted to treating Trump 1.0 as an outlier period to be purged from their memories. Returning him to the presidency wasn’t a remote possibility even in their wildest nightmares.

Worse, having spent a career studying, among other things, global trade, monetary regimes, and how the various gold standards operated, to end your first stand-alone book — with Oxford University Press! — by spouting platitudes about global cooperation and the threat of climate change makes you sound just like another member of the tone-deaf, ivory-tower, liberal intelligentsia.

Aside from all such political considerations, if climate change didn’t merit a role for 299 of the book’s 301 pages of serious economic history, why must it show up on the penultimate page? Any editor worth his or her salt should have removed it. One from the Many is a nice addition to global economic history, but it’s a shame that it concludes by reinforcing exactly how reality-detached our otherwise excellent academics can be.

It took the world — and stock markets — a while to grasp that Trump’s tariffs aren’t primarily intended to achieve reciprocal tariff parity. Rather, they focus absurdly on rectifying individual trade deficits with specific countries. Notably, these tariffs target only imbalances in goods, conveniently overlooking America’s substantial surplus in services.

Examining the rhetoric of Trump and prominent advocates like Navarro and Lutnick reveals a primary objective beyond revenue generation: returning industrial jobs to the US, almost irrespective of the economic consequences. Steel mills, auto plants, and oil fields symbolize an idealized, nostalgic vision of industrial America.

This vision is rooted in an idea long recognized by scholars of populism: producerism. Found across various populist movements globally, producerism centers on the belief that the working middle class is the true backbone of economic and moral strength, supporting both the parasitic elites above and the welfare-dependent poor below. A closer look at who qualifies as the ideal working middle class reveals that producerism splits into two distinct strands: Decentralized Producerism and Dirty Hands Producerism.

Decentralized Producerism: The Jeffersonian Ideal

Decentralized producerism has deep roots in American political culture. Thomas Jefferson envisioned America as a nation of self-reliant farmers, skeptical of industrialization but open to free trade if it complemented agrarian life. In an 1812 letter to John Adams, Jefferson expressed that every family should ideally function as “a manufactory within itself,” relying on external production only for finer goods.

This form of producerism emphasizes small-scale production and promotes self-sufficiency. The dignity of labor arises primarily from local autonomy and independence from state control, rather than from any particular mode of production.

The People’s Party — America’s first significant populist movement — embodied this ethos. Historian Lawrence Goodwyn described it as a grassroots democratic movement aimed at limiting corporate power. These populists weren’t against capitalism; they supported free trade while opposing monopolies and cartels threatening independent producers.

Later, thinkers like Wilhelm Röpke, inspired by Ortega y Gasset’s The Revolt of the Masses, championed an independent middle class — artisans, small traders, and farmers — as a necessary balance to state and corporate dominance. Röpke promoted decentralized capitalism with small, diverse, locally embedded enterprises operating freely in competitive markets.

Dirty Hands Producerism: Smokestacks and State Power

By contrast, Dirty Hands Producerism emphasizes manual labor’s dignity in large-scale industrial settings — steel mills, auto plants, and oil rigs. It romanticizes workers whose jobs involve physically demanding, ideally dirty, work.

Mid-twentieth-century populists like George Wallace championed this version. He praised the “steelworker, the rubber worker, the textile worker” and lambasted the “over-educated ivory-tower folks with pointed heads” who, he claimed, had lost touch with real American values.

This form of producerism aligns easily with mercantilism – the idea that national strength depends on producing more and consuming less. It portrays centralized industry as virtuous and essential, justifying state interventions such as subsidies and tariffs to protect domestic production. Whereas decentralized producerism strives to keep production free from government interference, dirty-hands producerism insists on active state involvement to preserve industrial jobs, even at significant economic, social, and political costs.

April 2: The High Cost of “Bring Industry Jobs Home” Policies

The recent tariff expansion announced on April 2 represents the culmination of dirty-hands producerism combined with MAGA nationalism and superficial economic reasoning. The focus on industrial jobs might carry emotional appeal, yet its economic merits are deeply questionable.

As The Economist has pointed out, it’s far from clear that operating industrial robots is inherently more fulfilling than preparing cappuccinos. Data from the Bureau of Labor Statistics indicates that many service-sector jobs — when adjusted for comparable education and skill levels — offer equal or superior pay, benefits, job security, and workplace safety compared to traditional blue-collar manufacturing roles.

Meanwhile, the costs associated with protectionist policies designed to “bring industry jobs home” are tangible and significant, especially for the independent middle class whom producerism claims to champion. Entrepreneurs dependent on imports or integrated global supply chains are now confronting higher input costs and market disruptions. They often become collateral damage in a conflict driven by nostalgia for industrial labor and mercantilist, zero-sum economic thinking.

Producerism’s Double Edge

Producerism identifies a legitimate issue: the traditional working and middle classes are underrepresented politically, culturally, and economically. Powerful elites benefit disproportionately from expanding federal authority, harming traditional, self-reliant producers.

However, only decentralized producerism effectively addresses these imbalances within a free-market context. It promotes local autonomy, counters corporatism, and restrains bureaucratic state power. Dirty Hands Producerism, meanwhile, provides an emotionally compelling narrative — but risks strengthening state-corporate collusion rather than diminishing it.

The true test isn’t whether a job involves steel, software, or cappuccinos, but whether it thrives due to genuine market demand rather than government intervention. Similarly, the real measure of trade isn’t whether it balances neatly in national accounts, but whether it is balanced through voluntary exchange that benefits both sides. Only then does trade create wealth and effectively limit both market and governmental power.

President Trump is not happy with Federal Reserve Chair Jerome Powell. With the economy likely to slow under the weight of the administration’s tariffs and corresponding uncertainty, the president thinks the Fed should be cutting rates preemptively. Powell, in contrast, prefers a wait-and-see approach, at least partially out of fear that inflation will resurge if the Fed cuts rates too soon.

In a Truth Social post last week, the president wrote that “Powell’s termination cannot come fast enough!”

Figure 1. Trump blasts Powell on Truth Social, April 17, 2025

Powell’s four-year term as chair will end on May 15, 2026. Even then, he could stay on the Board as a governor until January 31, 2028. President Trump had considered getting rid of Powell even sooner, but then more recently said he had ‘no intention’ of firing the chair.

The Federal Reserve Act permits the president to remove a governor “for cause.” (Powell would not be able to continue to serve as chair if he were removed as a governor.) It is widely accepted, however, that cause does not include mere policy disputes. That is certainly Powell’s view. When asked whether the president has the power to fire or demote the Fed chair back in November, Powell said it was “Not permitted under the law.” Just last week, he said the Fed’s “independence is a matter of law.”

President Trump disagrees. “If I want him out, he’ll be out of there real fast, believe me,” he said last week.

Earlier this year, then-Vice Chair for Supervision Michael Barr opted to step down before President Trump could attempt to fire or demote him, which seemed likely. Barr did not believe the president had the authority to do so, but did not “want to spend the next couple of years fighting about that” in court and thought “it would be a serious distraction from” the Fed’s “ability to serve our mission.”

Whether intentional or not, Barr’s decision has almost certainly improved the odds that Powell will serve out his term as chair. Since Barr was very unpopular among Republican lawmakers, President Trump would not have experienced much opposition from the home team for firing or demoting him. And, if Barr had fought the decision in court and lost, Trump could point to the precedent when dealing with Powell. By stepping down, Barr prevented such a precedent from being established.

Unlike Barr, Powell is very popular among Republican lawmakers. That puts the president in a much more difficult position. If he moves to fire or demote Powell, he will face opposition from some Republican lawmakers — and, since the decision might be overturned by the courts anyway, it could be all cost and no benefit for the president.

When asked on Tuesday, President Trump said he has “no intention of firing” Powell. When reminded that, just a few days prior, National Economic Council Director Kevin Hassett indicated the president and people in the White House were studying the issue, President Trump denied that he had any plans to oust Powell. “None whatsoever. Never did,” the president said.

The press runs away with things. No, I have no intention of firing him. I would like to see him be a little more active in terms of his idea to lower interest rates. This is a perfect time to lower interest rates. If he doesn’t, is it the end? No, it’s not. But it would be good timing. It could have taken place earlier. But, no, I have no intention to fire him.

That would seem to put an end to the question.

If President Trump does not intend to fire Powell, how will he respond in the likely event that the Fed continues to delay cutting its federal funds rate target? (The CME Group currently puts the odds of a May rate cut at just 8.3 percent.)

Last year, now-Treasury Secretary Scott Bessent suggested Trump could appoint a “shadow Fed chair” prior to the end of Powell’s term. The shadow Fed chair would initially be appointed as a governor, with a credible commitment from the president that he or she would be elevated to chair once Powell’s term ends. The shadow Fed chair could then make speeches indicating how he or she would conduct policy in the future, which would move expectations — and markets — today.

There are at least three problems with Bessent’s suggestion, however. First, the Federal Open Market Committee conducts policy by majority vote. The Fed chair usually has an outsized voice in the process, but there is no guarantee that the remaining members of the FOMC would go along with the president’s new appointment when the time comes. The most recent Summary of Economic Projections and statements from FOMC members suggest there is broad support for Powell’s wait-and-see approach. If other FOMC members were to publicly oppose the future chair’s stated policy path, it would hamper his or her ability to move expectations as shadow chair.

Second, the president can only appoint a governor when a position becomes available. Barring a resignation or firing, the next opening will come in January 2026, when Adriana Kugler’s partial term ends. (Although she would then be eligible for reappointment, it is difficult to imagine President Trump extending the term of his predecessor’s pick.) Hence, a shadow chair would be left waiting in the wings until January — making any statements before then less credible than they would be if he or she were already on the Board.

Third, the shadow chair scheme risks significantly narrowing the pool of potential applicants. The power and independence of the Fed is part of the position’s appeal. I suspect few of those qualified and interested in the top spot would remain interested if they thought the shenanigans surrounding their appointment would significantly weaken the institution. The Wall Street Journal reports that Kevin Warsh, who is widely believed to be a frontrunner for the position, “has advised against firing Powell and has argued that he should let the Fed chair complete his term without interference.” If a chair-in-waiting were to appear complicit in a scheme to undermine the power and independence of the Fed, it would not merely damage the reputation of the Fed. It would damage the reputation of the chair-in-waiting, as well.

Given the constraints, it is easy to understand why President Trump now says he will not fire the Fed chair and, indeed, never intended to do so. As for appointing a shadow Fed chair, that seems unlikely, too. Most likely, the president will reluctantly let Powell serve out his term as chair while continuing to badger and berate him from the bully pulpit. Whether pressure from the president will be effective, ineffective, or counter-effective remains to be seen.

President Donald J. Trump recently signed an Executive Order directing the Secretary of Energy to rescind certain restrictions on water pressure established by his predecessors. 

As the White House put it, the president was ending the “Obama-Biden war on water pressure and [making] America’s showers great again.”

This isn’t the final salvo in the decades-long Appliance Wars — nor did the order accomplish what many on social media claim.

I first encountered the Appliance Wars in the 1990s, courtesy of my favorite TV show, Seinfeld. In a memorable episode, Kramer, Jerry, and Newman are all visibly irked (and unkempt) as they wrestle with the newly mandated “low-flow” showers.

“There’s no pressure; I can’t get the shampoo out of my hair!” Kramer laments. “If I don’t have a good shower, I am not myself. I feel weak and ineffectual; I’m not Kramer.”

The scene comes from “The Shower Head,” Season 7, Episode 15, which aired in 1996. I didn’t catch it until a few years later while in college, but even then the episode felt fresh, edgy, and smart. 

What I didn’t know was that the Appliance Wars had already been raging for decades.

The Appliance Wars

On December 22, 1975, President Gerald Ford signed into law the Energy Policy and Conservation Act, which granted the president powers over energy exports. The law included regulatory power over household appliances to increase energy efficiency.

The legislation was a response to the 1970s oil crisis, an event that was exacerbated by price controls imposed by President Richard Nixon. The first energy efficiency regulations under the EPCA focused primarily on items like refrigerators, air conditioners, and water heaters, but over time, the scope of these regulations expanded and became more stringent.

In 1992, the Energy Policy Act amended the EPCA to require stricter efficiency standards for appliances, including water efficiency standards and a rule limiting showerheads to a flow of 2.5 gallons of water per minute.

The federal government’s attempt to save the planet by regulating showerheads seemed common sense to some and absurd to others. For writers at Seinfeld, it was clearly the latter. 

Yet the low-flow showers that Seinfeld mocked were not stringent enough for some. 

In 2010, the Obama Administration reduced the maximum flow of showerheads to 2.0 gallons per minute. Some states have gone further. California, for example, limits the maximum flow rate for showerheads to 1.8 gallons per minute (and 1.2 GPM for bath faucets).

Twitter and media were abuzz last week that Trump had “made showers great again,” but his executive order didn’t scrap the federal rule, something the White House’s own statement confirms.

“President Trump is restoring sanity to at least one small part of the federal regulations, returning to the straightforward meaning of ‘showerhead’ from the 1992 energy law, which sets a simple 2.5-gallons-per-minute standard for showers,” the press release stated.

The executive order reversed a complicated Biden rule — it was 13,000 words, according to the White House — on the definition of the word “showerhead.” What it did not do was repeal the 1992 regulation.

‘If Washington Can Regulate Showerheads’

The Trump administration is taking a victory lap for “Making Showers Great Again,” but the federal regulation that inspired “The Shower Head” is still in place — it’s just slightly less stringent than the 2-gallon per minute rule initiated during the Obama presidency. (To be fair, the 2.5 limit is written into the US Code, which cannot be changed with the stroke of a pen.) 

The Seinfeld episode ended with Kramer buying “hot” showerheads off the black market. It captured the absurdity of attempting to conserve resources in a top-down fashion. As Kramer pointed out, he couldn’t get clean with the new showerheads, which resulted in him taking longer showers.

Longer showers are indeed a consequence of lower-flow showerheads, but these are the kind of practical consequences that rule-making bureaucrats rarely consider. We’re supposed to take it on faith that federal regulators know the optimal amount of water each individual requires to live, wash, and flush. They don’t, of course.

The Showerhead Wars are funny because they are a Kafkaesque absurdity. The wars lay bare the stupidity of a soulless bureaucracy that can spend 13,000 words defining the term “showerhead” to make our lives less enjoyable and efficient. 

The joke is ultimately on us. Because if Washington can regulate your showerhead, it can regulate anything — and that’s the problem.

We just made it through another tax season. Congress has begun debating whether and how to extend the Trump tax cuts from the 2017 Tax Cuts and Jobs Act. While many elements of that tax debate are worth commenting on, I want to highlight the standard deduction because it sheds light on an underappreciated part of American philanthropy.

Prior to the introduction of the federal income tax in 1913, charitable donations did not have meaningful tax deduction benefits. Yet Americans gave generously. In fact, if anything, American philanthropy has declined due to the Scrooge effect of the welfare state. “Are there no [state-funded] prisons [or work-houses]?” Ebenezer Scrooge asks in Charles Dickens’ A Christmas Carol.

The questions reveal that Scrooge (and others like him) feel that the higher taxes they pay to fund a variety of social and “poverty-reduction” programs take the place of direct philanthropic giving. Americans also keep a lot less of what they earn today than they did a hundred or a hundred and fifty years ago—as most of us know from recent personal experience.

The case that welfare programs crowd out charity has been made eloquently by Marvin Olasky in The Tragedy of American Compassion. Various religious and fraternal orders provided health insurance, old-age insurance, and other social services to their members throughout the nineteenth and into the twentieth century. These services were later replaced by state unemployment benefit schemes, Social Security, Medicare, and Medicaid.

These government programs “crowded out” charitable, philanthropic civil society—contributing to the problems of declining social capital elaborated by Robert Nisbet (Quest for Community) and Robert Putnam (Bowling Alone). Government agencies and government checks replaced civic networks and systems of support. Yet American philanthropy is still alive and kicking.

The Lilly Family School of Philanthropy at Indiana University estimates that Americans gave $557.16 billion to charity in 2023. That’s about $1,600 per capita. By comparison, Canadians gave about $400 per capita to charity and Brits gave about $250 per capita. Even as a percentage of GDP, the U.S. ranks well above European countries. According to one source, the U.S. is one of the most charitable countries in the world.

What’s remarkable is that the vast majority of Americans who give to charity receive no federal tax benefit from doing so. Returning to the standard deduction: when you file your taxes, you can either claim the standard deduction ($14,600 for an individual, or $29,200 for a couple) or you can itemize your deductions. A few kinds of expenses can count toward itemized deductions, but these expenses are highly qualified and don’t add up to much for the average person.

From a benefit standpoint, your qualified expenses, including your charitable giving, must add up to more than the standard deduction before you receive any tax advantage. Suppose a couple takes the entire $10,000 state and local tax (SALT) deduction and comes up with $5,000 more in other qualified expenses. They would still be $14,200 short of the $29,200 standard deduction for a couple. This means that their charitable giving, up to $14,200, yields no benefit on their federal taxes.

Seventy percent of American households earn less than $127,000 before taxes. So $14,200 would mean donating more than ten percent of their pre-tax earnings before they saw any advantage from the giving being “tax-deductible.” For most Americans, the “tax-deductible” element of charitable giving is practically irrelevant. Yet they give anyway.

Most Americans donate money even though they receive no federal tax benefit. Americans gave generously long before the income tax and the charitable tax deduction existed. A large industry of lawyers and accountants has cropped up to help wealthy people lower their tax liabilities through various forms of charitable giving. Sometimes these methods lead to creative accounting and legal gymnastics that can distort or divert people’s choices of how to use their wealth.

These observations provide a few reasons to want an alternative to our federal tax code 501(c)(3) structure. We should ask whether society would be freer in a world without tax exemptions for charitable giving—a world without the stark for-profit/nonprofit legal divide with all its attendant reporting and hoops. Tax code rules that put their thumb on the scale represent social engineering of the kind free people should reject.

Most Americans give generously without thought of return—even with a large welfare state and high taxes. There is something deeply admirable about this kind of generosity that gives without expecting any material benefit in return. Imagine how they would give if the welfare state were trimmed down and their taxes were lower. That’s what George W. Bush’s “compassionate conservatism” should have meant.

The school choice revolution just scored its most historic victory yet. The Texas House passed Senate Bill 2 by a decisive vote of 86 to 63, following the Texas Senate’s approval by a 19 to 12 margin.  

Texas Senate leadership announced Friday that the chamber plans to concur next week with the version of the bill passed by the House. Shortly afterwards, Governor Greg Abbott announced that he is “ready to sign this bill into law.” 

This isn’t just a win for Texas families — it’s the biggest day-one school choice initiative in US history, launching a $1 billion Education Savings Account (ESA) program for 100,000 students. The initiative provides about $10,000 per child for private school tuition or other educational expenses, with more funding for students with disabilities. Homeschool families would receive $2,000 per student per year for approved education expenses.  

Texas is the sixteenth state to pass universal school choice since 2021, cementing red states as the vanguard of parental rights in education. 

Texas’s journey to this moment was fraught with resistance. In 2023, 21 Texas House Republicans joined all Democrats to sink Governor Greg Abbott’s school choice proposal. But the political winds have shifted dramatically. After the 2024 primaries, only seven of those Republicans remained in office, thanks to Abbott’s relentless campaign to oust anti-school-choice incumbents.

On Thursday, six of those seven holdouts flipped, voting in favor of universal school choice, signaling a seismic realignment in the Texas House. This turnaround underscores the growing clout of parents and the electoral peril of standing in their way. 

The spark for this parent-led revolution came from an unlikely source: Randi Weingarten and the teachers’ unions. By fighting to keep schools shuttered during the COVID era, they gave parents a front-row seat to the Marxist critical race theory and gender ideology infiltrating public school curricula.  

Outraged and galvanized, parents became a political juggernaut, demanding control over their children’s education. Their influence fueled Donald Trump’s landslide victory in November 2024, driven by a nine-point lead among parents — a 15-point shift from 2020, when they favored Joe Biden by 6 points.  

The result is staggering: about 40 percent of America’s school-age population now lives in states with universal school choice policies — a meteoric rise from zero percent in 2021. 

Red states like Texas, Florida, Arizona, and Iowa are setting the standard for parental empowerment, recognizing that families, not bureaucrats, know what’s best for their kids. Florida, once a swing state, shows how school choice reshapes politics. In 2018, Ron DeSantis narrowly won his first gubernatorial election because school choice moms rallied behind him after his Democrat opponent, Andrew Gillum, vowed to dismantle the state’s scholarship program.  

Those parents tipped the scales, and today, Florida Republicans boast supermajorities in both the House and Senate. School choice isn’t just the right thing to do — it’s a political winner for Republicans, helping them make inroads with voters who might otherwise lean Democrat. Families desperate for better education options become single-issue voters, rewriting the political playbook. 

Meanwhile, blue states are doubling down on policies that alienate parents, ignoring the mandate from Trump’s parent-driven victory. In Colorado, Democrats passed House Bill 25-1312, classifying “misgendering” your own child as child abuse, potentially ripping children away from parents who refuse to affirm the delusions of a small child.

In Illinois, Democrats eliminated the state’s modest school choice program in 2023 and are now targeting homeschooling freedom with House Bill 2827. This bill, which advanced out of committee on a party-line vote last month, would force homeschooling families to file annual declarations, disclose detailed personal information about their children, and submit to portfolio reviews by public school officials, with truancy charges or misdemeanor penalties for non-compliance.  

These policies aren’t just out of touch — they’re a direct assault on parents’ rights to direct their children’s upbringing. 

The political consequences of ignoring parents are clear. In Virginia’s 2021 gubernatorial race, former Governor Terry McAuliffe handed victory to Glenn Youngkin by dismissing parental concerns, infamously stating, “I don’t think parents should be telling schools what they should teach.” That misstep ignited a parent-led backlash, proving parental rights are a third rail in politics. Democrats better learn this lesson soon if they want to stay in office.  

School choice enjoys overwhelming bipartisan support — 71 percent of voters back it, including 80 percent of Republicans and 69 percent of independents. Even Democrats privately concede it’s a winning issue, but their loyalty to teachers’ unions keeps them tethered to a losing strategy.

School choice is more than better education — it’s a pathway for Republicans to expand their majorities by appealing to diverse voters. When families see their kids thriving in schools that align with their values, they don’t just vote — they mobilize. In Arizona, universal school choice passed in 2022 and has become a cornerstone of family empowerment. Once parents gain the power to choose, they fight like hell to keep it, and politicians who try to claw it back face political consequences.  

The teachers’ unions thought they could hold education hostage, but they’ve awakened a sleeping giant. Parents are now a more potent voting bloc than union bosses, reshaping the political landscape.  

Texas’s Senate Bill 2 marks a national turning point, showing that empowering parents is both good policy and smart politics. Republicans are building coalitions across racial, economic, and geographic lines, as Texas’s shift from 21 Republican holdouts in 2023 to a pro-school choice majority in 2025 demonstrates. Democrats in blue states are running out of time to adapt. The longer they cling to policies like Colorado’s HB 25-1312 or Illinois’s HB 2827, the more they risk political suicide. 

The parent revolution is here to stay, and red states are leading the way. As Texas joins the ranks of school choice pioneers, the message to union-controlled politicians is clear: empower parents or prepare to lose.  

The days of top-down control over education are numbered. Families are taking back their power, and they won’t give up without a fight.

In times of rising debt and fiscal strain, unconventional ideas occasionally surface as ways to manage the US government’s borrowing obligations. Few have forgotten the trillion-dollar platinum coin scheme a few years back. A recent suggestion, associated with Stephen Miran’s A User’s Guide to Restructuring the Global Trading System (a.k.a. The Mar-a-Lago Accord) involves forcing or pressuring holders of US Treasury securities to exchange their current bonds—many with short- or medium-term maturities—for 100-year bonds carrying lower interest rates. 

On the surface, the plan seems attractive: it could reduce short-term debt servicing costs and push out repayment far into the future. However, viewed through legal, financial, and market lenses, the plan is a nonstarter—at best unrealistic, and if pursued, potentially disastrous. 

Below are seven key reasons why such a strategy would be unworkable and harmful to the credibility of the US government and the functioning of global financial markets.

1. It represents a violation of the contractual terms

Treasury securities are formal contracts between the US government and investors. They specify the amount borrowed, the coupon rate, and the repayment date. Investors buy these securities with the legally binding expectation that the terms will be honored. A forced conversion into 100-year bonds—particularly those with lower yields—would represent a breach of contract. This would likely result in a wave of legal challenges in US courts and could be interpreted as a selective default by credit rating agencies. More broadly, it would send a chilling message to investors that the US government cannot be relied upon to meet its obligations under previously agreed terms. That reputational damage would have lasting consequences for the government’s ability to borrow in the future.


2. Dumping or another form of retaliation is likely

Foreign governments and central banks are among the holders of US Treasury securities, holding trillions of dollars’ worth as part of their currency reserves and financial stabilization strategies. If these entities were forced to exchange their existing holdings for ultra-long-term, lower-yielding bonds, they might interpret it as an act of bad faith or even financial expropriation. In response, some could retaliate economically or strategically, but most would likely begin to liquidate their Treasury holdings—either to avoid further exposure or as a form of protest. A coordinated or large-scale selloff by foreign holders would depress bond prices, push yields higher, and potentially weaken the dollar. The resulting financial instability would erode the US government’s long-standing position as the issuer of the world’s reserve currency.

3. There are considerable legal and political obstacles

Any plan to convert existing Treasury debt into 100-year bonds would encounter immense legal and political resistance. Congress would likely need to pass enabling legislation, and bipartisan opposition would be fierce, likely citing both the Contract Clause and the Takings Clause. Lawmakers across the ideological spectrum would view the measure as a direct threat to the full faith and credit of the United States. Moreover, contract law strongly protects the rights of bondholders, and retroactively changing debt terms would almost certainly be challenged in court. The only conceivable workaround—invoking emergency executive powers—would trigger a constitutional crisis and further erode domestic and international trust in US governance. The political fallout would be severe, and the financial markets would respond accordingly.

4. 100-year bonds with low yields are an unlikely and unattractive outcome

From a financial perspective, longer-term bonds carry substantially more risk than shorter-term ones. Investors exposed to longer maturities face greater uncertainty over inflation, interest rates, and creditworthiness. As a result, markets demand higher, not lower, yields for longer-term bonds. Forcing or even encouraging investors to accept lower-yielding 100-year bonds in exchange for their existing securities contradicts this basic principle of finance. The scale of this mismatch is glaring: a 1-year Treasury converted into a 100-year bond represents a 100-fold increase in maturity; even a 30-year bondholder would be tripling their time exposure. Yet the plan proposes that these investors accept lower compensation for that additional risk—a proposition that defies economic logic.

Further complicating matters are the bond market dynamics of duration and convexity. Duration measures how sensitive a bond’s price is to changes in interest rates. A bond with high duration—like a 100-year bond—will see its price fall significantly if interest rates rise even modestly. Convexity, which describes how a bond’s duration changes as interest rates move, becomes more pronounced in ultra-long bonds. While convexity can help slightly in very large interest rate swings, it also introduces greater pricing volatility and uncertainty, making 100-year bonds particularly hard to hedge or model. For many investors—especially those with liability-matching needs or regulatory constraints—this makes such instruments unappealing or outright unmanageable.

Finally, these ultra-long bonds would be less useful as collateral in the banking system. Treasury securities are widely used in repo markets and other secured lending arrangements because of their liquidity and relatively stable pricing. But the longer the maturity, the more volatile the market value—meaning that 100-year bonds would need to be deeply discounted (with a higher “haircut”) when used as collateral. This reduces their effective value in day-to-day financial operations and makes them a poor substitute for the shorter- and medium-term Treasuries currently in wide circulation. Additionally, their low liquidity and lack of historical issuance would make them harder to price and trade efficiently, further diminishing their utility in modern financial systems.

5. It would set a negative precedent and ratchet up moral hazard

The long-term consequences of forcing a debt restructuring would extend beyond the immediate market shock. If the US government sets the precedent that it can change repayment terms unilaterally—even in pursuit of efficiency or cost savings—it opens the door to future manipulations. Investors would begin to price in the risk that terms might change again under future administrations or during future crises. This creates a “moral hazard” problem where the government is seen as an unreliable borrower, ultimately raising borrowing costs and damaging its credit rating. More broadly, such a move could encourage other indebted nations to follow suit, weakening the integrity of sovereign debt markets globally. For a country that issues the world’s reserve currency and whose bonds underpin the global financial system, the risks of setting such a precedent are especially grave.


6. Maturity stretching solves no fiscal ill

Even if the market accepted a swap of shorter-term debt for 100-year bonds—at appropriately higher yields to reflect the vastly longer exposure—such a maneuver would do nothing to resolve the underlying structural fiscal imbalance. It would merely change the timing of repayments, not their scale or structure. The central issue is not the maturity profile of US debt, but the chronic mismatch between government spending and revenue. As long as deficits persist—year after year—the total debt will continue to rise regardless of how it is financed. The debt problem will only be addressed when the deficit problem is resolved. That means aligning federal spending more closely with tax revenues through either fiscal consolidation, revenue increases, or both. Until that occurs, restructuring debt maturities is just a cosmetic change, not a real solution.

7. It could lead to a loss of confidence and market panic

Investor confidence is the cornerstone of stable financial markets, and US Treasury bonds are the global benchmark for low-risk assets precisely because of their reliability and predictability. If the government were to unilaterally alter the terms of its debt—extending maturities and lowering yields—investors would perceive this as a form of financial coercion or soft default. Such a move would spark a massive selloff in Treasury markets, drive up yields across the curve, and destabilize global portfolios that rely on Treasuries as a safe store of value. Broader market volatility would likely follow, including sharp declines in equities and liquidity freezes in credit markets. The ripple effects could extend to emerging market economies, corporate bond markets, and even the real economy through higher borrowing costs.

While the idea of reducing interest costs by converting existing debt into ultra-long, low-yielding bonds might sound like a creative solution to America’s debt challenges, it fails every test of financial realism, legal integrity, and political viability. It would violate contracts, damage the United States’ reputation as a trustworthy borrower, shake global confidence, reduce the usefulness of Treasuries in collateral markets, and set a damaging precedent for fiscal governance. Worse, even if done at market-clearing interest rates, it would not address the structural driver of debt growth: persistent federal deficits. Rather than stabilizing public finances, such a move would almost certainly ignite a full-blown financial crisis. In a world that still (and somehow inexplicably) depends on US debt markets, tampering with that foundation carries more risk than reward.

In 2021, the Biden administration secured $42.5 billion from Congress to extend broadband Internet access to small and ever-shrinking portions of the country that didn’t yet have it. Four years later, that federal program still hasn’t connected one single person to the Internet.  

Elon Musk’s DOGE efforts have so far uncovered tens of billions more in “waste, fraud, and abuse.” For example, the $40 billion USAID budget, DOGE found, is bloated with billions for indefensible bilge — from sex changes in Guatemala to tourism in Egypt. 

Is there anyone in his right mind who would argue that the federal government stimulated the economy by this spending? Or if the money instead had been left in the private sector, it would have hurt the economy? Is it humanly possible to waste other people’s money more thoroughly than the government does?  

Imagine a pickpocket who steals cash from the wallets and purses of unsuspecting shoppers in a mall. Then he goes from store to store and spends the loot. Whether or not he stimulated the mall economy depends on whom you interview — the shopkeepers grateful for the pickpocket’s patronage, or the thief’s dispirited victims who discover they must go home empty-handed.

When we employ our instinctive common sense, especially if we zero in on egregious and inexcusable profligacy, we are drawn to the conclusion that Milton Friedman expressed so well: “Nobody spends somebody else’s money as carefully as he spends his own.” Moreover, robbing Peter to pay Paul makes Paul richer but leaves Peter, at the very least, equally poorer.

But if we adopt a Keynesian “macro” perspective, we will assert that more government spending energizes economic activity, and that less government spending sends the economy into a tailspin. Is it not simply amazing that politicians possess such powers the rest of us do not?! When they spend your money, the magical multiplier kicks in, but when you and I spend our money (or save it in the bank so the bank can spend it), we just don’t get the same bang for the buck. Just think how prosperous we would be if we laundered everything through the government (like they do in poverty-stricken North Korea). 

John Maynard Keynes himself once claimed that if the government simply paid people to dig holes and fill them back in, it could stimulate the economy. It didn’t matter to him what the government spent the money on, so long as it was the government doing the spending. In any event, he flippantly declared, “In the long run we are all dead.”

If DOGE ends up cutting federal expenditures by the trillion dollars or more that Musk has promised, expect every unrepentant Keynesian to warn of dire consequences. It would be the same wrong-headed thinking that led Keynesians in the 1940s to predict another depression when World War II ended.  

If another depression is in our future, it will not occur because government spends less. When it did spend less — decisively less — after World War II, depression didn’t materialize. Just the opposite. 

Under the influence of the Keynesian consensus, a committee chaired by New York Senator James Mead issued a report in 1945. It argued that with the imminent end of the war, “the United States would find itself largely unprepared to overcome unemployment on a large scale.” Even President Harry Truman, in September of that year, told The New York Times that it was “obvious” that the process of reducing federal employment and spending would yield “a great deal of inevitable unemployment.” Indeed, between June 1945 and June 1946, more than ten million people were lopped off the federal payroll (mostly military), and millions returned from overseas to the US job market, while Keynesians held their breath and expected the worst. 

One of the best assessments of what actually happened, in contrast to the Keynesian forecasts echoed by future Nobel laureates Paul Samuelson and Gunnar Myrdal, is that of economist David Henderson. In a November 2010 paper for the Mercatus Center titled The US Post-War Miracle, Henderson noted, 

In the four years from peak World War II spending in 1944 to 1948, the US government cut spending by $72 billion — a 75-percent reduction. It brought federal spending down from a peak of 44 percent of gross national product (GNP) in 1944 to only 8.9 percent in 1948, a drop of over 35 percentage points of GNP. 

While government spending fell like a stone, federal tax revenues fell only a little, from a peak of $44.4 billion in 1945 to $39.7 billion in 1947 and $41.4 billion in 1948. In other words, from peak to trough, tax revenues fell by only $4.7 billion, or 10.6 percent. Yet, the economy boomed. The unemployment rate, which was artificially low at the end of the war because many millions of workers had been drafted into the US armed services, did increase. But during the years from 1945 to 1948, it reached its peak at only 3.9 percent [italics mine] in 1946, and, for the months from September 1945 to December 1948, the average unemployment rate was only 3.5 percent. 

Let that sink in: Federal spending plummeted by 75 percent. Millions re-entered the private job market. Yet unemployment remained lower than it is today, and the economy took off. Keynesians, with all their vaunted “New Economics” and sophisticated equations, got it dead wrong. Common sense would have served them much better. 

One reason for the unexpected post-war “economic miracle” was the Revenue Act of 1945. Go to the search engine Bing.com. Type in “top corporate income tax rate 1944,” tap “Enter” and voila! The resulting number is staggering: 94 percent. Next, just change one digit in your search terms, from 1944 to 1945. The new figure? 38 percent. The Revenue Act cut marginal tax rates (on both business and personal income) a little, but more importantly, it eliminated surtaxes such as an “excess profits tax” that had driven rates so high. 

You need only common sense, no equations required, to know that there’ll be a whole lot more risk-taking entrepreneurship and business investment when you cut tax rates dramatically.  

Another reason for the boom was the abolition of all price controls. We had plenty of them during the war, but by 1946, they were all gone. Prices were freed to reflect supply and demand conditions in the marketplace, not the arbitrary whims of Congress or bureaucracies. The rationing of consumer goods ended as well. 

Prone to mathematize and oversimplify, Keynesians love to boil an economy down to three main components: Consumption plus Investment plus Government Spending, they claim, equals GNP. C + I + G = Y is the formula we all had to learn from our Keynesian economics profs. Their ideological bias prevented them from understanding that when there’s a lot less G, there’s a lot more C and I. That’s because, ultimately, G has nothing that it doesn’t sooner or later take from C and I. This would be true even if G didn’t waste a penny.

Common sense tells me that the 75 percent reduction in federal spending after the war may have been the most significant contributor to the economic boom. It diverted resources away from blowing things up on the battlefield. Instead, we could now make cars, refrigerators, and an array of consumer goods of which Americans had been deprived for years. At the very least, the Keynesian fear that massive government spending cuts would tank the economy proved to be utterly and embarrassingly unfounded. 

The US boom was no outlier. Once post-war Germany under Ludwig Erhard placed its faith in markets instead of government spending, the world began referring to “the German economic miracle.” Japan experienced a “Japanese economic miracle” for similar reasons. Hong Kong pursued smaller government/free-market policies after the war and awed the world with decades of phenomenal growth. Meantime, under a new socialist government, Britain plunged headlong into an expensive welfare state and became, by the 1970s, the “sick man of Europe.” 

In the 1980s, New Zealand transformed itself from a slow-growth welfare state into a free and vibrant economy. In just two years, it slashed government spending from 60 percent of GDP to 40 percent. Once again, Keynesians expected a bust, but the country got a boom instead. 

Some people besotted with Keynesian hangovers are worried that if DOGE cuts federal spending a lot, the American economy will lose the “stimulus” we somehow get from it all. But considering both common sense and the historical track record, our biggest concern should be that DOGE won’t cut enough.