Cato Op-Eds

Individual Liberty, Free Markets, and Peace

The federal government imposes a mandate to blend ethanol into gasoline. This “Renewable Fuel Standard” harms consumers, damages the economy, and produces negative environmental effects. The mandate has also spawned a bureaucratic trading system in ethanol credits, which the Wall Street Journal reports is bankrupting a refinery in Pennsylvania.

The rubber hits the road with that “10% Ethanol” sticker you see on the pump when you fill your tank. The sticker signifies that the government is imposing a foolish policy on the nation at the behest of a handful of selfish senators, who are bucking the interests of America’s 220 million motorists.

Nick Loris discusses some ethanol basics at DownsizingGovernment.org. And Thomas Landstreet reiterated some of the problems with the mandate in the WSJ the other day:

The corn ethanol mandate was created under the Energy Policy Act of 2005. Two years later, President Bush signed the Energy Independence and Security Act, which expanded the program by providing generous tax credits and subsidies to corn growers and ethanol blenders. It also established ambitious targets, increasing annually, for biofuels in the national fuel mix. The mandate soon diverted 40% of America’s corn crop away from the food supply.

The government-imposed shortage caused corn prices to float from long-term mean levels of about $2 per bushel to more than $8 per bushel in 2012. This extraordinary price surge prompted a range of harmful responses in the farming industry. Farmers planted 17 million new acres of corn at the expense of soybeans, wheat, hay and cotton, driving prices for those crops to all-time highs as well. Cattle farmers, unable to afford corn gluten feed, culled their herds to levels not seen in 60 years, causing beef prices to rise an incredible 60% from 2007 to 2012. Over this five-year period, the IMF food price index rose 42%.

…The country has endured a startling amount of economic disruption for what is clearly an inferior source of energy. Ethanol produces 34% less energy per volume than conventional gasoline, reducing cars’ fuel economy. As for its effect on the environment, a 2010 Congressional Budget Office study found that corn-based ethanol subsidies are terribly inefficient, with the government spending an estimated $754 per metric ton of avoided emissions—an astronomically high price tag compared with other policies. (The economics of climate change literature estimates the “social cost of carbon” at far lower levels, meaning the program is inefficient even on its own terms.)

Moreover, ethanol is too corrosive to be transported through pipelines, so trucks must transport it. Growing corn also requires more water than other crops—and the policy gave farmers an incentive to plant only corn, which depleted the soil of nutrients. A 2008 study in Science found that converting natural environments for biofuel production can produce hundreds of times more carbon emissions than the biofuels themselves would save. No wonder ethanol mandates are losing support among environmentalists.
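To put the CBO’s $754-per-ton figure quoted above in context, here is a back-of-the-envelope comparison in Python. The $40-per-ton social cost of carbon used below is an illustrative assumption on my part (a round number in the range of commonly cited estimates), not a figure from the CBO study or the quoted article.

```python
# Back-of-the-envelope: compare the CBO's estimated cost per ton of avoided
# emissions under the ethanol subsidies with an assumed social cost of carbon.
cbo_cost_per_ton = 754.0  # dollars per metric ton avoided (CBO, 2010)
assumed_scc = 40.0        # dollars per ton; illustrative assumption only

ratio = cbo_cost_per_ton / assumed_scc
print(f"Ethanol subsidies cost ~{ratio:.0f}x the assumed social cost of carbon")
# -> roughly 19x: the policy spends far more per ton than the damage avoided.
```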

The ethanol mandate reduces freedom and costs you money at the gas pump for no reason other than to line the pockets of corn farmers, who already benefit from billions of dollars of federal farm subsidies. The mandate is stupid policy and ought to be repealed.

Nothing about Rex Tillerson’s firing should surprise us, except perhaps its timing. Tillerson was often at odds with his boss in the White House, whether on Russia, Iran, or North Korea. Though he was widely hailed as one of the ‘adults in the room,’ it’s not clear he had much influence at all on Trump’s biggest foreign policy decisions. And he was widely disliked inside his own agency; civil servants at Foggy Bottom hated his insularity and his plans to massively cut the State Department’s budget and diplomatic capacity.

Even the casual cruelty of the firing should not surprise us. Sure, the President fired his Secretary of State via Twitter, while Tillerson was abroad, without apparently offering him any explanation or courtesy phone call. But from the man who fired James Comey, his FBI Director, via television while Comey was on-stage giving a public speech, this was almost polite. 

But while Tillerson’s firing has been expected for some time, it will have big implications. Tillerson may not have had much influence with the President, but he was one of the administration’s more reasonable voices. He apparently had a good relationship with Secretary of Defense James Mattis, acting as a sounding board for ideas, and both men have advocated against some of Trump’s more disastrous foreign policy decisions.

It has always been questionable to what extent these so-called ‘adults in the room’ could actually constrain Trump on foreign policy issues. But with the loss of Tillerson and – last week – of Gary Cohn at the National Economic Council, we will see them replaced by advisors who appear to be trying not to restrain the President’s worst impulses, but instead to indulge them. On tariffs, conflict, and more, things have the potential to get a lot worse.

Mike Pompeo, Trump’s new pick for Secretary of State, will move from the CIA. In that role, he has certainly been more effective than Tillerson in building a relationship with the President. But he has also often adopted highly political stances on policy, advocating strongly for the President to withdraw from the Iranian nuclear deal, and speaking out publicly in favor of Trump’s political and policy decisions far more often than is typical for the Director of the CIA.

Pompeo’s background is in the military, not in diplomacy, and he has little experience in high-level diplomatic negotiations. And given his personal views, Pompeo is likely to strengthen many of the President’s worst instincts: he is extremely hostile towards Iran and the Iranian nuclear deal, he has been hawkish on North Korea, and – where Tillerson took a more balanced approach – he has largely supported Saudi Arabia in the ongoing Gulf Crisis.

His lateral shift from CIA to State Department will also create a secondary controversy. Trump’s choice to replace him is Gina Haspel, a career veteran at the agency, and potentially the first woman to hold the job of CIA Director. She is undoubtedly a better choice than uber-hawk Tom Cotton (R-AR), who was widely expected to get the job.

So far, so good. But Haspel was also heavily involved in the rendition and torture scandals of the mid-2000s, running a secret detention site in Thailand, and she was implicated in the destruction of interrogation tapes. Her nomination will reopen all the old debates about the Bush-era torture programs, and her confirmation hearings are likely to be fraught as a result.

Even Pompeo’s confirmation hearings may produce some difficulties: during hearings for his current job, Pompeo promised to be impartial on the question of the JCPOA. Yet he has been one of the strongest and most active supporters of Trump’s decision to decertify the accord. Congressional Democrats in particular may question why he backed away from his prior promises, and whether they can trust what he says in these hearings.

Tillerson’s firing was predictable, but it opens a whole new set of concerns, from the petty (fraught and difficult confirmation hearings) to the critical (an increasingly hawkish line-up in the White House and a heightened risk of conflict). Rex Tillerson’s tenure as Secretary of State was hardly a success. Unfortunately, what comes after is likely to be worse.

As Anne Fuqua recently pointed out in the Washington Post, non-medical users who obtain heroin and fentanyl in the underground drug market are not the only victims of the opioid crisis. So are the many patients whose only relief from a life sentence of torturing pain comes from prescription opioids. That is because policymakers continue to base their strategies on the misguided and simplistic notion that the opioid overdose crisis affecting the US, Canada, and Europe is tied to doctors prescribing opioids to their patients in pain.

Unfortunately, political leaders and the media operate in an echo chamber, reinforcing the notion that cutting back on doctors prescribing opioids is the key to reducing overdose deaths. As a result, all 50 states operate Prescription Drug Monitoring Programs (PDMPs) that track the prescribing habits of doctors and intimidate them into curtailing the prescription of opioids. Yet multiple studies suggest that PDMPs have no effect on the opioid overdose rate and may be contributing to its increase by driving desperate pain patients to the dangers that await them in the black market.

Last month Arizona joined the list of 24 states that have put in place limits on the amount and dosage of opioids doctors may prescribe to acute and postoperative pain patients. These actions are based on an amateurish misinterpretation of the 2016 opioid guidelines put out by the Centers for Disease Control and Prevention, and they are not evidence-based.

And the Food and Drug Administration continues to promote the replacement of prescription opioids with abuse-deterrent formulations, despite an abundance of evidence showing this policy only serves to drive non-medical users to heroin and fentanyl while raising costs for health systems and patients.

As prescriptions continue to decrease, overdose deaths continue to increase. This is because as non-medical users get reduced access to usable diverted prescription opioids, they migrate to more dangerous fentanyl and heroin.

It is simplistic—and thus provides an easy target—for politicians and the media to latch on to the false narrative that greedy pharmaceutical companies teamed up with lazy, poorly trained doctors to hook innocent patients on opioids and condemn them to a life of drug addiction. But this has never been the case.

As Patrick Michaels pointed out about recrudescent opiophobia back in 2004, prescription opioids actually have a low addictive potential and, when taken by patients under the guidance of a physician, a very low overdose potential. Cochrane systematic reviews in 2010 and 2012 both found an addiction rate of roughly 1 percent in chronic non-cancer pain patients. A January 2018 study in BMJ by researchers at Harvard and Johns Hopkins examined 568,000 opioid-naïve patients prescribed opioids for acute and postoperative pain from 2008 to 2016 and found a total “misuse” rate (all “misuse” diagnostic codes) of just 0.6 percent. And researchers at the University of North Carolina reported in 2016 on 2.2 million residents of the state who were prescribed opioids and found an overdose rate of 0.022 percent.
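To make those percentages concrete, here is a quick sketch (in Python) converting the cited rates into rough absolute counts; the inputs are taken directly from the studies referenced above.

```python
# Convert the cited misuse/overdose rates into rough absolute counts.
bmj_patients = 568_000       # opioid-naive patients, 2008-2016 (BMJ study)
bmj_misuse_rate = 0.006      # 0.6 percent, all "misuse" diagnostic codes

unc_patients = 2_200_000     # North Carolina residents prescribed opioids
unc_overdose_rate = 0.00022  # 0.022 percent overdose rate

print(f"BMJ cohort: ~{bmj_patients * bmj_misuse_rate:,.0f} misuse diagnoses")
print(f"UNC cohort: ~{unc_patients * unc_overdose_rate:,.0f} overdoses")
# -> roughly 3,400 of 568,000 and 480 of 2.2 million, respectively.
```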

Until policymakers disabuse themselves of the false notion that the opioid overdose crisis is a direct result of doctors prescribing opioids to patients in pain, the opioid overdose rate will continue to climb—only the type of opioid from which victims are overdosing will change. We have already seen it move from diverted OxyContin and other prescription opioids to heroin, and from heroin to heroin plus fentanyl. Most recently, fentanyl has become the predominant cause of overdose deaths.

The “war on opioids” being waged by today’s policymakers is, in effect, a “war on patients in pain.” If policymakers are serious about wanting to reduce overdose deaths, they should look to what has been done in Portugal, and now Norway, and end the war on drugs. If they can’t muster the political will to go that far, then they should at least put the focus on harm reduction measures, such as syringe services programs, medication-assisted treatment, and making the overdose antidote naloxone available over-the-counter.

Instead of a war on opioids, they should wage a war on deaths.

Nicholas Buccola—one of the nation’s leading scholars of Frederick Douglass—has a piece in the New York Times blog “The Stone” in which he challenges my classification of Frederick Douglass as a libertarian. Now, as I argued on Ricochet recently, there’s a point at which any such effort at classification is rather silly: it’s more important to understand the substance of what Douglass stood for than to label it. Also, any effort to classify the man as “libertarian” or “conservative” or “progressive” or whatever will depend on how we define those terms—and such definitions are complex and contentious. Another complication is the fact that there are disagreements within these groups. Randy Barnett, for example, pointed out in 2007 that libertarians don’t always agree on the practical application of even the principles they share, even on major controversies. And then there’s the fact that many of those who call themselves libertarians actually aren’t.

On the other hand, the beginning of wisdom is calling things by their right names. And classifying—well, it’s just what scholars do. So how should we label Douglass?

It’s probably best to define our terms by basic principles. What’s distinctive about the libertarian or classical liberal tradition is its overriding emphasis on the rights of the individual, as opposed to the purported “rights” of society or the state. The classical liberal begins with the idea that the individual is fundamentally entitled to freedom—to live his or her life without coercion from others. People create governments to protect themselves against coercion, so that they can lead their lives as they choose—and the government is therefore their servant, not their master. Libertarians apply this principle to both “economic” and “social” matters: people should be as free to run a business as they are to choose their own spouses.

Today’s conservatism and liberalism share some of these views in some ways, but also reject them in others. Conservatives hold that society is something that needs preservation per se—that it has its own just claims to survival and security—and that the individual’s rights can be curtailed to accomplish that. Today’s liberals believe that “social justice” requires the state to intervene and rearrange cultural habits, social patterns, and individual rights in order to accomplish broader economic and social equality. (I’m trying to be generous here.) And, as with libertarians, there’s a lot of debate within these groups, too, both about the merits of these values and how they should be applied.

Of these three, Douglass fits most comfortably by far into the classical liberal or libertarian category. He believed quite clearly that the individual is the sole bearer of rights, and that the government exists to protect those rights. In the messy and complicated aftermath of the Civil War, of course, it was never entirely clear how to apply these principles. But it is clear that he was not what we today would call either conservative or liberal. He did not believe in today’s “social justice” theories—he would have had nothing but scorn for the notions of “privilege theory” or “cultural appropriation” or the idea that inequalities in society are the result of social injustices instead of individual choice. His emphasis on self-reliance, on individual initiative, and on the possibility of personal success in a free society makes that clear. And he was certainly no conservative. He married a white woman in 1884, and was a lifelong feminist.

Buccola objects to my classifying him as libertarian because Douglass came to reject his earlier belief in non-intervention and to hold that the slaves would have been better off if the government had engaged in a program of redistribution and social control. “Douglass certainly believed that it was important to protect individuals from unjust interference,” but, at least later in life, “he did not believe this was sufficient to make human beings free.”

There’s truth to this. But the context matters a lot. Douglass was speaking of people who themselves had actually been enslaved, largely as a result of government intervention. Even the strictest laissez-faire libertarian would have little objection to the government restoring gains that it wrongfully seized to begin with. What Douglass did not believe, however, even late in life, is that government should be in the perpetual business of rearranging society in the service of “social justice.” In 1883, after the Supreme Court gutted the 1875 Civil Rights Act in the Civil Rights Cases, for example, Douglass took to the podium to denounce the decision as a betrayal of the Union cause. And yet, he also made a point of rejecting the idea that the government should devote itself to, in Buccola’s words, “counteracting the power of economic elites.” The government was obligated to protect civil rights in the south, Douglass told the audience—but it should not be in the business of seeking to enforce “social equality.” In other words, government should prohibit discrimination in places of public accommodation—but not violate property rights by forcing people to accept each other as equals on a personal basis. “Equality, social equality, is a matter between individuals. It is a reciprocal understanding.” While he despised racism, he respected the individual rights of racists. (And this speech, too, he saw fit to reprint in his memoirs.)

But there’s a more important point here: Douglass did believe that “freedom as noninterference” wasn’t enough—and libertarians agree with that. Social institutions are critical to enabling people to make the most of their lives. Civil society institutions—charities, scholarly associations, community organizations, social clubs—are all essential in a free society, as every libertarian, from Friedman to Hayek to Rand, has emphasized. The only dispute is whether these institutions should be operated by the government or by private initiative. Libertarians argue—I think persuasively—that they work better, more justly, and more effectively when run privately than when run by the state. And one might argue that the experience of the Freedmen’s Bureau is good proof of that. But the idea that libertarians think noninterference alone is enough is really a simplistic caricature of libertarian thought.

And that opens another layer of complexity. Real life is far messier than the abstractions of any political theory, and particularly in the wake of a catastrophe like the Civil War and the collapse of Reconstruction. The advent of sharecropping and the peonage laws in the south show how racial oppression was produced by an interaction of private prejudice and government interference—which built a chain that could not easily be dissolved by applying the acid of any philosophical ideas in their purest forms. Like all people of good conscience, Douglass struggled with these questions, often torn between the temptation toward government intervention and the fact that respecting people’s freedom means they’ll often make bad choices. Another good example of this is prohibition of alcohol—a proposition Douglass opposed virtually all his life, despite being strongly opposed to drinking. There’s some evidence (I think rather vague) that he came to embrace prohibition late in life, but if so, it was only reluctantly.

I make clear in my book that Douglass wasn’t a “pure” libertarian—if that term means anything. Indeed, his rejection of the “state action doctrine” in his speech on the Civil Rights Cases is quite un-libertarian. But even with these factors considered, I think it’s false to say that Douglass abandoned his belief in “freedom as noninterference.” There is no evidence that he thought government should redistribute wealth indefinitely to accomplish lasting economic equality. He certainly did not believe in anything like a regulatory welfare state. He was a radical individualist who, even when he did think government should intervene, confined that intervention to removing the weights that had been imposed on the freed slaves, so that they could achieve their own individual goals.

Over a public career that lasted a half-century, Douglass took many directions, but the overriding theme of his thought was that all people are created equal, with an inalienable right to their own lives, their own liberties, and the pursuit of their own happiness, without interference from others or obstruction from the state. And I think the best label for that is libertarian.

[Cross-posted from In Defense of Liberty]

At a cost of $100,000, the city of Baltimore plans to provide 60 free buses to take students from its schools to planned anti-gun demonstrations in Washington, D.C. later this month.

Many things could be said against this decision. For instance, it openly breaks with the notional political neutrality of public schools so as to side with some parents’ beliefs against others’. It takes money away from a Baltimore City school system that, though lavishly funded, struggles with unmet basic needs “from malfunctioning furnaces to undrinkable water.” It siphons classroom time from students in desperately underperforming schools.

But there is one more thing to say against it as well: a protest outing that is ardently enabled or even meticulously organized by the authority figures in your life can be like the ninth-grade English course that ruins Macbeth or Moby Dick for you. Writes Lynda C. Lambert in the Baltimore Sun:

Marches are normally “bottom up.” They are formed by people who are not government, usually to protest something that government is doing [or not doing].

Governments do not sponsor marches, unless that government is, say, the government of China or Russia or North Korea, where governments sponsor marches all the time that show how much the people support their governments…

Part of protesting is finding your own way, for your own reasons.

Baltimore government sponsorship of this ride to D.C. demeans our kids and demeans the point of the march. And, even more than that, it demeans the concept that a march is an uprising, a beginning, a statement made by we the people to our government.

Also, on institutional encouragement of the protests, I had a piece in the WSJ last week on the Yale admissions office’s contribution.  

As for the separate question of whether compulsory attendance and truancy laws should be enforced against students for skipping school in a favored cause, I’ll see and raise: don’t enforce those laws against anyone, period.

Our air traffic control (ATC) system is run by the federal government and subject to all the usual bureaucratic failures such as cost overruns, lack of innovation, a stagnant workforce, unstable finances, and ineffective leadership.

The solution to these problems is to privatize the system, as the Canadians have done with their system to great success. The Federal Aviation Administration (FAA) would continue to oversee aviation safety, but ATC operations would be moved into a private, nonprofit, self-funded company.

A recent Washington Post headline reads, “FAA botched $36 billion effort to modernize air traffic system, report says.” The story points to a new report by federal auditors, the latest of many similar reports going back decades.

When will Congress finally say “enough” and pursue an overhaul?

Here are highlights from the WaPo:

The Federal Aviation Administration has mishandled a $36 billion project to modernize the antiquated aviation management system, according to a harshly critical inspector general’s report released Thursday.

It was the fourth inspector general’s critique in as many years of a program known as NextGen, on which more than $7 billion in federal funds has already been spent.

… The report said the FAA “has lacked effective management controls” in awarding contracts, sometimes spent money on low-priority projects and allocated an estimated $370 million for projects that were still awaiting approval.

… NextGen has long been a cause of consternation and frustration in Congress and with commercial airlines that are expected to invest billions of dollars in their own cash to complete the system.

NextGen is often described as a GPS-based system, but it is a vastly more complex network of interlocking systems that will change cockpit communications, guide airplanes both aloft and on the ground, and allow airlines to fly directly to their destinations rather than turning after reaching each designated way point.

… Together they will allow planes to safely fly closer to one another, save fuel and time, get immediate weather updates, and communicate more effectively with other airplanes and with air traffic controllers.

… But the cost of equipping each plane to handle the new systems has been estimated at $200,000. Airlines say they need reassurance that if they invest, the NextGen program will be delivered on schedule.

That led House Republicans, later with the support of President Trump, to propose that the NextGen program and more than 30,000 FAA workers be spun off into an independent, nonprofit corporation.

… There have been 13 confirmed or acting heads of the FAA since the precursor of NextGen was proposed as the Advanced Automation System in 1983.

… “FAA does not have today, and has not had since its inception, anything that would approximate a real plan for achieving a lot of the things it has advertised for the NextGen program,” said an FAA employee familiar with the program, who asked to remain anonymous to speak candidly. “I think the sentiment out there is that NextGen has been a big dud, and it’s hard to disagree with that sentiment if you look at what’s actually been produced.”

More on air traffic control reform here.


The Los Angeles Times reports on the latest setbacks to California’s high-speed passenger rail project. The project is far over budget and way behind schedule. Is anyone surprised?

Randal O’Toole describes the plague of cost overruns in government rail systems here, and he explains why high-speed passenger rail makes little sense here. I discuss the epidemic of cost overruns on government infrastructure projects here.

The LAT says:

The price of the California bullet train project jumped sharply Friday when the state rail authority announced that the cost of connecting Los Angeles to San Francisco would be $77.3 billion and could rise as high as $98.1 billion — an uptick of at least $13 billion from estimates two years ago.

The rail authority also said the earliest trains could operate on a partial system between San Francisco and Bakersfield would be 2029 — four years later than the previous projection. The full system would not begin operating until 2033.

… The new estimates will force California’s leadership to double down on its political and financial commitments if it wants to see the system completed, against a backdrop of rising costs, years of delays, strident litigation and backlashes in communities where homes, businesses, farms and environmental preserves will have to give up land to the rail’s right-of-way.

… The new business plan is based on a wide range of uncertainties, Kelly said. Among the most challenging is the cost of about 36 miles of tunnels through mountainous Southern California, which could range anywhere from $26 billion to $45 billion, according to the report.

… A spokesman for Gov. Jerry Brown, who since the 1980s has championed high-speed rail, said the disclosures do not change the strong support he expressed in his recent State of the State address, when he said: “I make no bones about it. I like trains and I like high-speed trains even better.”

… The disclosure about the higher costs comes nearly a decade after voters approved a $9-billion bond to build a bullet train system. The original idea was that the federal government would pay about a third of what was then an estimated $33-billion project, with private investors covering another third.

When project supporters are admitting that the cost “could be as high as” $98 billion, it obviously will be at least that high in the end. That would be triple the original promised cost, and ten times the amount that voters directly approved.
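The arithmetic behind “triple” and “ten times” is simple, using the figures cited above; a quick check in Python:

```python
# Check the "triple" and "ten times" claims against the cited figures.
high_estimate = 98.1  # $ billions, upper-bound cost announced by the rail authority
original_cost = 33.0  # $ billions, estimated project cost when voters approved it
bond_approved = 9.0   # $ billions, the bond measure voters actually approved

print(f"vs. original estimate:   {high_estimate / original_cost:.1f}x")  # ~3.0x
print(f"vs. voter-approved bond: {high_estimate / bond_approved:.1f}x")  # ~10.9x
```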

But Jerry likes trains, so full steam ahead!

Cass Sunstein has long been a capable and influential critic of individual choice and limited government. Over the past decade, he has argued that the Internet is failing liberal democracy. Left to their own preferences, he says, individuals choose to avoid political views that challenge their prior beliefs. They form filter bubbles that exclude contrary views and echo chambers that polarize debates. Both, he claims, complicate solving national problems.

These alleged filter bubbles and echo chambers consist of expressing and hearing (or reading) speech, both highly protected activities in the United States (or in any polity deserving the name liberal). The harms of filter bubbles and echo chambers need to be much more than alleged to justify government action to “improve” our debates.

Sunstein’s claims about filter bubbles and echo chambers have a certain appeal. We can imagine people choosing to avoid unpleasant people and views. As communications researcher Cristian Vaccari notes:

social media users can make choices as to which sources they follow and engage with. Whether people use these choice affordances solely to flock to content reinforcing their political preferences and prejudices, filtering out or avoiding content that espouses other viewpoints, is, however, an empirical question—not a destiny inscribed in the way social media and their algorithms function.

Both older and more recent studies cast doubt on Sunstein’s claim that the individual choices of Internet users are turning the nation into a polarized dystopia. For example, several studies published in 2016 and earlier indicate that people using the internet and social media are not shielded from news contravening their prior beliefs or attitudes (see the references here). In 2014, experimental evidence led two scholars to state “that social media should be expected to increase users’ exposure to a variety of news and politically diverse information.” They conclude that “the odds of exposure to counterattitudinal information among partisans and political news among the disaffected strike us as substantially higher than interpersonal discussion or traditional media venues.” A 2015 paper based on a panel design found that “most social media users are embedded in ideologically diverse networks, and that exposure to political diversity has a positive effect on political moderation.” Contrary to the received wisdom, this data “provides evidence that social media usage reduces mass political polarization.” A broad literature review in 2016 found “no empirical evidence that warrants any strong worries about filter bubbles.” Just before the 2016 election, a survey of U.S. adults found that social media users perceive more political disagreement than non-users, that they perceive more of it on social media than in other media, and that news use on social media is positively associated with perceived disagreement on social media.

Did the 2016 election change these findings? No doubt not all of the studies of that election have appeared yet. But several suggest that doubts about filter bubbles, polarization, and Internet use remain valid. Cato published a summary of a study by three economists who found that polarization has advanced most rapidly among the demographic groups least likely to use the Internet for political news: the supposed cause (Internet use) was least present where the effect of interest (increased polarization) grew fastest. Other studies have been more specific. Three communications scholars examined how people used Facebook news during the 2016 U.S. presidential campaign. They had panel data and thus could examine how Internet usage affected the attitudes of the same people over time. The results suggest Sunstein’s concerns are exaggerated. Both Internet use and the attitudes of the panel “remained relatively stable.” A filter bubble did not appear: the people who used Facebook for news were more likely to view news that both affirmed and contravened their prior beliefs. Indeed, over time, people exposed themselves more to contrary views, which “was related to a modest…spiral of depolarization.” In contrast, the researchers found no evidence of a filter bubble in which exposure to news affirming prior attitudes led to greater polarization.

Other recent studies have looked at the United States together with other developed nations, or at European nations alone. Perhaps data and conclusions from other developed nations do not transfer to the United States. However, cultures and borders notwithstanding, citizens of developed nations are similar in wealth and education. Even if we put less weight on conclusions from Europe, such findings inform our thinking about the supposed failures of Internet speech.

In 2017, Cristian Vaccari surveyed citizens in France, Germany, and the United Kingdom to test the extent of filter bubbles online. He concluded “social media users are more likely to disagree than agree with the political contents they see on these platforms” and that “citizens are much more likely to encounter disagreeable views on social media than in face-to-face conversations.” His evaluation of Sunstein’s thesis merits quoting at length:

Ideological echo chambers and filter bubbles on social media are the exception, not the norm. Being the exception does not mean being non-existent, of course. Based on these estimates, between one in five and one in eight social media users report being in ideological echo chambers. However, most social media users experience a rather balanced combination of views they agree and disagree with. If anything, the clash of disagreeing opinions is more common on social media than ideological echo chambers.

Another recent study in the United Kingdom found that most people tended to avoid echo chambers. Only about 8 percent of their sample had constructed echo chambers. The authors urge us to look more broadly at media and public opinion: 

Whatever may be happening on any single social media platform, when we look at the entire media environment, there is little apparent echo chamber. People regularly encounter things that they disagree with. People check multiple sources. People try to confirm information using search. Possibly most important, people discover things that change their political opinions. Looking at the entire multi-media environment, we find little evidence of an echo chamber.

Finally, another study of multiple countries found that using social media was related to incidental exposure to news, contrary to Sunstein’s view that older media promoted such unintended exposure while new media do not.

Sunstein’s concerns about filter bubbles and echo chambers appear exaggerated. Accordingly, the case for government action to improve public deliberation fails.


Cato adjunct scholar Leland B. Yeager had a long career at the University of Virginia Department of Economics in its golden age and later at Auburn University. He is the author of Foreign Trade and U.S. Policy: The Case for Free International Trade (1976), International Monetary Relations: Theory, History and Policy (1976), and Free Trade: America’s Opportunity (1954). At 93 he is still as insightful and as blunt as ever, and he just published this critique of President Trump’s understanding of trade policy at Liberty magazine under the title “Profound and Destructive.” The whole thing is reprinted below.

___________________

President Trump’s destructiveness requires few words here. Consider how world stock and currency markets have been shaken by the resignation on March 6 of Gary Cohn, regarded until then as Trump’s chief economic adviser. Although not a trained economist, Cohn apparently had some sound instincts derived from years of financial experience. His departure apparently and ominously leaves more influence, or echo, to Peter Navarro — look him up with Google.

This latest example of destructiveness follows the one touched off by Trump’s March 2 tweet bewailing America’s loss of “many billions of dollars on trade with virtually every country it does business with” and heralding trade wars as “good, and easy to win.”

I’ll spend more words on how profound Trump’s ignorance is. He considers a country’s excess of imports over exports a measure of loss. This measure applies even to trade with each foreign country separately. He counts China and Mexico among the worst offenders, deserving punishment. He does not understand the multilateral aspect of beneficial trade.

Nor does he understand how we gain in buying goods cheap from abroad. What difference does it make if steel and aluminum are cheap because of low foreign prices or because they grow cheaply on bushes at home? Money cost is a measure of opportunity cost, which means the loss of other goods when resources go instead to make the particular good in question. Opportunity cost reflects scarcity. Scarcity applies even to prosperous America, where we could enjoy still higher standards of living if food, clothing, shelter, entertainment, and other goods and services came costlessly and miraculously from heaven. Scarcity and how gains from domestic and foreign trade alleviate it are fundamentals of economics. The principle of comparative advantage goes far in explaining how.

Without understanding the academic presentation of the “absorption approach to the balance of payments,” everyone should be able to grasp its central idea, which is sheer arithmetic. If we as a country use more output for consumption and real investment than we produce, then the difference must come from somewhere — from abroad in the form of more imports than exports. A big item in this excess absorption, alias national undersaving, is government deficits. Yet Trump and Congress are complacent about increasing the deficit and debt by taxing less and spending more.
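[Editor’s illustration, not part of Yeager’s text: the “sheer arithmetic” he describes is the national-accounting identity below, where Y is output, C consumption, I investment, G government purchases, X exports, and M imports.]

```latex
% National-accounting identity: output is either absorbed at home or exported.
% Y = C + I + G + (X - M). Rearranging, with absorption A = C + I + G:
\[
\underbrace{(C + I + G)}_{\text{absorption } A} - \; Y \;=\; M - X .
\]
% If a country absorbs more than it produces (A > Y), imports exceed exports.
```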

All too many politicians say that they are in favor of free trade if it is “fair trade” played on a “level playing field.” These slogans express Trump’s view of international trade as a game, a zero-sum game in which one player’s gain is another’s loss.

Trump does not understand how the price system coordinates economic activity, making most government planning about jobs and industries unnecessary and harmful.

The profundity of Trump’s ignorance goes beyond economics. It extends to diplomacy in domestic and foreign relations and even to the behavior of a decent human being. Yet his destructive economic ignorance remains prominent.


When economic journalists speculate about looming inflation risks in the U.S. or any other country, they implicitly assume that each country’s inflation depends on that country’s fiscal or monetary policies, and perhaps its unemployment rate. Yet The Economist for March 3rd–9th shows approximately 1–2 percent inflation in the consumer price index (CPI) for virtually all major economies.

Inflation rates were surprisingly similar regardless of whether countries had budget deficits larger than ours (Japan and China) or big surpluses (Norway and Hong Kong), regardless of whether central banks experimented with “quantitative easing” or not, and regardless of whether a country’s unemployment rate was 16.9 percent (Spain) or 1.3 percent (Thailand). 

The latest year-to-year rise in the CPI was below 1 percent in Japan and Switzerland, 1.5 percent in Hong Kong and the Euro area, 1.6 percent in Canada and China, 1.8 percent in Sweden, 1.9 percent in Norway and Australia, 2 percent in South Korea, and 2.1 percent in the U.S. Among major countries, the U.K. was on the high side with inflation of 2.7 percent. Three economies with super-fast economic growth above 6 percent (India, Malaysia and the Philippines) do have slightly higher inflation—above 3 percent—but the CPI is up just 1.6 percent in one of them, namely China.

The remarkable similarity of CPI inflation rates is surprising since countries measure inflation differently and consume different mixes of goods and services. The fact that inflation rates are nonetheless so similar, and move up and down together, suggests that inflation is largely a global phenomenon. The U.S. may well have a disproportionate influence on global inflation, since it accounts for about 24 percent of global GDP and key commodities are priced in U.S. dollars. Yet U.S. inflation nonetheless goes up and down in sync with other major economies, as the graph shows.

Average world inflation is higher than inflation among major economies, however, because there are always some countries in chaos with untrustworthy currencies and extreme inflation—currently that includes Venezuela (741 percent), South Sudan (118 percent), North Korea (55 percent), Congo (52 percent) and Syria (43 percent).

The similarity of inflation, aside from a few extremes, is due to arbitrage among traded goods (though less so for local services). If exchange rates were fixed, the “law of one price” would prevent the same goods from selling at different prices in different places (aside from transportation costs, tariffs and sales taxes). Arbitrage—traders buying low and selling high—would ensure that prices varied only temporarily from one country to another.   
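A toy numeric example of that arbitrage mechanism, with invented prices and an invented exchange rate, may help:

```python
# Toy law-of-one-price example (all numbers invented). If the same good sells
# for $100 at home and EUR 80 abroad at a fixed rate of $1.10 per euro, the
# foreign price in dollars is 80 * 1.10 = $88. Traders buy abroad and sell at
# home until the gap (net of shipping, tariffs, and taxes) disappears.
home_price_usd = 100.0
foreign_price_eur = 80.0
usd_per_eur = 1.10

foreign_price_usd = foreign_price_eur * usd_per_eur
gross_arbitrage_profit = home_price_usd - foreign_price_usd
print(f"Gross arbitrage profit per unit: ${gross_arbitrage_profit:.2f}")  # $12.00
```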

Differences in inflation, including the extreme cases, are largely explained by exchange rates. Countries with a “strong currency” reputation (Switzerland) invariably have less inflation than countries that mistakenly pursue chronic currency depreciation as a boost to trade (Turkey). Anticipated devaluation is preemptively negated by rising wages and prices, which doesn’t help “competitiveness.”

On the other hand, when currencies rise against the U.S. dollar that makes oil and other commodities cheaper in terms of such rising currencies, which tends to boost world demand for industrial materials and grains and thus put upward pressure on commodity prices in dollars. Over the year ending February 27, for example, The Economist’s commodity price index rose 5.7 percent in dollars, but fell by 8.2 percent in euros because the euro rose against the dollar. (Dollar prices of metals rose from July to December, but little since then).  
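Those two index changes imply how much the euro rose against the dollar over that year; a rough check, assuming both figures describe the same commodity basket:

```python
# Infer the euro's rise against the dollar from the two index moves cited above.
# Same basket, two currencies: the ratio of the dollar-index change to the
# euro-index change gives the implied change in the euro's dollar price.
index_change_in_usd = 1 + 0.057  # commodity index up 5.7% in dollars
index_change_in_eur = 1 - 0.082  # same index down 8.2% in euros

implied_euro_appreciation = index_change_in_usd / index_change_in_eur - 1
print(f"Implied euro appreciation vs. the dollar: {implied_euro_appreciation:.1%}")
# -> roughly +15% over the year ending February 27.
```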

Exchange rate shifts may offer some insulation against global price swings, with rising currencies experiencing less inflation than others in the 1970s, and devalued currencies experiencing less deflation in the 1930s. But all currencies cannot appreciate or depreciate against each other, so global trends prevailed.

The obvious global synchronization of two broad inflation waves in the 1970s (aside from energy prices), and of deflation in the early 1930s, should have made it clear by now that trying to forecast inflation in one country alone is a futile exercise without taking into account global price trends and national exchange rates.   

Just over a year into a presidency already full of unusual precedents, President Trump has agreed to a North Korean offer, communicated through South Korean national security adviser Chung Eui-yong, to meet face to face with Kim Jong-un. Though such meetings have been bandied about in the past, no sitting U.S. president has ever met with a sitting North Korean Supreme Leader. It is a prospect fraught with risk and opportunity.

Kim reportedly made this offer along with a statement that North Korea is “committed to denuclearization.” He left ambiguous what he would want in return, though, according to Chung, it involves a commitment that South Korea and the United States “not repeat the mistakes of the past.” Given what Pyongyang has previously demanded, this likely refers to upholding our side of any bargain, and possibly an end to what they call America’s “hostile policy” (i.e., our alliance with South Korea, proximate U.S. military assets, joint military drills, and economic sanctions).

It is a bewildering and unexpected development. Just a few weeks ago, Kim and Trump were trading barbs about how stupid the other is and making explicit threats of nuclear aggression. Answers to a few preliminary questions are in order.

Was Trump right to say yes?

Yes, but we should proceed with caution. The dangerous cycle of taunts, threats, and ever-heightening tensions over the past several months risked inadvertent escalation. The Trump administration even publicly floated the idea of a so-called “bloody nose” attack, involving a surgical strike against North Korean targets in the hopes that they would back down in response. Essentially every informed assessment of the consequences of even this kind of minor use of force predicts catastrophic escalation and possibly nuclear war, with higher-end casualty estimates in the many millions of people and with no clear political win at the other end of the conflagration.

Agreeing to meet face to face with an adversary is, by its nature, the opposite of the bluster and threats of war that have been the rule in Trump’s first year. It is therefore a welcome development. The consensus among analysts is that any war would be calamitous, so it is hard to see how we had a choice here. Declining the offer would mean a return to confrontation and antagonism.

That said, we should not have high confidence that the Trump administration is prepared to actually handle serious face to face negotiations. This is not really how smart diplomacy is done. Typically, lower-level officials, including seasoned diplomats and technical experts, engage in private discussions for years, determining each side’s red lines, finding areas of compromise, and establishing arrangements for neutral verification and mediation protocols. Only after progress is made at this level would a meeting between heads of state be appropriate, constructive, and, crucially, safe for both sides.

Furthermore, Trump has consistently undermined the value of diplomacy and has hollowed out the State Department of the kind of diplomatic professionals needed now to meet this challenge. Trump has not even appointed an ambassador to South Korea yet. In fact, Trump unexpectedly withdrew the impending nomination of Victor Cha for that post after Cha told Trump preventive war against the North was a bad idea. This has left us terribly unprepared for such an unprecedented and unpredictable meeting.

Things could very easily unravel. And the consequences of failure could be extreme. As Sen. Lindsey O. Graham (R-S.C.) said, “The worst possible thing you can do is meet with President Trump in person and try to play him. If you do that, it will be the end of you — and your regime.” Victor Cha writes today in the New York Times that “failure could also push the two countries to the brink of war.”

Trump may very well have a similar outlook.

Why did North Korea make this offer?

It is very hard to say. The strategic and tactical calculations of states are inherently opaque, particularly in extraordinary and rapidly developing situations like this. 

Many have argued that Kim has offered to meet because of the “maximum pressure” policy of the Trump administration – specifically the additional economic sanctions imposed on North Korea over the past year. But this is a wild oversimplification at best. The sanctions are surely weakening North Korea’s already ailing economy and tightening the screws on the regime, but the real key on sanctions was greater Chinese enforcement. Trump would be eager to take credit for Beijing’s slightly harder line against North Korea of late, but in reality, it has been a gradual process resulting from changing Chinese perceptions of their regional role and their increased frustration due to Pyongyang’s progress on nuclear and missile development over the past couple of years.

It is not inconceivable that Trump’s threats have scared Kim into offering direct talks. The North Korean regime may view Trump as unstable. The Washington Post reports that top North Korean officials have even read Michael Wolff’s Fire and Fury, a book about Trump’s chaotic first year in office that depicts the president as erratic, ignorant, and impulsive. Maybe Kim thought Trump was actually mad enough to unleash a war that is widely acknowledged to be too costly to contemplate. I am skeptical. It is just as likely that Kim understands Trump is a political novice who violated the parameters of debate in Washington, DC by suggesting during the campaign that South Korea assume responsibility for its own security. Maybe Kim thinks he can outsmart Trump or make a fool of him at the negotiating table.

Another possible explanation is that Pyongyang has leverage like it has never had before, and so now is as propitious a time as any to negotiate at the highest levels. The regime feels emboldened by the successful completion of their nuclear development, as they refer to it. Now they feel their nuclear deterrent is strong enough to meet with their greatest enemy on more equal footing.

It is even possible that Pyongyang sincerely believes direct talks are the best way to dial down tensions. Perhaps they really are willing to make concessions in return for reciprocal concessions from the United States. That is perhaps the most rational explanation for the regime’s motivations here.

However, the notion that North Korea is really ready to denuclearize is far-fetched, to say the least. They have devoted enormous resources, at great risk, to obtain their current capabilities. They won’t forfeit them without truly significant concessions from the United States.

Is it likely to succeed?

No. Despite his claims to be a world-class dealmaker, Trump is manifestly unprepared to engage in such difficult negotiations. His mishandling of diplomatic engagements with other world leaders does not leave me with much confidence that he can prudently conduct himself in such high-stakes talks with an avowed enemy like Kim Jong-un.

Successful negotiations require a solid understanding of the interests of all the players, some measure of regional expertise, and technical knowledge of how to establish limitations on, and verification regimes for, the nuclear program. They require experienced diplomats and strategic clarity about the political goals driving all sides. What do we expect to get out of this meeting? What are some realistic expectations? What are we willing to concede? What do we expect Pyongyang is willing to give up? Finally, negotiating partners must have confidence that the other side will uphold its commitments under any agreement. This crucial element is not present in this case. Neither side trusts the other, and each has a long list of accusations that the other has cheated and reneged on past arrangements.

The initial announcement suggested this face to face meeting would take place by May. With so little time to prepare, and with such daunting obstacles, we do not have the ingredients for probable success. It is not clear what choice we had, however.

On Thursday, President Trump held a meeting to discuss whether and how violent video games affect gun violence, particularly school shootings. Before getting into the details of this claim, perhaps we should take a step back and read a classic fairy tale from 1812, printed in the Brothers Grimm’s Nursery and Household Tales and titled “How the Children Played Butcher with Each Other”:

A man once slaughtered a pig while his children were looking on. When they started playing in the afternoon, one child said to the other: “You be the little pig, and I’ll be the butcher,” whereupon he took an open blade and thrust it into his brother’s neck. Their mother, who was upstairs in a room bathing the youngest child in the tub, heard the cries of her other child, quickly ran downstairs, and when she saw what had happened, drew the knife out of the child’s neck and, in a rage, thrust it into the heart of the child who had been the butcher. She then rushed back to the room to see what her other child was doing in the tub, but in the meantime it had drowned in the bath. The woman was so horrified that she fell into a state of utter despair, refused to be consoled by the servants, and hanged herself. When her husband returned home from the fields and saw this, he was so distraught that he died shortly thereafter.

The end.

Violent entertainment is nothing new, nor is the older generation complaining about it. In usual Trump fashion, he claimed to be “hearing more and more people say the level of violence on video games is really shaping young people’s thoughts.” But it’s not true. People all over the world play video games, especially young boys, and there is no resulting correlation with acts of violence. Actually, some studies have shown that violent video games might reduce crime by keeping young men off the street and glued to their TVs.

In 2011, the Supreme Court decided the case of Brown v. Entertainment Merchants Association, holding that California’s 2005 law banning the sale of “violent” video games to minors violated the First Amendment. Cato filed a brief in that case that documented the history of complaints about supposedly uniquely violent entertainment and the effectiveness of industry self-regulation—such as the MPAA movie ratings, the ESRB ratings for video games, and the Comics Code—over ham-handed government oversight. The Court cited Cato’s brief in its opinion.

Due to Brown, any federal law regulating violent video games is likely to be struck down by the courts. That doesn’t mean, however, that Trump and other government agents can’t make things uncomfortable for the industry. Most likely, we’ll just hear a bunch of complaining about “these kids today” from older generations. Everything old is new again, particularly when new forms of entertainment come around that are foreign to older generations.

As many people know, Brothers Grimm fairy tales can be shockingly violent and disturbing. In the Grimms’ “Cinderella,” the stepsisters slice off part of their feet to fit the glass slipper. When the prince notices that “blood was spurting” out of the shoes, he disqualifies them. Some critics were shocked at the tales and urged parents to protect their children from the gruesome content. Later editions of the Brothers Grimm toned down some parts, but in other parts, particularly violence suffered by evil doers in order to teach a moral lesson, the gore actually increased.

In the late 19th century, “dime novels” and “penny dreadfuls” were blamed for youth violence. An 1896 edition of the New York Times told of the “Thirteen Year Old Desperado” who robbed a gold watch from a jeweler and fired a gun while being pursued. “The boy’s friends say that he is the victim of dime novel literature,” the story concludes. Or consider Daniel McLaughlin, who, according to an 1890 New York Times story, “sought to emulate the example of the heroes of the dime novels and ‘held up’ Harry B. Weir in front of 3 James Street last night.”

Next there were movies, which apparently made dime novels look tame, as the Times wrote in 1909:

The days when the police looked upon dime novels as the most dangerous of textbooks in the school for crime are drawing to a close. They have found a new subject for attack. They say that the moving picture machine, when operated by the unscrupulous, or possibly unthinking, tends even more than did the dime novel to turn the thoughts of the easily influenced to paths which sometimes lead to prison.

In fact, the Supreme Court didn’t grant movies First Amendment protection until 1952, ruling in a 1915 case that movies could “be used for evil” and thus could have their content regulated.

Movies might be bad, but violent radio dramas actually make listeners play out the violence in their heads, a fact which concerned some in the ‘30s and ‘40s. In 1941, Dr. Mary Preston released a study in the Journal of Pediatrics which claimed that a majority of children had a “severe addiction” to radio crime dramas. One 10-year-old told her that “Murders are best. Shooting and gangsters next. I liked the Vampire sucking out blood very much.”

In the 1950s, America had a prolonged scare about violent comic books, prompted by the psychiatrist Dr. Fredric Wertham. Wertham exhorted parents to understand that comics were “an entirely new phenomenon” due to their depictions of “violence, cruelty, sadism, crime, beating, promiscuity,” and much more. Writing in the Saturday Review in 1948, Wertham chastised those who downplayed the risk: “A thirteen-year-old boy in Chicago has just murdered a young playmate. He told his lawyer, Samuel J. Andalman, that he reads all the crime comic books he can get hold of. He has evidently not kept up with the theories that comic-book readers never imitate what they read.” Wertham’s activism led to congressional hearings and eventually to the comic book industry creating the Comics Code Authority.

Since the 1950s, we’ve seen periodic scares about violent television, movies, and now video games. And although the idea that violent entertainment might cause crime can’t be dismissed out of hand, empirical studies consistently fail to show a connection, just as with video games. The most consistent correlation is that of older generations misunderstanding the pastimes of the youth, coupled with a hearty sense of nostalgia for the good ol’ days.  

Protecting citizens from threats domestic and foreign is the most important function of government.  Among those very threats is a government willing to concoct and aggrandize dangers in order to rationalize abuses of power, which Americans have seen in spades since 9/11. Justifying garden variety protectionism as an imperative of national security is the latest manifestation of this kind of abuse, and it will lead inexorably to a weakening of U.S. security.

The tariffs on imported steel and aluminum that President Trump formalized this afternoon derive, technically, from an investigation conducted by the U.S. Department of Commerce under Section 232 of the Trade Expansion Act of 1962.  The statute authorizes the president to respond to perceived national security threats with trade restrictions. While the theoretical argument to equip government with tools to mitigate or eliminate national security threats by way of trade policy may be reasonable, this specific statute does little to ensure the president conducts a rigorous threat analysis or applies remedies that are proportionate to any identified threat.  There are no benchmarks for what constitutes a national security threat and no limits to how the president can respond. 

In delegating this authority to the president, Congress in 1962 (and subsequently) simply assumed the president would act apolitically and in the best interest of the United States.  The consequences of this defiance of the wisdom of the Founders—this failure to imagine the likes of a President Trump—could be grave.

Immediately, the costs of steel and aluminum to the U.S. industries that rely on those raw materials will rise. By how much depends on the relative importance of steel and aluminum in manufacturing the respective downstream product.  For example, steel accounts for about 50 percent of the material costs (and about 25 percent of the total cost) of producing an automobile, but closer to 100 percent of the material costs of producing the pipes and tubes used in oil and gas extraction and transmission infrastructure.

According to the Bureau of Economic Analysis, the industries that consume steel as an input to their production account for 5.8 percent of GDP, while steel producers account for just 0.2 percent of GDP.  Steel users thus contribute $29 of output for every $1 contributed by steel producers.  Bureau of Labor Statistics data show that for every worker in steel production there are 46 workers in steel-consuming industries. It doesn’t require complicated analysis to see that the costs to the broader U.S. manufacturing sector and the economy at large will dwarf any small benefits that accrue to the steel lobby.
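The arithmetic behind that output ratio is simple; here is a minimal sketch in Python, using only the BEA shares quoted above, for readers who want to check it:

```python
# Back-of-the-envelope check on the steel figures cited above (BEA data).
users_gdp_share = 5.8       # steel-consuming industries, percent of GDP
producers_gdp_share = 0.2   # steel producers, percent of GDP

# Output contributed by steel users per dollar contributed by producers.
output_ratio = users_gdp_share / producers_gdp_share
print(f"Steel users contribute ${output_ratio:.0f} for every $1 from producers")
# -> Steel users contribute $29 for every $1 from producers
```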

Meanwhile, the costs to the economy will be compounded, as foreign governments target U.S. exporters for retaliation.  Lost market share abroad will mean smaller revenues for U.S. companies that need to hit profit targets in order to invest, expand, and hire.

By signing these tariffs into law, President Trump has substantially lowered the bar for discretionary protectionism, inviting governments around the world to erect trade barriers on behalf of favored industries.  Ongoing efforts to dissuade China from continuing to force U.S. technology companies to share source code and trade secrets as the cost of entering the Chinese market will likely end in failure, as Beijing will be unabashed about defending its Cybersecurity Law and National Security Law as measures necessary to protect national security.  That would be especially incendiary, given that the Trump administration is pursuing resolution of these issues through another statute—Section 301 of the Trade Act of 1974—which could also lead the president to impose tariffs on China unilaterally.

At the moment, members of Congress are mobilizing in an effort to neutralize or somehow contain the damage from Trump’s action. Sen. Jeff Flake (R-AZ) announced an hour or so ago that he “will immediately draft and introduce legislation to nullify these tariffs.”  The prospects that this or any other congressional effort will succeed seem remote.  A veto-proof majority would be needed, and according to a Quinnipiac University poll conducted this week, 67 percent of Americans who identify as Republicans agree with the president’s claim that “a trade war would be good for the United States and could be easily won.” By contrast, only 7 percent of Democrats and 19 percent of Independents agreed.

Unfortunately, Congress seems to have awoken too late to the dangerous situation created by its delegation of authority without the necessary constraints.  Perhaps we will see renewed interest in legislation Sen. Mike Lee (R-UT) introduced last year that would reestablish more robust congressional oversight of trade policy decision making.  In the meantime, let’s hope for the best.

A favorite statistic cited by paid family leave activists is thoroughly misleading. Activists regularly argue that only 15 percent of workers have access to paid family leave, relying on a Bureau of Labor Statistics (BLS) number. Just this week, the figure was cited in a Harvard Business Review article, a WSJ letter, and a Bloomberg Businessweek report on Leaning In, among other places.

But the BLS figure doesn’t square with other federal data sets or national survey results, including the Census Bureau’s Survey of Income and Program Participation (SIPP), the FMLA Worksite and Employee Surveys, the Census Bureau’s Current Population Survey (CPS), and the National Survey of Working Mothers. Estimates of access to paid leave by source are detailed in the table below.

Table: Estimates of Access to Paid Parental Leave

Source | Paid Leave Figure | Details
FMLA Worksite and Employee Surveys | 57% of women and 55% of men received pay for parental leave from any source | 2012 data
National Survey of Working Mothers | 63% of employed mothers said their employer provided paid maternity leave benefits | 2013 survey
Census Bureau’s Survey of Income and Program Participation (SIPP) | 50.8% of working mothers report using paid leave of some kind before or after child birth | 2006-2008 data
Census Bureau’s Current Population Survey (CPS) | On average, 45% of working women who took parental leave received some pay | 1994-2014 data

The difference between the BLS figure and other federal and national figures is considerable. For example, the BLS figure is more than 40 percentage points lower than the FMLA figure, and there is a nearly 50 percentage point spread between the BLS number and the National Survey of Working Mothers number.

That is partly because BLS uses a peculiar definition of paid family leave that excludes most types of paid leave that can be used for family reasons. The particulars are described in greater detail here. As a result, the BLS figure is an extreme outlier even compared to other federal data sources.

As an extreme outlier, the BLS figure is misleading in the extreme. To engage in an accurate conversation about the experience of working parents, activists and policy makers should abandon it. 

In his new book Enlightenment Now and in his McLaughlin Lecture at the Cato Institute this week, Steven Pinker made the point that we may fail to appreciate how much progress the world has made because the news is usually about bad and unusual things. For instance, he said, quoting Max Roser, if the media truly reported the important changes in the world, “they could have run the headline NUMBER OF PEOPLE IN EXTREME POVERTY FELL BY 137,000 SINCE YESTERDAY every day for the last twenty-five years.”

This is understandable. As Pinker writes, 

News is about things that happen, not things that don’t happen. We never see a journalist saying to the camera, “I’m reporting live from a country where  a war has not broken out”—or a city that has not been bombed, or a school that has not been shot up. As long as bad things have not vanished from the face of the earth, there will always be enough incidents to fill the news, especially when billions of smartphones turn most of the world’s population into crime reporters and war correspondents.

And among the things that do happen, the positive and negative ones unfold on different time lines. The news, far from being a “first draft of history,” is closer to play-by-play sports commentary. It focuses on discrete events, generally those that took place since the last edition (in earlier times, the day before; now, seconds before). Bad things can happen quickly, but good things aren’t built in a day,  and as they unfold, they  will be out of sync with the news cycle. The peace researcher John Galtung pointed out that if a newspaper came out once every fifty years, it would not report half a century of celebrity gossip and political scandals. It would report momentous global changes such as the increase in life expectancy.

I’ve noted this myself. I think the mainstream media such as NPR, which I listen to morning and evening, fail to adequately examine the most important fact in modern history—what Deirdre McCloskey calls the Great Fact, the enormous and continuing increase in human longevity and living standards since the industrial revolution. If you listen to NPR or read the New York Times, you’ll be well informed about the news in general and about problems such as racism, sexism, and environmental disaster. But you won’t often be reminded that we are the richest, most comfortable, best-fed, longest-lived people in history. Or as Indur Goklany put it in a book title, you won’t hear about The Improving State of the World: Why We’re Living Longer, Healthier, More Comfortable Lives on a Cleaner Planet.

Pinker does point out, “Information about human progress, though absent from major news outlets and intellectual forums, is easy enough to find. The data are not entombed in dry reports but are displayed in gorgeous Web sites, particularly Max Roser’s Our World in Data, Marian Tupy’s HumanProgress, and Hans Rosling’s Gapminder.” But of course those aren’t the major media. Which is why, he says, “And here is a shocker: The world has made spectacular progress in every single measure of human well-being. Here is a second shocker: Almost no one knows about it.”

So what if the media did report the most important news, the Great Fact? I asked Cato intern Thasos Athens to help me envision that:

Comparing the risk of dying in a terrorist attack to a common household accident like slipping in the bathtub is inappropriate.  After all, inanimate objects like bathtubs do not intend to kill, so people rightly distinguish them from murderers and terrorists.  My research on the hazard posed by foreign-born terrorists on U.S. soil focuses on comparing that threat to homicide, since both are intentional actions meant to kill or otherwise harm people.  Homicide is common in the United States, so it is not necessarily the best comparison to deaths in infrequent terror attacks.  Yesterday, economist Tyler Cowen wrote about another comparable hazard that people are aware of, that is infrequent, where there is a debatable element of intentionality, but that does not elicit nearly the same degree of fear: deadly animal attacks.

Cowen’s blog post linked to an academic paper by medical doctors Jared A. Forrester, Thomas G. Weiser, and Joseph H. Forrester who parsed Centers for Disease Control and Prevention (CDC) mortality data to identify those whose deaths were caused by animals in the United States. According to their paper, animals killed 1,610 people in the United States from 2008 through 2015. Hornets, wasps, and bees were the deadliest and were responsible for 29.7 percent of all deaths, while dogs were the second deadliest and responsible for 16.9 percent of all deaths. 

The annual chance of being killed by an animal was 1 in 1.6 million per year from 2008 through 2015.  The chance of being murdered in a terrorist attack on U.S. soil was 1 in 30.1 million per year during that time.  The chance of being murdered by a native-born terrorist was 1 in 43.8 million per year, more than twice as deadly as foreign-born terrorists at 1 in 104.2 million per year.  The small chance of being murdered in an attack committed by foreign-born terrorists has prompted expensive overreactions that do more harm than good, such as the so-called Trump travel ban, but address smaller risks than those posed by animals.

In addition to the data analyzed in the Forrester et al. paper, the CDC has mortality data for animals back to 1968.  This period includes the 9/11 attacks, the deadliest terrorist attacks in world history, which helps to take account of the fat-tailed distribution of actual terrorist attacks.  From 1975 through the end of 2016, 7,548 people were killed by animals while 3,438 were killed by all terrorists.  Even over this longer period, the annual chance of being killed by an animal was far higher than that of being killed in a terrorist attack (Table 1).

Table 1: Annual Chance of Being Killed by Different Means, 1975-2016

Means of Death | Annual Chance of Dying
Homicide | 1 in 14,296
Animal Attack | 1 in 1,489,177
All Terrorists | 1 in 3,269,432
Native-born Terrorists | 1 in 27,482,415
Foreign-born Terrorists | 1 in 3,710,897

Sources: John Mueller, ed., Terrorism Since 9/11: The American Cases; RAND Database of Worldwide Terrorism Incidents; National Consortium for the Study of Terrorism and Responses to Terrorism, Global Terrorism Database; U.S. Census Bureau, “American Community Survey”; Disaster Center, “United States Crime Rates 1960-2014”; Centers for Disease Control and Prevention (CDC); and author’s calculations.
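For readers who want to reproduce the table’s “1 in N” figures, the method is straightforward: divide the average U.S. population over the period by the average number of deaths per year. A minimal Python sketch, assuming an average population of roughly 268 million over 1975-2016 (an illustrative round number; the author’s calculations use Census data):

```python
# Reconstructing "1 in N" annual risk figures from death counts.
# AVG_POPULATION is an assumed round number for illustration only.
AVG_POPULATION = 268_000_000
YEARS = 42  # 1975 through 2016, inclusive

def one_in_n(total_deaths: int) -> float:
    """Return N such that the annual chance of death is 1 in N."""
    deaths_per_year = total_deaths / YEARS
    return AVG_POPULATION / deaths_per_year

print(f"Animal attack:  1 in {one_in_n(7_548):,.0f}")   # ~1 in 1.49 million
print(f"All terrorists: 1 in {one_in_n(3_438):,.0f}")   # ~1 in 3.27 million
```

Both results land within a percent or so of the figures in Table 1.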

One reason people fear terrorism so much is that it appears random and there is little one can do to avoid it.  While terrorism certainly appears random, not living in New York City or Washington, DC would have substantially reduced one’s chance of dying in a terrorist attack since 1975.  But just because terrorist attacks strike randomly and infrequently does not mean that the fear they create should be addressed through new public policies that spend trillions of dollars, kill many people, and make daily life a little more inconvenient, all for little to no benefit.

As far as I can tell, nobody suggests banning bees, dogs, or other animals just because they have killed 7,548 people since 1975.  But it is common for people to argue for banning immigrants due to the manageable hazard posed by infrequent terrorist attacks by foreign-born individuals.  Animals can be scary and they are infinitely more in control of their actions than inanimate objects like bathtubs, although probably not as much in control of themselves as human beings.  Adjusting for an American’s number and frequency of contacts with animals relative to people is essential to understanding the relative risks of dying from animals or other people.  Many of us have zero daily interaction with animals but talk to many different people. 

The chance of dying in any of these types of incidents, whether terrorism or homicide or animal attack, is small and manageable.  Certain precautions do make sense but only if they pass a cost-benefit test that counter-terrorism spending is guaranteed to fail.  Evaluating small and manageable threats such as that from terrorism relative to other small and manageable threats from homicide or animal attacks is a useful way to understand the world and where we should focus our energies and worries. 

As the nation remains fixated on the opioid epidemic, methamphetamine is making a resurgence. Meth is less expensive than heroin, and it is gaining users who fear opioid overdoses.

Meth is not new; it burst onto the scene in the early 1990s, as the crack epidemic waned.  Synthesized from readily available chemicals, meth provided a cheaper, homemade alternative to other drugs. As use increased, legislators and law enforcement officials took note.

The first major legislation targeting meth was the 1996 Comprehensive Methamphetamine Control Act. Passed unanimously by the Senate and by 386-34 in the House, the legislation required that individuals buying and selling chemicals used in meth production register with the federal government, which sought to track such chemicals and reduce their supply to manufacturers.

Despite this legislation, meth use – and fatal overdoses – increased. In response, Congress passed the Combat Methamphetamine Epidemic Act of 2005 (enacted in March 2006), which limited over-the-counter sales of ephedrine and pseudoephedrine and required retailers to log customer purchases of such drugs. Simultaneously, federal and state authorities were instituting restrictions on pharmaceutical stimulants, including Ritalin and Adderall. And many states instituted prescription drug monitoring programs to reduce the availability of prescription stimulants acquired legally and resold on the black market.

While well-intentioned, these policies may have induced users to substitute away from expensive prescription drugs toward cheap, readily available meth. And this switch had the usual impact of supply restrictions: it pushed consumption toward a more dangerous black-market product.

Overdose deaths related to methamphetamine initially declined after the crackdown on prescription access, but by 2016 the meth overdose rate had reached four times its level of a decade earlier. The likely explanation is that restrictions pushed users from prescription versions to black market meth, where uncertainty about purity generated increasing overdoses.


As the opioid crisis worsens and calls for supply restrictions increase, policymakers should consider how the same approach failed to halt – indeed exacerbated – the meth epidemic.


Research assistant Erin Partin contributed to this blogpost.

In the book I advertised in my last post, I argue that the Fed’s decision to switch to a “floor”-type operating system “deepened and prolonged the Great Recession.” Yet the Fed is only one of several central banks that have adopted floor systems for monetary control during the last dozen years. That fact raises some obvious questions: Did those other floor systems have similarly dire consequences? If not, why not?

In this post I answer these questions for one of those other cases: New Zealand’s. By doing so I also hope to shed some further light upon the U.S. floor system experience.

New Zealand’s Corridor System

From 1999 until July 2006, the Reserve Bank of New Zealand (RBNZ), New Zealand’s Central Bank, relied upon a symmetric corridor system in which the benchmark policy rate, known as the Official Cash Rate, or OCR for short, was kept 25 basis points above the rate paid on banks’ reserve (“settlement”) balances, and 25 basis points below the rate that the RBNZ charged for overnight loans.
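To see the incentive structure concretely, here is a minimal sketch of the corridor arithmetic in Python (using the 7.25 percent OCR that prevailed before March 2007):

```python
# Sketch of the RBNZ's symmetric corridor (rates in percent per annum).
ocr = 7.25                   # Official Cash Rate (the policy rate)
deposit_rate = ocr - 0.25    # paid on banks' overnight settlement balances
lending_rate = ocr + 0.25    # charged on overnight loans from the RBNZ

# Overnight market rates are pinned between the two standing-facility
# rates, so holding idle balances forgoes up to the 25 bp spread; this
# opportunity cost is why banks kept aggregate balances tiny.
print(f"corridor: {deposit_rate}% / {ocr}% / {lending_rate}%")
```

Because the deposit rate sits below the market rate, banks economize on settlement balances, which is exactly the behavior described next.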

Not long before it established its corridor system, the RBNZ implemented a Real Time Gross Settlement (RTGS) system for wholesale payments, in which interbank payments are settled bilaterally and immediately, thereby becoming final and irrevocable as transactions are processed, rather than at the end of the business day only, following the determination and settlement of net balances. Because banks’ overnight settlement balances bore an opportunity cost under New Zealand’s corridor regime, as they do in any corridor-type system, banks held very few such balances — an aggregate value of just NZ $20 million was typical — relying instead on intraday credits from the RBNZ to meet their ongoing settlement needs.

The advantage of RTGS is that it allows payments to be made “final” as soon as they’re processed, so that payees don’t have to wait until the end of the day to find out whether their money came through. In a net-settlement system, in contrast, a transfer made earlier in the day remains tentative until banks pay their net settlement dues. Should a bank fail to settle, its intraday payments have to be “unwound,” reversing any transfers made with money the bank wasn’t good for.

“Cashing Up” the Banking System

The disadvantage of RTGS, or rather of any RTGS system that relies on intraday central bank credits by permitting overdrafts on participants’ settlement accounts, is that it exposes the central bank itself to credit risk: should a bank fail while in overdraft, the central bank would incur a loss. To avoid that risk the Reserve Bank, instead of supplying unsecured intraday credit by allowing banks to overdraw their accounts, chose to supply it in the form of free but nonetheless fully-secured intraday repurchase agreements. In principle at least, if a bank with an outstanding repo failed, the RBNZ could sell the purchased security to recover any cash it had advanced.

In practice, however, the RBNZ’s decision to accept municipal and corporate paper as repo collateral meant that it still faced some risk of loss; moreover, it soon discovered that the volume of its outstanding repos with particular banks made that risk uncomfortably large. Recognizing the danger, and desiring as well to reduce the frequency of delayed or failed settlements, the RBNZ determined to encourage banks to rely on overnight settlement balances, instead of intraday repos, to meet their settlement needs.

With that aim in mind, in July 2006 the RBNZ began its program of “cashing up” the New Zealand banking system. Because the Reserve Bank’s intent was to enhance banks’ liquidity without altering its monetary policy stance, this program involved several components. The first consisted of the RBNZ’s creation, between July and October, of an additional NZ $7 billion of settlement balances, while the second consisted of a concurrent 25 basis-point increase, made in five five-point increments, in the interest rate paid on those balances, aimed at encouraging banks to hold them. Finally, these other steps having been taken, the RBNZ stopped providing intraday repos. As the figure below shows, although total settlement balances hovered around NZ $8 billion during the crisis, and occasionally were raised beyond NZ $10 billion, they eventually settled down near the RBNZ’s originally-chosen target of NZ $7 billion, where they’ve remained ever since.


New Zealand Settlement Balances, 1999-2014

A Floor System, but Not for Long

Since the Reserve Bank did not find it necessary to alter its policy rate until March 2007, when it raised the OCR from 7.25 to 7.5 percent, it seems to have achieved its goal of cashing up the banks without altering its policy stance. However, the steps it took to cash up the New Zealand banking system did involve a fundamental change in the central bank’s operating system: from a symmetrical corridor system to what most observers have regarded as a “floor” system, in which the interest rate on settlement balances was identical to the Reserve Bank’s policy rate, and banks were well supplied with, if not satiated by, liquidity.

However, at least two crucial facts distinguish New Zealand’s floor system from floor systems employed by the Fed, the ECB, and the Bank of England. One is that, while it involved a quantity of settlement balances that was adequate to meet banks’ settlement needs, the RBNZ never took advantage of it to engage in Quantitative Easing. Having supplied banks with a level of settlement balances it judged adequate for their ordinary liquidity needs, it never attempted to enlarge those balances substantially and for an extended period by means of further, large-scale asset purchases. Instead, as the world crisis deepened during the last half of 2008 and first half of 2009, it mainly responded by cutting the OCR, and hence the interest rate it paid on banks’ settlement balances, aggressively, from 8.25 percent in June 2008 to just 2.5 percent at the end of April 2009 — with the biggest cuts coming between September 2008 and January 2009. It was, it bears noting, while these cuts were in progress that the Fed introduced its own floor system, raising  the interest rate it paid on banks’ excess reserves from zero to a final level of 25 basis points, where it was to stay until December 2015.

Second, while the Fed, the ECB, and the Bank of England retained full-fledged floor systems throughout the crisis and since, the Reserve Bank of New Zealand had already taken important steps away from such a system in August 2007, or well before the crisis reached its most critical stage with Lehman Brothers’ failure.

The RBNZ’s decision to modify its floor system was informed by a recommendation made in the same March 2006 consultation document that caused it to install that system in the first place, to wit: that “Incentives should be in place to foster an environment where the commercial banks get liquidity from each other and deal with the Reserve Bank only when liquidity is not otherwise available in the market.”

The RBNZ had hoped that its “cashed up” floor system would satisfy this requirement. “The increased base level of settlement account balances in the system,” the consultation document claimed,

should better foster the development of an inter-bank cash market. In the presence of significant market liquidity, market participants should transact cash with each other at the end of day in preference to using the Bank’s standing facilities. Development of the inter-bank market is desirable to improve the distribution of cash between ESAS [Exchange Settlement Account System] participants, leaving the Bank to concentrate on the liquidity to the system as a whole. This market, if developed, would also provide another source of information for the Bank on any inefficiencies in the market.

However, once the floor system was up and running it became clear that, instead of encouraging banks to lend and borrow settlement balances in the private, overnight market, it was encouraging at least some banks to hoard any surplus balances that came their way. Like floor systems elsewhere, New Zealand’s involved a deposit rate equal to the central bank’s overnight policy rate, which tended to be higher than the corresponding, secured interbank overnight repo rate. New Zealand’s floor system therefore allowed banks to accumulate reserves without incurring any substantial opportunity cost by doing so.

New Zealand Adopts a Tiering System

To correct the problem of reserve hoarding, the Reserve Bank needed to modify the terms of its interest payments on banks’ settlement balances so as to keep banks from holding balances beyond what they actually needed for settlement purposes. The solution it settled on was a “tiering system,” with settlement balances up to a bank’s assigned tier limit earning the OCR, and balances beyond that level earning 100 basis points less. The tier limits were themselves based on banks’ apparent settlement needs, while collectively amounting to the aggregate target of NZ $7 billion.
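A minimal sketch of how such a tiered remuneration schedule works (the balance and tier-limit figures are purely illustrative; the 100 basis-point penalty and the OCR level come from the text):

```python
# Illustrative tiered remuneration on settlement balances, as described
# above: balances up to the tier limit earn the OCR; balances above it
# earn the OCR minus 100 basis points.
def overnight_interest(balance: float, tier_limit: float, ocr: float) -> float:
    """One night's interest on a settlement balance (rates in percent p.a.)."""
    within_tier = min(balance, tier_limit)
    above_tier = max(balance - tier_limit, 0.0)
    daily_rate = lambda annual_pct: annual_pct / 100.0 / 365.0
    return within_tier * daily_rate(ocr) + above_tier * daily_rate(ocr - 1.0)

# A bank with an (assumed) NZ$500m tier limit holding NZ$700m at the
# mid-2008 OCR of 8.25% earns the full OCR on 500m but only 7.25% on
# the extra 200m, so hoarding above the tier carries a real cost.
print(f"NZ${overnight_interest(700e6, 500e6, 8.25):,.0f} for one night")
```

The penalty restores an opportunity cost on surplus balances, which is what pushed banks back into the overnight interbank market.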

Although it was originally supposed to go into effect in September 2007, the tiering system was put in place a month ahead of schedule to deal with stresses from the emerging global crisis — which “threatened to materially tighten monetary and credit conditions in New Zealand, jeopardising banks’ confidence in continuing access to credit.” In other words, the RBNZ found it more desirable than ever to move away from an orthodox floor system as credit markets, and markets for overnight bank funding especially, tightened, so as to keep its own payments arrangements from contributing unnecessarily to that tightening.

As Ian Nield (2008, p. 14) explains, the establishment of the tiering system, together with the Reserve Bank’s decision to accept domestic bank bills in its overnight standing facility, “had an immediate effect which, broadly speaking, re-normalised the domestic bank bill market,” especially by reducing short-term money market spreads. As the next figure, from Enzo Cassino and Aidan Yao (2011, p. 40), shows, it managed to limit such spreads far more successfully than either the Fed or the ECB, and to do so without adding large quantities of fresh reserves to its banking system, though the difference also reflected the fact that New Zealand’s banks were not so encumbered with toxic assets as some U.S. and European banks.


Three-Month LIBOR-OIS Spread

Some Lessons

New Zealand’s floor experience makes for an interesting comparison with that of the United States. Of parallels, perhaps the most interesting is that in both cases a mere 25 basis point increase in the rate paid on banks’ central-bank balances proved sufficient to sustain a switch from a corridor or corridor-like operating system to a floor system. That such a small absolute change was all it took is particularly impressive in New Zealand’s case, for whereas in the U.S. between 2009 and 2015 25 basis points was a relatively significant amount in comparison to then-prevailing short-term rates, in New Zealand in July 2006 the OCR stood at 7.25 percent — making the 25 basis-point increase in the rate paid on bank deposits proportionately much smaller. The New Zealand case therefore seems to supply strong evidence in support of Donald Dutkowsky and David VanHoose’s claim that even very small changes in reserve-compensation schemes can suffice to trigger major central bank operating-system regime changes.

But it’s the differences between the two experiences that are, after all, most striking. Chief among these is the fact that New Zealand installed its floor system well before the financial crisis began, and did so for reasons unrelated to monetary control. Its sole goal was to boost banks’ liquidity in order to limit the Reserve Bank’s own exposure to intraday credit risk — not to combat a crisis. For this purpose, raising the reward paid on bank settlement balances made perfect sense, for the point was to encourage banks to hold more such balances, and therefore become more liquid, at a time when there was no credit crunch. The switch was undertaken, moreover, in a neutral manner, so as to leave the Reserve Bank’s monetary policy stance unchanged.

The U.S. in October 2008 was, in contrast, in the throes of a credit crisis, when, in retrospect at least, the last thing it needed was a further tightening of credit. Yet the Fed’s decision to start paying interest on bank reserves that month was motivated by its desire, not to enhance banks’ liquidity, but to get them to hoard reserves that the Fed was creating through its emergency lending programs, so that those reserves would not translate into a loosening of the Fed’s policy stance. Interest on excess reserves was therefore resorted to as an instrument of monetary control, and specifically as a means of monetary tightening aimed at offsetting the loosening that would otherwise follow fresh reserve injections. Later, the same new instrument would see to it that still larger reserve additions — the by-product of the Fed’s Large-Scale Asset Purchases — would also be hoarded as so many trillions of dollars of excess bank reserves.

In New Zealand, in contrast, even before the crisis struck authorities became anxious to prevent banks from accumulating more reserves than were deemed necessary for their settlement needs. In consequence a tiering system was planned, which would prevent such hoarding by imposing an interest penalty on above-tier settlement balances. The outbreak of the crisis merely caused the RBNZ to hasten its implementation of the new plan.

Thanks to New Zealand’s switch to a tiered system, its overnight interbank lending market remained active throughout the crisis. New Zealand’s banks therefore continued to rely upon one another as lenders of first resort, turning to the RBNZ for overnight funds only as a last resort — an outcome fully in accord with orthodox doctrine. In the U.S., in contrast, the establishment of a floor system caused the once active federal funds market to altogether cease to function as a conduit for interbank loans.

Finally, although the RBNZ occasionally found it desirable to inject some extra cash into the New Zealand banking system during the crisis, as it did in August 2007 and again in the fall of 2009, those cash additions were — as could be seen in our first figure and as the next figure (Cassino and Yao, 2011, p. 42) makes especially clear — both relatively modest and temporary. Even now, New Zealand’s settlement balances amount to little more than 5 percent of that nation’s GDP, whereas banks’ reserve balances held at the Fed amount to about 13.5 percent of U.S. GDP.


RBNZ Settlement Balances, 2007-2009, Millions of NZ$

Those modest cash additions proved capable, together with several other Reserve Bank programs, of “maintaining the functioning of the New Zealand money market and the flow of domestic credit during the global financial crisis,” as a later RBNZ study concluded. That they did so was due in part to the fact that, by discouraging banks from hoarding reserves in excess of their fixed tier limits, New Zealand’s tiering system preserved the banking system money multiplier, instead of causing it to collapse, as happened in the U.S. The RBNZ’s success in keeping credit flowing may have in turn contributed, if only to a modest extent, to New Zealand’s Great Recession being  both one of the first to end and one of the shallowest.

Conclusion

Although the Fed was hardly alone in establishing a floor system of monetary control — and not all floor systems had consequences like those I document in my book about the U.S. case — the relative success of these other floor systems does not necessarily vindicate the general concept. The New Zealand floor system, in particular, functioned in a relatively orthodox manner only for a period of less than a year, predating the financial crisis. As that crisis dawned, the Reserve Bank of New Zealand retreated from an orthodox floor system by placing definite limits on the balances on which banks would incur no substantial interest opportunity cost. Having thus curtailed New Zealand banks’ appetite for settlement balances, the RBNZ could expect its additions to New Zealand’s monetary base to influence economic activity by way of the same orthodox transmission mechanism, leading to the same marginal stimulus effect, as might have been the case if settlement balances bore no interest at all. Federal Reserve authorities cannot, for all of these reasons, point to New Zealand as supplying a precedent favoring their own decision to adopt and retain a floor system of monetary control.

[Cross-posted from Alt-M.org]

To paraphrase John Lennon, imagine there are no public schools, or private ones, too. That is what writer Julie Halpert ostensibly does in a new Atlantic article in which she purports to conduct a “thought experiment,” first imagining a world of all private schools, then one of all public. But rather than coming off as a true, objective experiment, the piece reads more like a dystopian novel depicting the horrors of an imagined all-private system, while comparatively glancing past the many real, actually experienced stains and injustices of public schooling.

It’s not auspicious that the article, before the “experiment” is even proposed, begins with a description of the posh Detroit Country Day School, which likely reinforces the impression many people seem to have that private schools are snooty preserves of the uber-rich. Halpert notes that the price of Detroit Country Day for high school is about $30,000 per year, but doesn’t mention that the average tuition at a private high school, according to the most recent federal data, is only about $13,000. That average price is high when you’re comparing it to “free” public schools for which you’ve already paid taxes, but it is not Detroit Country Day high.

With commencement of the experiment we are given a little history…very little. Halpert completely bypasses American educational history prior to Horace Mann’s crusade for common schools starting in the 1830s, noting only that some of our oldest high schools, specifically tony West Nottingham Academy and Phillips Academy, date back to the 18th century. Halpert also writes that Mann was largely responsible for “the perception of education as a public good.” She ignores the evidence that education was delivered in myriad ways and was very widespread prior to the common schooling crusade—about 90 percent of white adults were literate by 1840—and that it often had a heavily moral character geared at both the private and public good. This is a huge omission, leaving out evidence that largely private provision of education, though sometimes with a modicum of government funding, worked, at least for those who weren’t subjugated by law. That law was, of course, promulgated by government, the very entity that would supply public schools.

Halpert does somewhat acknowledge a flaw in public schooling, saying that “Mann’s good intentions didn’t always translate into the kind of diversity he envisioned.” Now, Mann’s target may have been diversity in classrooms, but it was greater uniformity coming out, and Halpert at least cites Holy Cross historian Jack Schneider pointing out that the common schools were geared to inculcate basic Protestant beliefs, and were often openly hostile to Catholics. Alas, this is about as deep as the experiment dives into public schooling’s most painful flaw: its repeatedly demonstrated, poisonous inability to handle pluralism and treat diverse people equally even when it wants to, and its easy employment as a tool for soft and sometimes overt, uniform indoctrination. At times the indoctrination has been letting everyone know they should be Protestant; at other times it has been letting them know they must be Nazis. The use of public schools for brainwashing indoctrination in places like Nazi Germany and the Soviet Union is on the extreme end, of course, but Mann himself was clear that he wanted to create greater uniformity in thought and behavior through public schooling—to create a “more far-seeing intelligence, and a purer morality, than has ever existed among communities of men”—as have many public schooling advocates since. Acknowledging that public schooling has repeatedly been used as a tool for social and political control must be a major part of any thought experiment that would objectively contemplate all-public education. But it is not here.

Continuing on, Halpert quickly notes that “not all private schools fall in the same category as Detroit Country Day,” but rather than using that to explicitly state that most private schools are much less expensive, she deploys it in an attack on an all-private system, saying that because private schools can differentiate, “reliable information on school quality would likely be nonexistent.” She continues, explaining that because private schools operate independently, “they’re generally not subject to rules holding them accountable for a certain level of student performance. No rules mean no agreed-upon measures, which mean no standardized assessments whose results parents and policymakers can consult.”

No agreed-upon measures?

Often totally on their own, many private schools have for decades given nationally norm-referenced tests such as the Terra Nova, Iowa Test of Basic Skills, and California Achievement Test, to help schools and parents assess how children are doing. They also readily participate in the Advanced Placement and International Baccalaureate programs. And, of course, lots of private school kids take the SAT and ACT, and schools pursue accreditation. Private schools have a powerful incentive to share nationally comparable test results if parents value them, because parents will demand to see them when deciding where to send their kids. Research has shown parents with choice indeed do this, though they often, very reasonably, put other things, like safety, and whether the schools seem to care for children, higher on their priority lists.

In contrast to the metric-free chaos we’d see in an all-private system, Halpert writes that public schools provide “critical information about a particular school [that] is generally accessible to anyone. This accountability reduces ‘the possibility that parents could be duped,’ said the College of the Holy Cross’s Schneider.”

Really? Let’s remember what common public school metrics often look like: “proficiency” that is often a very low bar and varies wildly from state to state; empty graduation rates; and inscrutable “report cards.” And remember that all children and families are different, and there is huge disagreement over what education is all about, which means no single metric—or two, or three—can capture what makes each individual school special, or what each child needs. There’s public schooling’s inability to handle diversity again! Halpert does cite me noting that all kids are different, but I’m sandwiched between lengthy quotes saying that accountability and good info in an all-private system are impossible, concrete evidence to the contrary notwithstanding…or mentioned. I appear to be but a foil.

Next we get to the inequality-based condemnation of private schooling, an attack predicated on the premise that rich people can get better private schooling than poor, therefore private schooling is bad. When it comes to evidence, this is primarily grounded in conjecture and Chile, which has significant school choice but is also accused of significant inequality in school access.

As a logical proposition, the rich-will-get-better-stuff argument makes little sense: the rich will be able to access better schools than the poor with or without vouchers. What vouchers do is just even things up a bit. And even if private schools were totally outlawed, wealthier people could buy houses in better districts, which is exactly what happens now. Halpert addresses that, but not until the end of the experiment, and not until citing numerous academics declaring that choice would clearly stratify and segregate, and Halpert offering this whopper: “experts tend to agree an all-public-school world would make the United States a higher-functioning, and more harmonious, place by exposing students to peers from different backgrounds.”

Maybe most of the experts Halpert talked to concluded that, but that appears to have been a heavily slanted lot. From what I can tell the only choice supporters she talked to were folks from Detroit Country Day, me, AEI’s Andy Smarick, and Barbara Gee from the group Private Schools with a Public Purpose. Worse, she only cites Smarick pointing out that how much power parents should have over school selection is still a contentious topic; seems to throw me in as a foil; and cites Gee saying kids with dyslexia are actually better served at public than private schools. (The latter comes only after Halpert parenthetically, and with big cost caveats, notes that there are private schools that actually specialize in working with kids with disabilities.) Oh, and at the very end she quotes Donna Orem from the National Association of Independent Schools asking, “Would America be as creative if all the schools in the country were the same?” It’s an important question, but far too little, far too late, appearing at the end of a very long assault on private schooling.

Of course, more important than what experts say is what the evidence says, and it is against the assertion that public schools are better harmonizers than private.

Public schools are hugely segregated, a fact Halpert again gets to only after a long, sharp take-down of private schooling. She also ignores the lengthy empirical evidence that U.S. school choice programs typically provide as good or better education than public schools, usually at a fraction of the cost, and that they actually tend to reduce racial segregation. Far worse, her experiment totally ignores public schooling’s shameful past when it comes to integration, including sometimes painful efforts to “Americanize” immigrants, and decades of forced racial segregation. Well, I shouldn’t say “totally”: the piece does quickly mention “desegregation efforts,” but only to criticize private schools. Without discussing mandated racial segregation in public schools at all, Halpert writes that the private school enrollment share is higher in Nashville, Tennessee, than nationally as a “result of desegregation efforts that prompted white families to seek educational settings where their kids wouldn’t be forced to learn alongside black children.”

If the point of the experiment is to objectively assess public and private schooling, this egregious omission should lead to the whole lab being shut down. To ignore what public schooling did for over a century—and continues to do through housing patterns—while condemning private schools because some people, who had gotten their way for so long through public schools, tried to use private schooling to keep getting their way, is utterly illogical and unfair, but also all too common.

No attack on choice would be complete without a mention of Finland, but Halpert also focuses heavily on Cuba to show how great a no-choice system would be. And Chile, which has widespread school choice, has to be held up as a bad guy. Now, the Finland miracle has been debunked many times, in part by the country’s own falling scores on the exam on which it excelled—the Programme for International Student Assessment (PISA)—as well as its lesser results on other exams, but Cuba has gotten very little attention.

Halpert holds Cuba up as an educational powerhouse, and quotes Stanford professor Linda Darling-Hammond saying that even Chile’s best students “couldn’t come close” to replicating Cuba’s achievement levels. So why haven’t we heard more about this? Probably in part because Cuba has never participated in the big international assessments such as PISA or the Trends in International Math and Science Study. Also, with an authoritarian regime like Cuba’s, there is always a tinge of doubt that the results being reported are real. And then there is the inconvenient reality that Cuba is a dictatorship—not exactly the ideal people want to openly advocate for.

Those things said, Cuba appears to have done very well relative to other participating Latin American countries on two exams: the First Regional Comparative and Explanatory Study (PERCE) and the Second Regional Comparative and Explanatory Study (SERCE). And that did include outpacing Chile. Which shouldn’t be surprising: authoritarian regimes often have high-achieving education systems into which they pour great amounts of attention and resources. Why? Because, as noted already, education is a huge tool for control!

But there’s an important irony here. While Cuba’s system may produce high scores, it does not appear to produce equity. Cuba’s overall performance well outpaced other Latin American countries’, but it also typically produced by far the biggest gaps between its top and bottom performers. In other words, it suffered from the most achievement inequality. Apparently, some Cuban kids are more equal than others. Meanwhile, Chile was consistently in the upper ranks in achievement when Cuba participated in the tests, but had roughly middling gaps between top and bottom performers. For what it’s worth, Chile consistently finished first in the Third Regional Comparative and Explanatory Study (TERCE), which Cuba sat out.

If Cuba is your shining example of what an all-public-schooling system could look like, you have a huge problem. You have an even bigger problem if you don’t seem to realize that.

In a way, uncritically using repressive, dictatorial Cuba in this “thought experiment” exemplifies exactly what is wrong with it: it eschews or soft-pedals almost all of the unpleasant—and sometimes downright awful—realities of public schooling, while heaping worst-case-scenario prognostications on private schooling. It seems, even if not intended, like an experiment designed to get one result: illustrate that an all-private education system would be awful. And that’s not scientific at all.

California governor Jerry Brown has been taking a victory lap of sorts after putting forth a budget for fiscal year 2019 that would include a $6 billion surplus, a remarkable turnaround for a state that hemorrhaged red ink in the wake of the Great Recession.

Of course, much of that surplus arrived via a hefty tax increase, as well as a surfeit of revenue resulting from the stock market boom via capital gains taxes, so attributing this turnaround to fiscal probity might be taking things a bit far.

However, Governor Brown does get credit for at least temporarily righting what seemed to be a sinking ship. What’s more, he seems to realize that this surplus can easily disappear, and he has warned his potential successors to resist spending that surplus. What Brown is fully aware of is that even the most spectacular stock market increase is not enough to erase the state’s most pressing financial problem—namely, its underfunded government pension.

Currently, the state has set aside enough money to cover just 68% of its future pension obligations—certainly far from the most indebted state (that would be my own state of Illinois), but still low enough to dismiss any notion that future stock market growth can remedy the problem.

Despite this, the California Public Employees Retirement System, or CalPERS, has put politics ahead of achieving a high rate of return by insisting that the boards of the companies it invests in adhere to various social and environmental practices.

It’s nonsense, of course, and it amounts to little more than an extension of politics into a realm that doesn’t have room for it.

The problem is that these environmental and social constraints inevitably bring with them a lower rate of return—regardless of what CalPERS and other advocates say to the contrary. And these lower returns will only hasten the day when the state’s taxpayers—or, failing that, federal taxpayers—will be on the hook to cover California’s pension deficit.

A few of the state’s politicians seem to be aware of the bind this places on California citizens: A Democratic state senator recently offered a bill that would allow new state employees to opt out of the state pension plan and simply participate in a defined contribution plan. The state university system already allows newly hired professors to opt out—a recognition that a defined benefit plan does not work well for a peripatetic workforce like academics.

Ultimately, moving to a defined contribution plan might make sense from a long-term sustainability perspective, but transitioning to such a system for everyone would require someone (namely, current taxpayers and state workers) to cover promises already made to current and future state retirees while new employees build up their own retirement balances. In short, someone’s going to be left holding the bag in the Ponzi scheme that is a pay-as-you-go public pension plan.

That’s a tricky path to navigate: Utah, which faced a much smaller per-capita shortfall, did such a thing for its new employees, accomplishing it by making those new employees fork over a portion of their income to cover promised benefits. It is not clear that even this will be sufficient for the state.

California will need every dime it can get its hands on to fund its pension shortfall, and with the country’s highest income tax rate it probably can’t raise personal income taxes much higher. Governor Brown has commented that the state’s retirees should expect a benefit reduction the next time there’s a recession, but most people think any reductions in promised benefits are precluded by the state’s constitution.

At some point a future governor of California will need to figure out how the state is going to cope with billions in promised benefits and insufficient money set aside to keep those promises. That calculus will be much easier if CalPERS doesn’t accept a lower rate of return in exchange for dubious political chits.
