Cato Op-Eds

Individual Liberty, Free Markets, and Peace

As a historian of the Cold War, I have a passing knowledge of a number of meetings between Soviet/Russian leaders and U.S. presidents. Some are famous for getting relations off on the wrong foot (e.g. Kennedy and Khrushchev at Vienna in 1961); others set the stage for great breakthroughs, but were seen as failures at the time (e.g. Reagan and Gorbachev at Reykjavik in 1986); still others are largely forgotten (e.g. Johnson and Kosygin at Glassboro, NJ in 1967). It is impossible to predict how we will remember the first substantive meeting between Donald Trump and Vladimir Putin.

We can see, however, what President Trump wants us to remember. “I think we have great opportunities together as two countries that, frankly,…have not been getting along very well for the last number of years,” Trump said at the opening of the meeting in Helsinki. “I think we will end up having an extraordinary relationship.” 

President Trump has long said, going back to his campaign, that it is important to have good relations with Russia. I agree. I’ve never seen meetings between American leaders and senior government officials and their foreign counterparts as a “reward” for good or bad behavior. It’s called diplomacy. If this first meeting does set a tone for cooperation between the two countries, historians might eventually judge it worthwhile.

Unfortunately, the context surrounding this meeting is not conducive to long-term success. Credible evidence of Russian interference in the 2016 election, affirmed in detail as recently as Friday, casts a long shadow, and makes it very difficult to make progress on matters of mutual interest. Any genuine breakthrough will immediately run afoul of U.S. domestic politics. That President Trump continues to dismiss the Mueller investigation as a “rigged witch hunt” and mostly blames his predecessor for failing to call the Russian election hack to the attention of the American people merely confirms a widespread perception that he doesn’t take it seriously.

In addition, on the heels of last week’s NATO summit, and the G-7 meeting last month, there is the unsettling fact that President Trump seems to prefer meeting with autocrats to meeting with leaders of democracies. We saw that again today, with President Trump praising Vladimir Putin effusively days after he humiliated European leaders. He also spoke warmly of their mutual friend, China’s Xi Jinping. Last month, the president joked about how North Koreans “sit up at attention” when Kim Jong Un speaks, and he would like “my people to do the same.” He seems particularly impressed by how others are able to stifle domestic dissent. This behavior and rhetoric play into his critics’ warnings about Donald Trump’s authoritarian instincts, and today’s meeting does nothing to ease such concerns.

President Trump’s idiosyncrasies notwithstanding, however, I will be paying attention to what, if anything, emerges from his meeting with Vladimir Putin. Possible outcomes include an agreement to discuss nuclear arms control, steps to tamp down the civil war in Syria, and perhaps some resolution on Ukraine. But we’d all be advised to wait a bit before rendering a definitive judgment.

As regular Alt-M readers know, I’ve been saying for over a year now that, despite their promise to “normalize” monetary policy, Fed officials have been determined to maintain the Fed’s post-crisis “floor” system of monetary control, in which changes to the Fed’s monetary policy stance are mainly achieved by means of adjustments to the rate of interest the Fed pays on banks’ excess reserve balances, or the IOER rate, for short.

Until recently the Fed’s intentions had to be inferred by reading between the lines of its official press releases, or by referring to personal preferences expressed by leading Fed officials. But with today’s release of the Fed’s official Monetary Policy Report by the Board of Governors, it’s no longer necessary to speculate. The section “Interest on Reserves and Its Importance for Monetary Policy,” on pp. 44-46, leaves hardly any room for doubt that the Board of Governors still regards the IOER rate as “the principal tool the FOMC [sic] uses to anchor the federal funds rate,” and that it plans to keep on doing so after it “normalizes” monetary policy by completing its ongoing balance sheet unwind and by further raising its fed funds rate target upper limit by another percentage point or so.[1]

An Awkward Start

Having already spilled several gallons of ink criticizing the Fed’s floor system, on these pages and in Floored!, my forthcoming book on the subject, I don’t see the point of reviewing those criticisms here, by way of a comprehensive reply to the Board’s recent remarks defending that arrangement. Still, I can’t resist pointing out some especially galling aspects of those remarks, starting with this opening passage:

The financial crisis that began in 2007 triggered the deepest recession in the United States since the Great Depression. In response, the Federal Open Market Committee (FOMC) cut its target for the federal funds rate to nearly zero by late 2008. Other short-term interest rates declined roughly in line with the federal funds rate. Additional monetary stimulus was necessary to address the significant economic downturn and the associated downward pressure on inflation. The FOMC undertook other monetary policy actions to put downward pressure on longer-term interest rates, including large-scale purchases of longer-term Treasury securities and agency-guaranteed mortgage-backed securities.

These policy actions made financial conditions more accommodative and helped spur an economic recovery that has become a long-lasting economic expansion.

Although the passage itself doesn’t refer to interest on reserves, its purpose is to introduce a discussion devoted to singing the praises of that policy instrument. It’s in light of that intention that the passage raises my hackles. For what the Fed’s report doesn’t say is that, when the Fed introduced IOER in early October 2008, it did so, not because it thought “monetary stimulus was necessary to address the significant economic downturn and the associated downward pressure on inflation,” but because it was determined to prevent its then-ongoing emergency lending from having any stimulus effect, and from thereby becoming a source of unwanted upward pressure on inflation! IOER was, in other words, originally intended to serve as a contractionary monetary policy measure, just when monetary expansion was desperately needed.

And boy did it work! NGDP, which had been growing, albeit at a snail’s pace, went into a tailspin. Nor was that all. Because the Fed’s IOER rate — first set at 75 basis points, briefly lowered to 65 bps, then quickly raised to 100 basis points, and finally lowered again (in early December 2008) to 25 basis points, where it remained for the duration of the crisis — was designed to prop up the fed funds rate by encouraging banks to accumulate excess reserves, when the Fed finally determined that the U.S. economy could use a little stimulus after all, it had no choice but to resort to “other monetary policy actions to put downward pressure on longer-term interest rates, including large-scale purchases of longer-term Treasury securities and agency-guaranteed mortgage-backed securities.”

But we mustn’t be too hard on the authors of the report. After all, it would have been awkward for them to laud the Fed’s floor system after first pointing out how, during the last months of 2008 and the start of 2009, that system played an important part in bringing the U.S. economy to its knees.

Not a Popular System

Another irksome passage in the Board’s report is the one declaring that “Interest on reserves is a monetary policy tool used by all of the world’s major central banks.” Yes, and no. Plenty of central banks pay interest on bank reserves. But the policy the report defends isn’t simply that of paying interest on bank reserve balances, including excess reserve balances. It’s that of using the IOER rate as the Fed’s chief instrument of monetary control, which is the essence of a “floor” operating system. And that means setting an IOER rate high enough to encourage banks to stock up on excess reserves, instead of trading them for other assets.

Although the central banks of several other nations have employed floor systems in the past, today, besides the Fed itself, only the Bank of England and the ECB still rely on floor systems — or something close. Most central banks now rely on “corridor” systems of some kind, in which the central bank’s IOER (“deposit”) rate sets a lower bound on movements in its policy rate, and open-market operations are routinely employed to keep the actual policy rate at a target set somewhere between that lower bound and an upper bound consisting of the central bank’s own lending rate. Finally, a number of other central banks that either used floor systems before the crisis or adopted such systems during it, including the Swiss National Bank, the Bank of Japan, Norges Bank, and the Reserve Bank of New Zealand, switched to “tiered” or “quota” systems afterwards. In a tiered system, reserves may earn interest at a rate that makes them seem attractive relative to other safe assets, but they do so only up to a fixed limit. Beyond that limit they earn only a relatively modest return — if not a zero or negative return. Because the marginal opportunity cost of reserves remains positive in tiered systems, such systems operate more like corridor systems than like a floor system.
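The distinction between a floor and a tiered system comes down to what the marginal dollar of reserves earns. A minimal sketch of tiered remuneration (the rates and quota below are illustrative placeholders, not any central bank's actual parameters):

```python
def tiered_interest(reserves, quota, inside_rate, outside_rate):
    """Interest earned on reserve balances under a tiered (quota) system.

    Balances up to `quota` earn `inside_rate`; anything beyond the quota
    earns only `outside_rate`, which may be zero or even negative.
    """
    inside = min(reserves, quota)
    outside = max(reserves - quota, 0.0)
    return inside * inside_rate + outside * outside_rate

# Illustrative parameters: 2% on the first 100 units of reserves, 0% beyond.
quota, inside_rate, outside_rate = 100.0, 0.02, 0.0

# Under a floor system every marginal dollar earns the full IOER rate.
# Here, once the quota is filled, the marginal dollar earns nothing, so
# the opportunity cost of hoarding extra reserves stays positive.
print(tiered_interest(80.0, quota, inside_rate, outside_rate))   # below quota
print(tiered_interest(150.0, quota, inside_rate, outside_rate))  # above quota
```

Because the marginal return falls once the quota is filled, banks retain an incentive to trade excess reserves for other assets — which is why tiered systems behave more like corridors than floors.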

Just How Low Has the Fed Really Gone?

But of all the irritating claims of the Board’s report, the one that has gone furthest in putting me in high dudgeon is this one:

The rate of interest the Federal Reserve pays on banks’ reserve balances is far lower than the rate that banks can earn on alternative safe assets, including most U.S. government or agency securities, municipal securities, and loans to businesses and consumers. Indeed, the bank prime rate — the base rate that banks use for loans to many of their customers — is currently around 300 basis points above the level of interest on reserves.

To which the following footnote is appended:

The Congress’s authorization allows the Federal Reserve to pay interest on deposits maintained by depository institutions at a rate not to exceed the “general level of short-term interest rates.” The Federal Reserve Board’s Regulation D defines short-term interest rates for the purposes of this authority as “rates on obligations with maturities of no more than one year, such as the primary credit rate and rates on term federal funds, term repurchase agreements, commercial paper, term Eurodollar deposits, and other similar instruments.” The rate of interest on reserves has been well within a range of short-term interest rates as defined in Board regulations.

Where to begin?

It’s absurd, first of all, to treat interest rates on “loans to businesses and consumers,” the prime rate included, as rates on safe assets. But don’t take my word for it: consider what two senior Fed economists, one of whom works at the Board of Governors, have to say on the subject, in a Liberty Street Economics post entitled, “What Makes a Safe Asset?” Safe assets, they write,

are those with a very high likelihood of repayment, and are easy to value and trade …. As a result, safe assets typically trade at a premium, known in the academic literature as a “convenience yield,” which reflects the nonpecuniary benefits investors receive for holding them …

In today’s financial system, the prime example of a safe asset is U.S. Treasury securities. These securities are considered to have zero credit risk, can be easily sold, and can be used as collateral either to raise funding or to post as margin in derivatives positions. … Treasuries’ safe asset status translates into an average yield reduction of 73 basis points. This yield spread can be interpreted as a measure of the convenience yield embedded in Treasuries.

However, Treasuries differ significantly in maturity and that affects their safe asset characteristics. Treasury bills (T-bills) have the shortest maturities and are often thought of as “money-like” assets, that is, assets similar to physical currency. Because of this moneyness, yields on short-term T-bills are typically lower than those on comparable assets….

The private sector can also create safe assets. For example, many of the benefits ascribed to public safe assets are also attributed to private short-term debt of certain issuers. An important difference between public and private safe assets, however, is that the reliability of private safe assets can come into question.

Stretch the notion as much as you like; you will never get “safe assets” to include even the safest bank loans. That is, you won’t be able to do it unless you are a Fed official trying to claim that the Fed’s IOER rate has been “far lower than the rate that banks can earn on alternative safe assets.”

Nor is it possible to justify comparing the Fed’s IOER rate — a rate on assets (reserves) of essentially zero maturity — to rates on otherwise safe longer-term assets. Instead, to sustain the claim that the Fed’s IOER rate has been low relative to that on assets of comparable safety, including comparably low exposure to interest-rate (or duration) risk, Fed officials would have to show that the IOER rate is below rates on safe assets with very short (if not zero) maturities. That rules out comparisons to Treasury and agency bonds and notes, leaving only Treasury bills. Even then the comparison is a bit unfair, as even the shortest-term Treasury bills have longer terms — and are therefore less liquid and safe — than bank reserves.

But let that pass. Instead, let’s just consider how the report’s assertion that the Fed’s IOER rate “is far lower than the rate that banks can earn on alternative safe assets” stacks up against the record regarding yields on various Treasury bills. Let FRED do the talking:

As the chart shows, throughout most of its existence the IOER rate has been well above not just rates on shorter-term Treasury bills, but also those on 1-year T-bills; indeed, for a long interval banks had to hold T-bills of 2-year maturities or longer to earn as much interest as excess reserves paid. And while the situation isn’t nearly so bad today, it remains the case that reserves pay more than one-month Treasury bills. That’s not “far lower than the rate that banks can earn on alternative safe assets.” It’s not even a little lower. It’s higher. Nor could things be otherwise, because having a floor system means having an IOER rate that’s high enough “to remove the opportunity cost to commercial banks of holding reserve balances,” which it wouldn’t be if it were really “far lower than the rate that banks can earn on alternative safe assets.”

“D” for Deception

And what about that footnote? It just adds insult to injury by showing the lengths to which the Fed has been willing to go to twist and bend the law authorizing it to pay interest on bank reserves. As the note correctly observes, that law requires that the Fed’s IOER rate not exceed “the general level of short-term interest rates.” Since the IOER rate is itself, as we’ve seen, a rate on a riskless zero-maturity asset, any reasonable interpretation of the statute would have it refer to the general level of rates on other short-term, riskless assets, such as 4-week Treasury bills or, perhaps, overnight Treasury-secured repos.

So, in preparing Regulation D, how did the Fed define short-term rates for the purpose of implementing the statute? Why, as “rates on obligations with maturities of no more than one year, such as the primary credit rate and rates on term federal funds, term repurchase agreements, commercial paper, term Eurodollar deposits, and other similar instruments” (my emphasis). If you can’t see how self-serving — not to say dishonest — the Fed’s definition is, please read it again, carefully, bearing in mind what “term” rates are and that the Fed’s “primary credit rate” is what’s more commonly known as its “discount” rate — that is, “the interest rate charged to commercial banks and other depository institutions on loans they receive from their regional Federal Reserve Bank’s lending facility–the discount window.”

That Regulation D refers to “term” rates rather than overnight rates, when the latter are obviously more appropriate, is the least of it. The inclusion of the Fed’s primary credit rate on its list of comparable rates is the real kicker. First of all, that rate isn’t a market rate but one that the Fed itself administers. What’s more, the Fed has long had a policy of setting it well “above the usual level of short-term market interest rates” (my emphasis again). These days, for example, it sets it “at a rate 50 basis points above the Federal Open Market Committee’s (FOMC) target rate for federal funds.” Because the IOER rate once defined the upper limit of the FOMC’s fed funds target rate range, and is now set 5 basis points below that limit, any interest rate that the Fed pays on reserves is bound to be lower than the Fed’s primary credit rate. Thus the Fed has cleverly interpreted and implemented the statute in a manner that allows it to claim that it is obeying the law requiring that its IOER rate not exceed “the general level of short-term interest rates” no matter how it sets that rate, including when it sets it well above truly comparable market-determined short-term rates!
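The arithmetic here is mechanical: given those two administered spreads, IOER can never exceed the primary credit rate. A minimal sketch (the target-range upper limit below is an illustrative placeholder, not the FOMC's actual setting at any particular date):

```python
def primary_credit_rate(ff_target_upper):
    # The Fed sets the primary credit (discount) rate 50 basis points
    # above the fed funds target, here taken as the range's upper limit.
    return ff_target_upper + 0.50

def ioer_rate(ff_target_upper):
    # IOER is set 5 basis points below the target range's upper limit.
    return ff_target_upper - 0.05

# Illustrative upper limit of the fed funds target range, in percent.
upper = 2.00

# By construction IOER sits 55 basis points below the primary credit
# rate, whatever the target is -- so a legality test that includes the
# primary credit rate among "comparable" short-term rates can never bind.
print(primary_credit_rate(upper) - ioer_rate(upper))
```

The point of the sketch: since both rates move mechanically with the same target, the comparison the Fed chose is satisfied automatically, regardless of where market rates sit.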

Now I hope you’re at least starting to see why the Fed’s report has got my goat.

_______________________
[1] “Sic” because it is the Board of Governors, rather than the FOMC, that sets the IOER rate. Concerning this anomalous exception to the rule assigning responsibility for the conduct of monetary policy to the FOMC, see my January 10, 2018 testimony before the Monetary Policy and Trade Subcommittee of the House Financial Services Committee.

[Cross-posted from Alt-M.org]

As a physician licensed to prescribe narcotics, I am legally permitted to prescribe the powerful opioid methadone (also known by the brand name Dolophine) to my patients suffering from severe, intractable pain that hasn’t been adequately controlled by other, less powerful painkillers. Most patients I encounter who might fall into that category are likely to be terminal cancer patients. I’ve often wondered why I am approved to prescribe methadone to my patients as a treatment for pain, but I am not allowed to prescribe methadone to taper my patients off of a physical dependence they may have developed from long-term opioid use, so as to help them avoid the horrible acute withdrawal syndrome. I am also not permitted to prescribe methadone as a medication-assisted treatment for addiction. These last two uses of the drug require special licensing and permits and must comply with strict federal guidelines.

The synthetic opioid methadone was invented in Germany in 1937. By the 1960s, methadone was found to be effective as medication-assisted treatment for heroin addiction, and by the 1970s methadone treatment centers were established throughout the US, providing specialized and highly structured care for patients suffering from substance use disorder. The Narcotic Addict Treatment Act of 1974 codified the methadone clinic structure. Today, methadone clinics are strictly regulated by the Drug Enforcement Administration, the National Institute on Drug Abuse, the Substance Abuse and Mental Health Services Administration, and the Food and Drug Administration. These regulations establish guidelines for the establishment, structure, and operation of methadone clinics, in most cases requiring patients to obtain their methadone in person at one fixed site. After a period of time, some of these patients are allowed to take methadone home from the facility to self-administer while they remain closely monitored. This onerous regulatory system has led to an undersupply of methadone treatment facilities for patients in need. Furthermore, the need for patients to travel, often long distances, each day to the clinic to receive their daily dose has been an obstacle to obtaining treatment and complying with the program.

Earlier this month addiction specialists from the Boston University School of Medicine and Public Health and the Massachusetts Department of Public Health argued in the New England Journal of Medicine that community physicians interested in the treatment of substance use disorder should be allowed to prescribe methadone to the patients they see in their offices and clinics. Doctors have been allowed to prescribe the opioid buprenorphine for medication-assisted treatment of addiction for years, and in recent years nurse practitioners and physician assistants have been able to obtain waivers that allow them to engage in medication-assisted treatment as well.

The authors noted that methadone has been legally prescribed by primary care providers to treat opioid addiction in other countries for many years — in Canada since 1963, in the UK since 1968, and in Australia since 1970, for example. They state,

Methadone prescribing in primary care is standard practice and not controversial in these places because it benefits the patient, the care team, and the community and is viewed as a way of expanding the delivery of an effective medication to an at-risk population.

Policymakers serious about addressing the ever-increasing overdose rate from (mostly) heroin and fentanyl afflicting our population should take a serious look at reforming the antiquated regulations that hamstring the use of methadone to treat addiction.


In the few days since President Trump nominated him to be an Associate Justice on the Supreme Court, Judge Brett Kavanaugh has seen his life put under the microscope. It turns out that the U.S. Court of Appeals for the D.C. Circuit judge really likes baseball, volunteers to help the homeless, and has strong connections to the Republican Party – especially the George W. Bush administration. More consequentially, Kavanaugh is an influential judge with solid conservative credentials. For libertarians, Kavanaugh’s record includes much to applaud, especially when it comes to reining in the power of regulatory authorities. However, at least one of Kavanaugh’s concurrences reveals arguments that should concern those who value civil liberties. Members of the Senate Committee on the Judiciary should press Kavanaugh on these arguments at his upcoming confirmation hearing.

In 2015, Kavanaugh wrote a solo concurrence in the denial of rehearing en banc in Klayman v. Obama, in which the plaintiffs challenged the constitutionality of the National Security Agency’s (NSA) bulk telephony metadata program. According to Kavanaugh, this program was “entirely consistent” with the Fourth Amendment, which protects against unreasonable searches and seizures.

The opening of the concurrence is ordinary enough, with Kavanaugh mentioning that the NSA’s program is consistent with the Third Party Doctrine. According to this doctrine, people don’t have a reasonable expectation of privacy in information they volunteer to third parties, such as phone companies and banks. This allows law enforcement to access details about your communications and your credit card purchases without search warrants. My colleagues have been critical of the Third Party Doctrine, filing an amicus brief taking aim at the doctrine in the recently decided Fourth Amendment case Carpenter v. United States.

Because the Third Party Doctrine remains binding precedent, Kavanaugh argues, the government’s collection of telephony metadata is not a Fourth Amendment search. Regardless of one’s opinion of the Third Party Doctrine, this is a reasonable interpretation of Supreme Court precedent from an appellate judge.

Yet in the next paragraph the concurrence takes an odd turn. Kavanaugh argues that even if the government’s collection of millions of Americans’ telephony metadata did constitute a search, it would nonetheless not run afoul of the Fourth Amendment:

Even if the bulk collection of telephony metadata constitutes a search,[…] the Fourth Amendment does not bar all searches and seizures. It bars only unreasonable searches and seizures. And the Government’s metadata collection program readily qualifies as reasonable under the Supreme Court’s case law. The Fourth Amendment allows governmental searches and seizures without individualized suspicion when the Government demonstrates a sufficient “special need” – that is, a need beyond the normal need for law enforcement – that outweighs the intrusion on individual liberty. Examples include drug testing of students, roadblocks to detect drunk drivers, border checkpoints, and security screening at airports. […] The Government’s program for bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States. See THE 9/11 COMMISSION REPORT (2004). In my view, that critical national security need outweighs the impact on privacy occasioned by this program. The Government’s program does not capture the content of communications, but rather the time and duration of calls, and the numbers called. In short, the Government’s program fits comfortably within the Supreme Court precedents applying the special needs doctrine.

This paragraph includes a few points worth unpacking: 1) that the collection of telephony metadata is permitted under the “Special Needs” Doctrine, and 2) that the 9/11 Commission Report buttresses the claim that “The Government’s program for bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States.”

Kavanaugh asserts that the NSA’s program serves a special need, and is therefore exempt from the Fourth Amendment’s warrant requirement. The so-called Special Needs Doctrine usually applies when government officials are acting in a manner beyond what is associated with ordinary criminal law enforcement. Justice Blackmun explained the justification for the doctrine in his New Jersey v. T.L.O. (1985) concurrence:

Only in those exceptional circumstances in which special needs, beyond the normal need for law enforcement, make the warrant and probable cause requirement impracticable, is a court entitled to substitute its balancing of interests for that of the Framers.

Kavanaugh’s concurrence includes a few notable examples of the Special Needs Doctrine, such as drug tests for high school athletes and drunk driving roadblocks. Unlike Klayman, which concerned the indiscriminate bulk collection of millions of citizens’ telephony metadata, these cases involved limited searches specific to an isolated government interest.

In United States v. United States District Court (1972) – the so-called “Keith Case” – the Supreme Court rejected the government’s argument that “the special circumstances applicable to domestic security surveillances necessitate a further exception to the warrant requirement.”

The Court explained why it found the government’s argument unpersuasive:

But we do not think a case has been made for the requested departure from Fourth Amendment standards. The circumstances described do not justify complete exemption of domestic security surveillance from prior judicial scrutiny. Official surveillance, whether its purpose be criminal investigation or ongoing intelligence gathering, risks infringement of constitutionally protected privacy of speech. Security surveillances are especially sensitive because of the inherent vagueness of the domestic security concept, the necessarily broad and continuing nature of intelligence gathering, and the temptation to utilize such surveillances to oversee political dissent. We recognize, as we have before, the constitutional basis of the President’s domestic security role, but we think it must be exercised in a manner compatible with the Fourth Amendment. In this case we hold that this requires an appropriate prior warrant procedure.

Kavanaugh’s argument that the NSA’s domestic spying can override Fourth Amendment protections thanks to “special needs” is at odds with the Supreme Court’s holding in the Keith Case. If the Court expanded special needs to cover the bulk collection of telephony metadata, it would be the most expansive application of the doctrine to date.

It’s important to consider why Kavanaugh believes “bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States.”

In making this claim, Kavanaugh cited the 2004 9/11 Commission Report. This report does not directly recommend the bulk collection surveillance at issue in Klayman, nor does it make the argument that such a program would have prevented the 9/11 attacks.  

In fact, the Privacy and Civil Liberties Oversight Board’s (PCLOB) 2014 report on the NSA’s bulk telephony surveillance program, published before Kavanaugh’s Klayman concurrence, found that the program was not a critically important part of the ongoing War on Terror:

Based on the information provided to the Board, we have not identified a single instance involving a threat to the United States in which the telephone records program made a concrete difference in the outcome of a counterterrorism investigation. Moreover, we are aware of no instance in which the program directly contributed to the discovery of a previously unknown terrorist plot or the disruption of a terrorist attack. And we believe that in only one instance over the past seven years has the program arguably contributed to the identification of an unknown terrorism suspect. In that case, moreover, the suspect was not involved in planning a terrorist attack and there is reason to believe that the FBI may have discovered him without the contribution of the NSA’s program.

Even in those instances where telephone records collected under Section 215 offered additional information about the contacts of a known terrorism suspect, in nearly all cases the benefits provided have been minimal — generally limited to corroborating information that was obtained independently by the FBI.

Kavanaugh’s assertion that the NSA’s invasive surveillance program is justified on national security grounds is simply not supported by the 9/11 Commission Report or the PCLOB’s report.

If the Senate does vote to confirm Kavanaugh, as is widely expected, he will likely be on the bench for decades. In that time, he will hear cases involving warrantless surveillance justified on national security grounds. This surveillance may involve facial recognition, drones, and other emerging surveillance methods. That a potential Supreme Court justice might view such warrantless surveillance as justified because of a national security-based “special needs” exception to the Fourth Amendment should worry everyone who values civil liberties. Members of the Senate Committee on the Judiciary must ask Kavanaugh to better explain his reasoning in Klayman.


Nationwide transit ridership in May 2018 was 3.3 percent less than in the same month of 2017. May transit ridership fell in 36 of the nation’s 50 largest urban areas. Ridership in the first five months of 2018 was lower than the same months of 2017 in 41 of the 50 largest urban areas. Buses, light rail, heavy rail, and streetcars all lost riders. 
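Year-over-year changes like these are straightforward to compute from the FTA's monthly tables. A minimal sketch (the ridership figures below are illustrative placeholders, not the actual FTA totals):

```python
def yoy_pct_change(current, prior):
    """Percentage change from the same month a year earlier."""
    return 100.0 * (current - prior) / prior

# Illustrative May unlinked-trip counts, in millions, for two made-up
# urban areas -- not figures taken from the FTA spreadsheet.
ridership = {
    "Example City A": {"may_2017": 100.0, "may_2018": 96.7},
    "Example City B": {"may_2017": 50.0, "may_2018": 51.1},
}

for area, r in ridership.items():
    change = yoy_pct_change(r["may_2018"], r["may_2017"])
    print(f"{area}: {change:+.1f}%")
```

The same one-line formula, applied column against column in the spreadsheet, yields the urban-area percentage gains and losses discussed below.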

These numbers are from the Federal Transit Administration’s monthly data report. I’ve posted an enhanced spreadsheet that has annual totals in columns GY through HO, mode totals for major modes in rows 2123 through 2129, agency totals in rows 2120 through 3129, and urban area totals for the nation’s 200 largest urban areas in rows 3131 through 3330.

Declines in 2018 continue a trend that began in 2014. Year-on-year monthly ridership has fallen in 21 of the last 24 months, including each of the last seven. The principal cause is likely the growth of Uber, Lyft, and other ride-hailing services, but whatever the cause, there seems to be no positive future for public transit.

Some urban areas bucked the trend: ridership grew by 1.2 percent in Houston, 2.2 percent in Seattle, 2.4 percent in Denver, 1.2 percent in Portland, 5.0 percent in Indianapolis, 7.8 percent in Providence, 7.2 percent in Nashville, and an incredible 63.1 percent in Raleigh. Most of the growth in Raleigh came from students carried by North Carolina State University’s bus system.

On a percentage basis, the biggest losers were Miami, Boston, Cleveland, Kansas City, and Milwaukee, all of which saw about 11 percent fewer riders in May 2018 than May 2017. Ridership fell 9.2 percent in Phoenix, 8.0 percent in Jacksonville, 7.2 percent in Virginia Beach-Norfolk, 6.4 percent in Dallas-Fort Worth, 5.9 percent in Atlanta, and 5.6 percent in Philadelphia.

Numerically, the biggest losses were in New York, whose transit systems carried 12.7 million fewer riders in May 2018 than 2017; Boston, -4.1 million; Los Angeles, -2.4 million; Philadelphia, -1.7 million; and Miami, -1.4 million. Chicago, Washington, Atlanta, and Phoenix all lost more than half a million monthly riders.

Some people have argued that ridership is declining because of cuts to transit services. Others have concluded that the cuts to transit service “mostly followed, and not led falling ridership.” The posted spreadsheet includes data for vehicle-revenue miles of service that could support either view.

Transit service in both Houston and Seattle grew by 2.6 percent, supporting Houston’s 1.2 percent and Seattle’s 2.2 percent ridership gains. Indianapolis’ 5.0 percent increase in ridership was supported by a 9.9 percent increase in service. Service declined 2.0 percent in New York and 3.7 percent in Los Angeles, either reflecting or contributing to falling ridership in those urban areas.

However, ridership declined 2.5 percent in San Diego despite a 10.9 percent increase in service. Ridership in San Jose fell by 4.2 percent despite a 2.4 percent increase in service. Jacksonville’s 8.0 percent loss of riders came in spite of a 2.6 percent increase in service.

It seems clear that service levels are only one of the factors influencing transit ridership. Moreover, there appear to be rapidly diminishing returns to service: large service increases are needed to produce small ridership gains. On the other hand, ridership declines reduce agency revenues, forcing reductions in service that lead to further ridership declines: a classic death spiral.

Transit industry leaders must be hoping for some kind of catastrophe that will send gasoline prices above $4 a gallon, for that is probably the only thing that could save the industry from its current trajectory. That is unlikely, and the industry is not worth saving any other way.

The Senate Judiciary Committee recently voted in favor of a bill that would update copyright law and apply new regulations to interactive streaming services, such as Spotify. The Music Modernization Act (MMA) addresses the issues of non-payment to copyright holders—the basis of a $1.6 billion lawsuit against Spotify—and of undefined, unenforceable music property rights stemming from the lack of a comprehensive database recording the ownership of copyrights. In the current issue of Regulation, Thomas Lenard and Lawrence White recount the history of music copyright law and discuss some of the shortcomings of the MMA.

The New York Times quotes one supporter of the Act as stating, “This is going to revolutionize the way songwriters get paid in America.” But the MMA primarily incorporates streaming services into the existing framework through which distributors of music obtain permission from and provide compensation to music copyright holders.

A key provision of the MMA is that the Register of Copyrights would designate a Musical Licensing Collective (MLC) with two primary functions. The first is to serve as a collective rights organization that grants licenses for interactive streaming, receives royalties from streaming services, and remits the royalties to copyright holders. The second is to create and manage a database of rights holders.

The revolutionary aspect of the MMA is the creation of such a database. Currently, the music industry lacks a comprehensive database that keeps track of copyrights, which is what has created the problems of nonpayment and limited music distributors’ ability to negotiate with individual copyright holders. Lenard and White contend that the database building function of the MLC may be necessary because the economies of scale in managing such a database might be large enough to create a natural monopoly (though nongovernmental groups are already developing open source and blockchain initiatives to solve these problems).

However, Lenard and White argue that by linking the database function of the MLC with its role as a collective rights organization, the MMA simply extends a regulatory regime that limits competition. As it stands, the music copyright system largely consists of compulsory licenses and rates set by administrative or judicial proceedings. The MLC as outlined in the MMA would be a government-enforced monopoly with the same anticompetitive practices.

As Lenard and White state,

Whenever an opportunity for pro-competitive reform of music licensing arises, policymakers seem to revert to the traditional regulatory model that discourages competition. They never miss an opportunity…to miss an opportunity. The MMA—with its reliance on compulsory licensing, blanket licensing by a marketing collective, and regulated rates—is the latest of several recent examples.

Instead of extending the current anticompetitive regulations to streaming services, policymakers should update the music copyright registration system and allow a competitive market to develop through which copyrights are traded. Those changes would provide greater benefits for music creators, distributors, and consumers.

Written with research assistance from David Kemp.

Readers who watched the Cato forum last November on prosecutorial fallibility and accountability, or my coverage at Overlawyered, may recall the story of how a Federal Trade Commission enforcement action devastated a thriving company, LabMD, following a push from a spurned vendor. Company founder and president Mike Daugherty, who took part on the Cato panel, wrote a book about the episode entitled The Devil Inside the Beltway: The Shocking Exposé of the U.S. Government’s Surveillance and Overreach into Cybersecurity, Medicine and Small Business.

Last month two separate federal appeals courts issued rulings offering, when combined, some consolation for Daugherty and his now-shuttered company. True, a panel of the D.C. Circuit Court of Appeals, finding qualified immunity, disallowed the company’s claims that FTC staffers had violated its constitutional rights by acting in conscious retaliation for its criticism of the agency. On the other hand, an Eleventh Circuit panel sided with the company and (quoting TechFreedom) “decisively rejected the FTC’s use of broad, vague consent decrees, ruling that the Commission may only bar specific practices, and cannot require a company ‘to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness.’” [More on the ruling here and here]

As usual, John Kenneth Ross’s coverage at the Institute for Justice’s Short Circuit newsletter is worth reading, both descriptions appearing in the same roundup since they were decided in such quick succession:

Allegation: Days after LabMD, a cancer-screening lab, publicly criticized the FTC’s yearslong investigation into a 2008 data breach at the lab, FTC staff recommend prosecuting the lab. Two staffers falsely represent to their superiors that sensitive patient data spread across the internet. (It hadn’t.) The FTC prosecutes; the lab lays off all workers and ceases operations. District court: Could be the staffers were unconstitutionally retaliating for the criticism. D.C. Circuit: Reversed. Qualified immunity. (Click here for some long-form journalism on the case.)…

Contrary to company policy, a billing manager at LabMD—a cancer-screening lab—installs a music-sharing application on her work computer; a file containing patient data gets included in the music-sharing folder. In 2008 a cybersecurity firm finds it and tells LabMD the file has spread across the internet. (Which is false.) When LabMD declines to hire the cybersecurity firm, the firm reports the breach to the FTC, which prosecutes the case before its own FTC judge. LabMD does not settle; the expense of fighting forces the company to shutter. The FTC orders LabMD to adopt “reasonably designed” cybersecurity measures. Eleventh Circuit: The FTC’s vague order is unenforceable because it doesn’t tell LabMD how to improve its cybersecurity.

Our friend Berin Szóka of TechFreedom sums it up: “The court could hardly have been more clear: the FTC has been acting unlawfully for well over a decade.” He continues by calling this “a true David and Goliath story”:

Well over sixty companies, many of them America’s biggest corporations, have simply rolled over when the FTC threatened to sue them [over data security practices]. … Only Mike Daugherty, the entrepreneur who started and ran LabMD, had the temerity to see this case through all the way to a federal court. …After losing his business and a decade of his life, Daugherty is a hero to anyone who’s ever gotten the short end of the regulatory stick.

When a user clicks on a Google search result, the web browser transmits a “referral header” to the destination website, unless a user has disabled them. The referral header contains the URL of the search results page, which includes the user’s search terms. Websites use this information for editorial and marketing purposes.

In 2010, Paloma Gaos filed a class action in the Northern District of California, seeking damages for the disclosure of her search terms to third-party websites through referral headers, claiming fraud, invasion of privacy, and breach of contract, among other causes of action. She eventually settled with Google on behalf of an estimated class of 129 million people in return for an $8.5 million settlement fund and an agreement from Google to revise its FAQ webpage to explain referral headers. Attorneys’ fees of $2.125 million were awarded out of the settlement fund, amounting to 25 percent of the fund and more than double the amount estimated from class counsel’s actual hours worked.

But no class members other than the named plaintiffs received any money! Instead, the remainder of the settlement fund was awarded to six organizations that “promote public awareness and education, and/or…support research, development, and initiatives, related to protecting privacy on the Internet.” Three of the recipients were alma maters of class counsel.

This diversion of settlement money from the victims to causes chosen by the lawyers is referred to as cy pres. “Cy pres” means “as near as possible,” and courts have typically used the cy pres doctrine to reform the terms of a charitable trust when the stated objective of the trust is impractical or unworkable. The use of cy pres in class action settlements—particularly those that enable the defendant to control the funds—is an emerging trend that violates the due process and free speech rights of class members.

Accordingly, class members objected to the settlement, arguing that the district court abused its discretion in approving the agreement and failed to engage in the required rigorous analysis to determine whether the settlement was “fair, reasonable, and adequate.” The U.S. Court of Appeals for the Ninth Circuit affirmed the settlement, so two objecting class members, including Competitive Enterprise Institute lawyer Ted Frank, asked the Supreme Court to take the case (with a supporting brief from Cato)—which it has.

Cato has filed an amicus brief at this merits stage, arguing that the use of cy pres awards in this manner violates the Fifth Amendment’s Due Process Clause and the First Amendment’s Free Speech Clause. Specifically, each class member has a right to his claim, any compensation that arises from it, and representation that will defend the first two rights. The aggregate nature of class actions makes it easy to forget that their sole foundation is individual rights; class counsel and defendants end up ignoring that foundation and using the class as an aggregate tool for self-interest and collusion. When the settlement includes a cy pres award, it’s worse because class members’ property is involuntarily transferred to strangers. That those strangers are charitable organizations does not improve the situation, because it just gives class counsel and defendants’ collusion a philanthropic veneer. In the end, cy pres awards guarantee that every participant in the litigation derives some benefit except for the class members, the owners of the property being doled out. This perversion of the role of the judiciary is a gross violation of due process, and only a shift to an opt-out system and rigorous supervision by the courts can salvage individual rights.

This morning, USA Today published an article by Brad Heath examining data that show Baltimore (City) Police Department (BPD) activity slowed at the same time that Baltimore’s homicide rate infamously spiked, beginning in 2015. The piece is worth reading in full and the data deserve a more detailed response, but at the outset it’s important to note what the data do not say.

Several current and former members of the BPD quoted in the piece say that front-line officers are unwilling to do their jobs because of the public backlash to Freddie Gray’s death. Recall that, following a chase, several Baltimore police officers shackled Gray but left him unsecured in the back of a police van—strongly resembling what is colloquially known as a “rough ride,” an unofficial retaliation for making police officers chase someone, also known as a “run tax”—and Gray consequently died of a broken neck suffered in that van. The subsequent, though unsuccessful, criminal prosecutions of the BPD officers involved for what looked like illegal extrajudicial punishment apparently discourage front-line officers from being proactive about keeping the community safe. One way to read the USA Today data is that, as a consequence of this slowdown, murder rates have jumped precipitously.

It is a damning indictment indeed if BPD officers feel they need the freedom to needlessly kill Baltimore residents in order to do their jobs effectively. The data certainly show a work slowdown by Baltimore officers, and that slowdown may, in fact, be one factor contributing to the rise in homicides. But that front-line officers feel this way about the people they are sworn to protect reflects a mindset that is anathema to positive police-community relations, and it endangers a community that has no reason to trust its police force.

Rather than being the cause of Baltimore’s murder spike, the BPD work slowdown is more likely just one symptom of an unhealthy departmental culture. That department has repeatedly proven itself unworthy of the public trust, and the community suffers greatly because of it.

Watch this space for more on this topic.

Even as public opinion shifts in favor of marijuana legalization, with sixty percent of Americans supporting broad legalization and ninety percent supporting medical use, Attorney General Jeff Sessions and the Department of Justice (DOJ) continue to stonewall efforts to expand availability of cannabis and cannabis-derived treatments for medical research.

In testimony to a Senate Appropriations subcommittee in April, Sessions argued that although recent studies have shown that access to medical marijuana reduces opioid overdose deaths, the evidence to support expanding access is still insufficient.

This is simply untrue. While DOJ and DEA policies have limited the ability of U.S. researchers to access and experiment with medical-grade marijuana, substantial peer-reviewed scientific research supports the benefits of medical marijuana.

Medical marijuana has been shown to improve the quality of life and health outcomes of patients with cancer, multiple sclerosis, Parkinson’s disease, chronic pain, PTSD, and many other ailments. Israel and many European Union countries lead the way in medical and pharmaceutical research. The market for medical marijuana is projected to be worth $55 billion by 2025, and biopharmaceutical firms are entering multi-million-dollar partnerships with universities to advance the research and development of new cannabis-based medications.

Yet despite the economic and humanitarian gains from expanding research into medical marijuana, the DOJ refuses to expand marijuana production for scientific use. In August 2016 the DEA issued a policy statement providing a legal registration process for marijuana suppliers. None of the 25 applications submitted thus far has been accepted or rejected. Instead of allowing the regulated production of marijuana for research purposes, as the law allows, the DEA is keeping applicants in bureaucratic limbo.

When questioned about the administrative inaction, Sessions argued that language in the policy violated the 1961 United Nations Single Convention on Narcotic Drugs. Yet the treaty contains broad exemptions for medical research and use, and, given the proliferation of marijuana research abroad, legal pathways exist that do not violate the treaty.

Even as the DEA refuses to take action, other federal agencies are quietly accepting medical marijuana. The FDA recently approved a drug containing CBD (cannabidiol) derived from marijuana. In a statement, FDA Commissioner Scott Gottlieb said, “We’ll continue to support rigorous scientific research on the potential medical uses of marijuana-derived products and work with product developers who are interested in bringing patients safe and effective, high quality products.”

Restricting scientific research and development within the United States will only hurt American scientists, companies, and patients. While Jeff Sessions may continue to argue fiercely against medical marijuana, the tide is turning.

Research assistant Erin Partin contributed to this blogpost.

In a recent Philadelphia Inquirer opinion piece, White House economic advisor Peter Navarro hailed the christening of a new transport ship in the nearby Philly Shipyard as evidence of the “United States commercial shipbuilding industry’s rebirth.” As is typical of Navarro’s pronouncements, the reality is almost the exact opposite. In fact, a closer examination of the ship’s construction reveals it to be symptomatic not of a rebirth, but of the industry’s long downward slide.

Navarro describes the 850-foot Aloha-class vessel, named after the late Senator Daniel K. Inouye of Hawaii, as “massive” and notes that it is “the largest container ship ever built in the United States.” This, however, is somewhat akin to being the tallest Lilliputian. Although perhaps remarkable in a domestic context, by international standards the ship is a relative pipsqueak. Triple-E class ships produced by Daewoo Shipbuilding & Marine Engineering for Maersk Line, for example, are over 1,300 feet in length. While the Inouye’s cargo capacity is listed at 3,600 TEUs (twenty-foot equivalent units, each roughly equivalent to one standardized shipping container), the Triple-E class can handle 18,000.

The only thing truly massive about the Inouye is its cost. The price tag for this vessel and another Aloha-class ship also under construction at the Philly Shipyard is $418 million, or $209 million each. The Triple-E vessels purchased by Maersk Line, meanwhile, cost about $190 million each. The South Korean-built ships, in other words, offer five times the cargo capacity for nearly $20 million less.
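The gap is even starker on a per-container basis. A quick back-of-the-envelope calculation using the prices and capacities cited above (construction cost per TEU slot only; operating costs and financing are ignored in this sketch):

```python
# Back-of-the-envelope cost-per-slot comparison using the figures cited above.
inouye_price, inouye_teu = 209_000_000, 3_600       # Aloha-class, Philly Shipyard
triple_e_price, triple_e_teu = 190_000_000, 18_000  # Triple-E, Daewoo-built

print(round(inouye_price / inouye_teu))      # prints 58056  (dollars per TEU slot)
print(round(triple_e_price / triple_e_teu))  # prints 10556  (dollars per TEU slot)
print(triple_e_teu / inouye_teu)             # prints 5.0    (capacity ratio)
```

By this rough measure, each container slot on the U.S.-built ship costs more than five times as much as one on its South Korean-built counterpart.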

But the story gets worse.

The Wall Street Journal reports that after the Philly Shipyard completes work on “two small ships”—a reference to the Inouye and its sister vessel—it has no more orders lined up. The shipyard is already laying off 20 percent of its workforce and the dearth of future work has prompted speculation of a possible shutdown. Sadly, the Philly Shipyard’s travails are hardly atypical of the U.S. shipbuilding industry, and even Navarro admits that the sector has seen its workforce decline from 180,000 in 1980 to 94,000 today.

And yet we are to believe that the Inouye’s construction heralds an industry rebirth?

At least credit the White House advisor for assigning proper blame for this sad state of affairs (which he misguidedly presents as credit). The Inouye, Navarro says, is in large part the result of a protectionist law called the Jones Act. He’s not wrong. Formally known as the Merchant Marine Act of 1920, the law mandates that ships transporting merchandise between two domestic ports be U.S.-built, U.S.-owned, U.S.-flagged, and U.S.-crewed.

The result is that instead of purchasing cheaper foreign-built ships, Americans face enormous prices for relatively small vessels. The cost of transportation, in turn, is higher than it would otherwise be, while the number of Jones Act-compliant vessels has declined, along with jobs for mariners and shipbuilders. The ships that remain, meanwhile, are far older than their foreign counterparts—no surprise given the cost deterrent to buying new ones. While the Inouye is brand new, the average Jones Act cargo ship is 34 years old. The international average is 25.2 years.

Consistent with other protectionist misadventures, the Jones Act’s list of victims includes those it was meant to help.

Rather than recommitting to the Jones Act and other failed forms of maritime protectionism, as Navarro is so eager to do, the United States should instead be aggressively seeking the law’s repeal. An increasingly untenable status quo demands nothing less. Learn more about Cato’s Project on Jones Act Reform.

President Trump and his trade advisers are the most vocal in putting forward misguided views on the trade deficit, but, unfortunately, their position is a bipartisan one. Here’s something Congressman Brad Sherman of California said recently:

But Rep. Brad Sherman (D-CA), ranking member of the House Foreign Affairs Asia and the Pacific subcommittee, told Inside U.S. Trade he would be “surprised if any [bilateral] deal is finalized in the next 12 months.” Sherman met with Gerrish late last week, he said.

“Look, we spent 50 years telling the world that the only moral and correct thing to do was to have the United States run an enormous trade deficit with the entire world,” he said. “Of course, they decided to agree. Getting them to change their minds is not something that we are doing all that effectively and it’s certainly not something that is easy.”

Asked if he was confident a bilateral deal would be initiated in the near future, Sherman said “no, definitely not.”

Gerrish, he said, “was getting my input, but my input is certainly if you are dealing with a managed economy there has to be stated goals for how large the trade deficit will be or whether it will be balanced trade,” he said. “And it’s good to have people focused on the trade deficit; whether they are going about it the right way is perhaps another story. But ignoring it is a short-term strategy.”

Asked which countries might be top contenders for a bilateral, Sherman said none, adding that the criteria USTR was using to determine candidates was based on countries that trade fairly.

The U.S. will “strike deals” only with countries “that will provide for balanced and fair trade, of which there are none that I’m aware of right now,” he said.

The notion that a trade deal should lead to “balanced trade” seems like it comes from a Cuba-Venezuela trade arrangement, in which oil is traded for doctors. In the free market world of trade agreements, by contrast, the parties agree not to a barter of goods and services, but rather to remove tariffs and other protectionist barriers. The resulting bilateral trade balance is something to be determined by the market. The new trade flows are probably worth studying for various academic reasons, but are not a measure of success or failure of the deal.

By contrast, Congressman Sherman seems to think that the negotiation is over the trade deficit itself: “there has to be stated goals for how large the trade deficit will be.” But that is not how U.S. trade negotiations work, or should work. What we negotiate about is the level of tariffs and other barriers. (Ideally, both sides would agree to have no tariffs, although in practice the result is often just a lowering of tariffs.)

There can be complications from trading with the “managed economies” he refers to, but those can be dealt with in trade agreements through specific rules. For example, agreements can establish rules on how state-owned enterprises should behave. There were rules of this sort in the Trans-Pacific Partnership, and it would be a good idea for someone to propose similar rules in an agreement with China.

International rules to limit managed trade and constrain protectionism are a good idea. A (bipartisan) focus on bilateral trade deficits, by contrast, won’t address these fundamental issues, and is a big mistake.

Congressman Sherman’s comments did not surprise me, because I had a brief exchange with him on this very issue in a House Committee hearing last year on the impact of a US-UK trade agreement (starts at 1:12:28). He asked the following question and was looking for a short answer: “Would a deal with Britain that simply eliminated all tariffs be good or bad for reducing America’s trade deficit? … it’s possible that it can’t be estimated.” I knew I wouldn’t be able to have a real discussion with him in this setting on the value of trade deficits as a metric, but in answering I wanted to get the point out there that looking at trade deficits is a mistake, so I said: “I can’t estimate it but I also don’t think trade deficits are bad for the economy.” He responded by saying, “We lose 10,000 jobs for every billion dollars of trade deficits …”, but then quickly moved on.

We have spent a lot of time over the years rebutting the misunderstandings about trade deficits: See, e.g., here, here, here, and here. But clearly, there is still work to do. 

I’ve previously blogged about Allah v. Milling, a case in which a pretrial detainee was kept in extreme solitary confinement for nearly seven months, for no legitimate reason, and subsequently brought a civil-rights lawsuit against the prison officials responsible. Although every single judge in Mr. Allah’s case agreed that these defendants violated his constitutional rights, a split panel of the Second Circuit said they could not be held liable, all because there wasn’t any prior case addressing the “particular practice” used by this prison. Cato filed an amicus brief in support of Mr. Allah’s cert petition, which explicitly asks the Supreme Court to reconsider qualified immunity—a judge-made doctrine, at odds with the text and history of Section 1983, which regularly allows public officials to escape accountability for this kind of unlawful misconduct.

I also blogged about how, on June 11th, the Supreme Court called for a response to the cert petition, indicating that the Court has at least some interest in the case. The call for a response also triggered 30 days for additional amicus briefs, and over the last month, Cato has been coordinating the drafting and filing of two such briefs—one on behalf of a group of leading qualified immunity scholars (detailing the many recent academic criticisms of the doctrine), and the other on behalf of an incredibly broad range of fifteen public interest and advocacy groups concerned with civil rights and police accountability. 

The interest-group brief is especially noteworthy because it is, to my knowledge, the single most ideologically and professionally diverse amicus brief ever filed in the Supreme Court. The signatories include, for example, the ACLU, the Institute for Justice, the Second Amendment Foundation, Americans for Prosperity (the Koch brothers’ primary advocacy group), the American Association for Justice (formerly the Association of Trial Lawyers of America), the Law Enforcement Action Partnership (composed of current and former law-enforcement professionals), the Alliance Defending Freedom (a religious-liberties advocacy group), and the National Association of Criminal Defense Lawyers. The brief’s “Statement of Interest” section, after identifying and describing all of the individual signatories, concludes as follows:

The above-named amici reflect the growing cross-ideological consensus that this Court’s qualified immunity doctrine under 42 U.S.C. § 1983 misunderstands that statute and its common-law backdrop, denies justice to victims of egregious constitutional violations, and fails to provide accountability for official wrongdoing. This unworkable doctrine has diminished the public’s trust in government institutions, and it is time for this Court to revisit qualified immunity. Amici respectfully request that the Court grant certiorari and restore Section 1983’s key role in ensuring that no one remains above the law.

The primary theme of this brief is that our nation is in the midst of a major accountability crisis. The widespread availability of cell phones has led to large-scale recording, sharing, and viewing of instances of egregious police misconduct, yet more often than not that misconduct goes unpunished. Unsurprisingly, public trust in law enforcement has fallen to record lows. Qualified immunity exacerbates this crisis, because it regularly denies justice to victims whose constitutional rights are violated, and thus reinforces the sad truth that law enforcement officers are rarely held accountable, either criminally or civilly.

Moreover, qualified immunity not only hurts the direct victims of misconduct, but law enforcement professionals as well. Policing is dangerous, difficult work, and officers—most of whom do try to uphold their constitutional obligations—increasingly report that they cannot effectively carry out their responsibilities without the trust of their communities. Surveys of police officers thus show strong support for increased transparency and accountability, especially by holding wrongdoing officers more accountable. Yet continued adherence to qualified immunity ensures that this worthy goal will never be reached.

The Supreme Court is in recess now, and the defendants’ response brief won’t be due until September 10th, so we’re going to have to wait until early October to find out if the Supreme Court will take the case. But the Court, the legal community, and the public at large should now be aware that criminal defense lawyers, trial lawyers, public-interest lawyers of every ideological stripe, criminal-justice reform groups, free-market & limited-government advocates, and law enforcement professionals themselves all agree on at least one thing—qualified immunity is a blight on our legal system, and the time has come to cast off this pernicious, counter-productive doctrine.

In a 2012 dissent from a D.C. Circuit opinion, Supreme Court nominee Brett Kavanaugh acknowledged that “dealing with global warming is urgent and important,” but held that any sweeping regulatory program would require an act of Congress:

But as in so many cases, the question here is: Who Decides? The short answer is that Congress (with the President) sets the policy through statutes, agencies implement that policy within statutory limits, and courts in justiciable cases ensure that agencies stay within the statutory limits set by Congress.

Here he sounds much like the late Justice Antonin Scalia, speaking for the majority in the 2014 case Utility Air Regulatory Group v. EPA:

When an agency claims to discover in a long-extant statute an unheralded power to regulate “a significant portion of the American economy” we [the Court] typically greet its announcement with a measure of skepticism.  We expect Congress to speak clearly if it wishes to assign to an agency decisions of vast “economic and political significance.”

Scalia held this view so strongly that, in his last public judicial act, he wrote the order (issued 5-4) staying the Obama Administration’s sweeping “Clean Power Plan.” The Court grants such stays only when a majority believes the challengers are likely to prevail when the underlying case is decided on the merits.

This all traces back to the landmark 2007 ruling, 5-4, in Massachusetts v. EPA, that the Clean Air Act indeed empowered the EPA to regulate emissions of carbon dioxide if the agency found that they endangered human health and welfare (which it subsequently did, in 2009). Justice Kennedy, Kavanaugh’s predecessor, voted with the majority.

Will Kavanaugh have a chance to reverse that vote? That depends on what the new Acting Administrator of the EPA plans to do about carbon dioxide emissions. If the agency simply stops any regulation of carbon dioxide, there will surely be some type of petition to compel the agency to continue regulation because of the 2009 endangerment finding. Alternatively, those already opposed to it might petition based upon the notion that the science has changed markedly since 2009, with increasing evidence that the computer models that were the sole basis for the finding have demonstrably overestimated warming in the current era. It’s also possible that Congress could compel EPA to reconsider its finding, and that a watered-down version might find itself at the center of a court-adjudicated policy fight.

Whatever happens, though, it is clear that Brett Kavanaugh prefers congressional statutes to agency fiat. Assuming that he is confirmed, he will surely bring that preference to the Court: dealing with global warming may be “urgent and important,” but it is the job of Congress to write the regulatory statutes.

Alexandria Ocasio-Cortez, the recent winner of a Democratic primary for Congress in New York, argued that free-trade agreements (FTAs) have caused the number of refugees and asylum seekers to the United States to grow.  This is a common claim among critics of trade generally and of FTAs in particular. 

To test this claim, we gathered a list of all the FTAs that the United States has signed and counted how many asylum seekers and refugees each partner country has sent to the United States since the year 2000.  We combined all asylum seekers, affirmative and defensive, that were counted by the United Nations Human Rights Commission.  Some asylum seekers from these countries are double- or triple-counted due to the oddities of the asylum system.  We then added refugee admissions from the Department of Homeland Security. 

Next, we ran several regressions to see the relationship between having an FTA with the United States and the number of asylum seekers, refugees, or those two categories of humanitarian visas combined who arrive in the United States from those countries.  The first regression was a difference-in-differences with two-way fixed effects.  The second was a difference-in-differences regression with linear time trends.  The third was a triple difference-in-differences with two-way fixed effects that also included asylum seekers, refugees, and humanitarian immigrants from Latin America specifically.  To ensure proper statistical inference, we computed robust standard errors clustered at the country level to correct for country-level autocorrelation in these variables.

Our results show no statistically significant change in the number of asylum seekers or refugees that countries send to the United States after they sign an FTA in any of the above regressions.  We also find very low within R-squared values for these models, which suggests that the presence of FTAs has very little power to predict within-country variation in the number of asylum seekers and refugees.  In other words, FTAs don’t explain the flow of asylum seekers and refugees; other variables that we did not include in our model do.
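For readers unfamiliar with the method, the core logic of difference-in-differences can be shown with a stripped-down 2x2 example. This is only an illustrative sketch with made-up numbers; the actual analysis uses country-year panel data, two-way fixed effects, and standard errors clustered by country.

```python
# Minimal 2x2 difference-in-differences sketch (illustrative only).
# Compares the change in outcomes for FTA signers against the change
# for non-signers over the same period; a common trend nets out.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Change for the treated group minus change for the control group."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical average annual asylum-seeker counts (made-up numbers):
# FTA partners rise from 100 to 130; non-partners rise from 80 to 110.
effect = did_estimate(100, 130, 80, 110)
print(effect)  # 0: both groups rose by 30, so no effect is attributed to the FTA
```

Both groups trend upward by the same amount, so the estimated FTA effect is zero even though asylum claims rose in absolute terms, which is the kind of confound the regressions above are designed to remove.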

Figure 1 shows the number of asylum seekers from countries that have signed an FTA since 2000 in the five years before and after the agreement went into effect.  Each line represents a different country.  There is no relationship between signing an FTA and the number of asylum seekers.

Figure 1

Asylum Seekers within Five Years of Signing an FTA per Country

Source: United Nations Human Rights Commission.

The refugee system is the other half of the humanitarian immigration system, and it likewise shows no change in the number of refugees before and after the signing of FTAs (Figure 2).  It’s worth noting that FTA partner countries send very few refugees to the United States, and almost all of those shown in Figure 2 are Colombian.

Figure 2

Refugees within Five Years of Signing an FTA per Country

 

Source: Department of Homeland Security.

There are many potential explanations for changes in the number of asylum seekers and refugees coming to the United States.  They range from changing conditions in other countries to alterations in American law or policy and everything in between—but let us set aside the notion that FTAs somehow force people to flee their home countries. 

The Trump administration has announced it is suspending so-called “risk adjustment” payments to insurers who participate in ObamaCare’s Exchanges, and cutting spending on so-called “navigators,” who help (few) people enroll in ObamaCare plans. 

The Washington Post’s Catherine Rampell and other ObamaCare supporters are calling these steps sabotage. In fact, what these steps will do is make the costs of ObamaCare’s supposedly popular preexisting-conditions provisions more transparent.

Risk-Adjustment (Bailout) Payments to Insurers

ObamaCare’s so-called “risk adjustment” program exists to funnel money to insurers who enroll lots of sick people who cost more in claims than they pay in premiums. Without it, insurers probably wouldn’t participate in ObamaCare. We may therefore confidently describe the risk-adjustment program as a bailout designed to rescue insurers from the costs of ObamaCare’s preexisting-conditions provisions. 

The risk-adjustment program does a better job of protecting insurance companies than sick patients. Those preexisting-conditions provisions literally punish insurers for offering coverage that the sick find attractive. They therefore create powerful financial incentives for insurers to make their offerings unattractive to the sick.

The risk-adjustment program is supposed to counteract those incentives. Anecdotal evidence and empirical research both show it’s not working. The risk-adjustment program is failing to counteract the perverse incentives that ObamaCare itself creates. ObamaCare coverage is therefore getting worse for many sick patients. Don’t worry, the insurance companies come out okay. Insurers can mitigate whatever losses the bailouts don’t cover with even more restrictive benefit designs to keep the sick away. Sick patients fare less well.

Reducing or eliminating spending on the risk-adjustment program would reveal more of the harms of the preexisting-conditions provisions. More of the cost would fall on insurers, who would respond by offering even more restrictive coverage, or exiting the market. More such transparency might finally push Congress to repeal those provisions and put health care for the sick on a more stable footing. 

In February, a federal district court in New Mexico ordered the Centers for Medicare & Medicaid Services to cease using its methodology for making risk-adjustment payments until the agency adequately explains that methodology. On July 7, the agency announced it will not make any risk-adjustment payments until the issue is resolved.

The insurers will eventually get their bailouts. But the delay will cost them money and add uncertainty to the process. Those effects in turn may lead insurers to take even greater steps to protect themselves from the costs of the preexisting-conditions provisions—thereby making those costs more transparent.

Cutting Navigator Spending

ObamaCare authorizes CMS to make grants to “navigators”—i.e., groups who are supposed to help people enroll in ObamaCare plans. They are a waste of taxpayer money, and likewise hide the costs of ObamaCare’s preexisting-conditions provisions.

According to CMS, navigators received $63 million for plan year 2017 and $36 million for plan year 2018. In both years, they signed up less than 1 percent of ObamaCare enrollees. “During grant year 2016-2017,” CMS reports, “seventeen of those Navigators enrolled fewer than 100 people at an average cost of $5,000 per enrollee.” That’s more than the cost of the health insurance, in many cases. The Wall Street Journal reports, “One grantee took in $200,000 to enroll a grand total of one person. The top 10 most expensive navigators collected $2.77 million to sign up 314 people.” The Las Vegas Review-Journal editorializes, accurately, “the navigator scheme is a make-work government jobs program rife with corruption and highly susceptible to scam artists. It’s a slush fund for progressive constituent groups.”
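The per-enrollee figures quoted above can be checked with simple division. The dollar totals and enrollee counts come from the CMS and Wall Street Journal figures cited in the text; the arithmetic is ours.

```python
# Back-of-the-envelope check of the navigator cost figures quoted above.

top_ten_cost = 2_770_000        # top 10 most expensive navigators, total grants
top_ten_enrollees = 314         # total people they signed up
per_enrollee = top_ten_cost / top_ten_enrollees
print(round(per_enrollee))      # 8822: roughly $8,800 per sign-up

single_grantee = 200_000 / 1    # one grantee, one enrollment
print(single_grantee)           # 200000.0 dollars for a single person
```

Even the top-ten average works out to several times the cost of many Exchange plans, which is the point of the comparison in the text.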

The navigator program also hides the cost of ObamaCare’s preexisting-conditions provisions. Since the sick will reliably enroll in ObamaCare even without navigators, those whom navigators end up enrolling are disproportionately healthy, which spreads the costs of those provisions across more (healthy) people. Cutting spending on navigators will likewise reveal more of the costs of those provisions.

The Trump administration announced it is cutting spending on navigators to $10 million for plan year 2019. It should eliminate the program entirely. The less the federal government spends on navigators, the more transparent ObamaCare’s costs will be.

* * * 

When ObamaCare supporters complain about such steps, they are describing transparency as sabotage. Think about what that means.

The anxiety leading up to this week’s NATO summit is unusually intense, thanks in large part to President Trump’s fractious relationship with European allies. Trump’s political values are often in tension with those of his transatlantic counterparts, and the White House is inching ever closer to an all-out trade war with Europe and Canada, but the real drama of the NATO summit will center on Trump’s brash accusations of allied free-riding. He recently sent letters to many European capitals berating them for not meeting their pledge to spend at least 2 percent of GDP on defense.

In a post at the International Institute for Strategic Studies, Lucie Béraud-Sudreau and Nick Childs try to push back on the notion that providing for European defense is all that costly for the United States. While it is true that the $602.8 billion the United States spent on its military in 2017 “was the equivalent of 70.1% of aggregate spending by all NATO member states,” this exaggerates the true cost, they argue.

Direct U.S. spending on European defense, by their estimate, is only about $30.7 billion in 2017 and $36 billion in 2018, or between 5.1% and 5.5% of the total U.S. defense budget.

How do they calculate this number? They tally up the cost of three things: (1) direct funding for NATO, including common procurements; (2) the costs of the U.S. military presence in Europe; and (3) U.S. foreign military assistance.
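A quick check confirms the percentages against the spending totals quoted above. The 2017 figures are from the text; the implied 2018 budget total is our back-calculation, not an IISS number.

```python
# Sanity check of the IISS percentages using only the figures in the text.

europe_cost_2017 = 30.7          # billions: IISS estimate of U.S. spending on European defense
total_budget_2017 = 602.8        # billions: 2017 U.S. military spending
share_2017 = europe_cost_2017 / total_budget_2017 * 100
print(round(share_2017, 1))      # 5.1 -- the low end of the 5.1%-5.5% range

# Working backwards, a $36bn cost at 5.5% of the budget implies a 2018
# total of roughly $655 billion (our inference, not stated in the text).
implied_budget_2018 = 36 / 0.055
print(round(implied_budget_2018))
```

The numbers are internally consistent, which matters because the argument that follows is about what those percentages leave out, not about the division itself.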

Now, $30-$40 billion every year is nothing to sniff at. That is an enormous chunk of change for an America that is $21 trillion in debt to be spending on the defense of a region that is remarkably rich, powerful, and safe.

The problem, however, is that this understates the true cost of America’s NATO commitments. It is misleading to count the U.S. contribution to NATO solely as a sum of direct annual costs. The tally should also account for the indirect cost of maintaining a military big enough to fulfill our security commitments in Europe. It must account for some share of the permanent force structure that would shift to the reserves, or disappear entirely, if the United States wasn’t pledged to treating an attack on Paris, France or Podgorica, Montenegro as synonymous with attacks on Paris, Texas, or Portland, Maine. This more inclusive count is very difficult if not impossible to calculate with precision, but it is more honest.

Moreover, if the debate about NATO burden sharing boils down to bickering over budget accounting, it would seem like proponents of the status quo are playing hide the ball. The object of U.S. foreign policy is to discourage other countries from spending more on defense. It is disingenuous to pretend otherwise. Free riding is not a bug of U.S. grand strategy, it is a feature of it — a point made perhaps too candidly by the Manhattan Institute’s Claire Berlinski: “How is it, then,” she asks, “that suddenly, we’re consumed with rage that Europe is ‘taking advantage’ of us? How have we forgotten that this is the point of the system? We designed it this way…”

She’s right. As Hal Brands, one of the leading scholarly proponents of America’s post-war grand strategy, explains in his book American Grand Strategy in the Age of Trump, the United States provides “protection that allows other countries to underbuild their militaries.” Or, as Christopher Layne writes in his book The Peace of Illusions, Washington “used NATO…to foreclose the possibility that the West European states would re-nationalize their security policies.”

If America is going to have a debate about security guarantees, it must be an informed one. It should not rest on downplaying the true costs of such policies, nor should it pretend that free riding is some kind of mistake. It seems rather futile to defend the strategy by arguing against its very logic.

The author thanks Christopher A. Preble and Caroline Dorminey for input on this post. 

The “fighting season” for public schools, not surprisingly, is roughly September through May, with summer vacations in June through August keeping the clash-rate down. So June doesn’t have as many new values- and identity-based battles as most other months—15 were added to the Map—and we won’t be posting dispatches for August and September, unless something surprising happens. Of course, you can follow the Battle Map Twitter feed, @PubSchoolFights, for new and updated conflicts whenever news breaks, and you can also search #WWFSchool, or post battles you find using that hashtag. And while the Facebook page will also slow down a bit, we’ll post interesting tussles we find there, too.

Despite the waning action, June produced a few battles exemplifying the problems of forcing diverse people to fund a single system of government schools.

There is no bigger stage in the country—including in education—than New York City, with its 8.6 million residents and more than 1.2 million school-aged children. It is also very diverse ethnically and racially, and Mayor Bill de Blasio’s proposal to change how students are admitted to the city’s eight top high schools, from using test scores alone to admitting anyone finishing in the top 7 percent of their middle school class, sparked a battle not just about admissions, but race. While many African Americans and Hispanics, whose children have disproportionately low representation in the highly competitive schools, saw the proposal as at least a first step toward equity, many Asian Americans, whose children have disproportionately high representation, vigorously objected.

“The mayor is pitting minority against minority and that’s really messed up,” said Kenneth Chiu, president of the New York City Asian-American Democratic Club. “New York City has taken our money for several years and no one has provided help for us.”

When government controls access to schools for which everyone must pay, especially competitive admissions schools, it often creates a zero-sum game: if my child gets in, yours doesn’t. It’s a war waiting to happen, and when race is involved—indeed, when admission based on race is explicitly at issue—it stokes racial conflict, in this case primarily pitting different minority groups against each other.

In June we also saw high-profile throwdowns over what is taught in schools, especially history and sex education, subjects inextricably linked to race, moral values, politics, and other highly personal identities and values. In Michigan, new social studies standards were being debated that, at least in draft form, removed some material on gay rights and Roe v. Wade, and took “democratic” out of “core democratic values.” Of course, accusations of bias were lobbed back and forth.

State Sen. Patrick Colbeck (R-7th Dist.), who worked for many of the changes, said, “When I saw the bias inherent in those standards, I wanted to make changes.” Meanwhile, State Rep. Darrin Camilleri (D-23rd Dist.) called the proposed revisions a “thinly veiled attempt to push an ultra-conservative agenda.”

In Fairfax County, Virginia—the nation’s 11th largest district—an ongoing war over its Family Life Education program produced a new battlefront, as proposed standards reportedly removed “clergy” from a list of trustworthy adults. Religion, then, was directly involved in the battle, even though the public schools are supposed to be religiously neutral. Of course they can’t be, which the perpetual sex education debate in Fairfax County and countless other districts has made crystal clear. Religious values are unavoidably entangled with matters of sex.

Speaking of impossible religious neutrality, check out the op-ed Corey DeAngelis and I wrote a couple of weeks ago presenting the case that, constitutionally, true religious neutrality requires school choice, then read this blog post—and the law review article to which it links—to get a much deeper treatment of the matter. If nothing else, it will help you pass the time, and contemplate a sustainable path to peace, as September inevitably approaches.

President Trump and others who are mistakenly troubled by trade deficits with specific countries should at least get the facts straight.  To fret about trade deficits in goods alone (ignoring services) is hopelessly old-fashioned in a world where the most exciting business and investment opportunities are typically in the service industries.   U.S. businesses are famously outstanding in software and communications services, health and education services, food and lodging services, legal, financial, accounting and marketing services, and so on.  Hollywood, Wall Street, Madison Avenue, Las Vegas and D.C.’s K-Street lawyers have always been known for their services, not “making stuff.”

The table shows a rapidly growing U.S. trade surplus in services with many important economies and regions.  From 2003 to 2017 the U.S. services surplus tripled with Canada, and it grew seven times larger for the EU, 12 times larger for South Korea, and 25 times larger for China.   Rising trade surpluses in services have become large enough to more than offset the trade deficit in goods with some major trading partners – notably Canada.   For all countries combined, of course, the surplus in services is not yet large enough to offset the familiar cyclical uptick in the trade deficit in goods (most imported goods are industrial components and materials).  But it does not take much imagination or statistical expertise to envision an interesting trend in that direction.

 

Recently David Beckworth and Martin Sandbu, among others, have drawn attention to an interesting paper by James Bullard and Riccardo DiCecio unveiled in Norway earlier this year. In it, Bullard and DiCecio investigate a model economy possessing both a large private credit market and “Non-state contingent nominal contracting (NSCNC).” They conclude that, in such an economy, NGDP targeting is the “optimal monetary policy for the masses.”

Here is David Beckworth’s intuitive explanation for that finding:

The basic idea is that in a world of fixed-price nominal debt contracts (i.e. the real world), a NGDP level target provides better risk sharing among creditors and debtors against economic shocks than does a price stability target.

This is because a NGDP level target makes inflation countercyclical. During recessions, inflation rises and causes creditors to bear some of the unexpected pain by lowering the real debt payments they receive from debtors. During booms, inflation falls and allows creditors to share in some of the unexpected gain by increasing the real debt payments they receive from debtors. Debtors, in other words, bear less risk during recessions but also share unexpected gains during expansions.

NGDP level targeting, in other words, causes a fixed-price nominal debt world to look and feel a lot like an equity-world. In a similar spirit, some observers have called for risk-sharing mortgages as a way to avoid another Great Recession. The point of this paper is that the same benefit that such risk-sharing mortgages would bring can be had by having a central bank target the growth path of NGDP.
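Beckworth’s point can be put in numbers with a toy calculation. The figures here are ours, purely for illustration: with a fixed nominal debt and a central bank holding nominal GDP on its target path, the price level moves inversely with real output, so the debtor’s real repayment automatically scales with real income, just as an equity claim would.

```python
# Toy illustration (our numbers, not the paper's) of why an NGDP level
# target makes fixed nominal debt behave like equity.

NGDP_TARGET = 1000.0   # central bank holds nominal GDP (P * Y) at this level
DEBT = 100.0           # fixed nominal debt payment due

for label, real_output in [("normal", 100.0), ("recession", 95.0), ("boom", 105.0)]:
    price_level = NGDP_TARGET / real_output    # P adjusts to hit the target
    real_repayment = DEBT / price_level        # burden of the debt in real terms
    share_of_income = real_repayment / real_output
    print(f"{label}: real repayment = {real_repayment:.2f}, "
          f"share of real income = {share_of_income:.2%}")
```

In a recession output falls, prices rise to keep P×Y on target, and the real repayment falls with it; in a boom the reverse happens. The debtor’s burden stays a constant share of real income in every state, which is exactly the risk-sharing property the quoted passage describes.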

Although Bullard and DiCecio’s specific argument is novel, the idea that fluctuations in the general price level can actually contribute to optimal risk sharing in a world of fixed nominal debts is itself by no means new. Bullard and DiCecio themselves refer to previous work making the same basic argument by Evan Koenig and Kevin Sheedy, while in my previous article here I traced the idea all the way back to Samuel Bailey’s (1837) classic monograph, Money and its Vicissitudes in Value.

I myself first cottoned on to the view that what’s now called NGDP targeting is more conducive to achieving what economists nowadays call optimal risk sharing in a world with many fixed nominal debt contracts (an outcome that used to be described as avoiding “debtor-creditor injustice”) while working on my PhD dissertation in the early 1980s. Back then I still didn’t know about Bailey, though I did discover a few other works — all written some years before — supporting my perspective.

My conclusions eventually found their way into my dissertation, and thence into my first (1988) book, The Theory of Free Banking. I later expanded and refined them in Less than Zero (1997, especially pp. 41-5; new edition forthcoming!). Because my earlier discussion is especially informal and intuitive, I thought that persons interested in more recent works addressing the same issue, like those of Koenig, Sheedy, and Bullard and DiCecio, might find it of interest, if not helpful to their understanding of these much more sophisticated works. So here it is, with no changes save (1) the addition of a new note; (2) the removal of two original notes that contained references only; and (3) the insertion of ellipses in place of a phrase that would seem meaningless here, where it has been stripped of its context.

***

To address the problem of debtor-creditor injustice, one must first understand how different kinds of price changes actually affect the well-being of parties on either side of a debt contract. One also has to have a definition of injustice. For the latter we may adopt the following: parties to a long-term debt contract may be said to be victims of injustice caused by price-level changes if, when the debt matures, either (a) the debtors on average find their real burden of repayment greater than what they anticipated at the time of the original contract and creditors find the real value of the sums repaid to them greater on average than what they anticipated; or (b) the creditors find the real value of the sums repaid to them smaller on average than what they anticipated and debtors find their real burden of repayment smaller than what they anticipated at the time of the original contract. When injustice occurs the parties to the debt contract, if they had had perfect foresight, would have contracted at a nominal rate of interest different from the one actually chosen.

It is not always appreciated that not all movements in the general level of prices involve injustice to debtors or creditors. Unanticipated general price movements associated with changes in per-capita output…do not affect the fortunes of debtors and creditors in the same, unambiguous way as do unanticipated price movements associated with monetary disequilibrium.* Where price movements are due to changes in per-capita output, it is not possible to conclude that unanticipated price reductions favor creditors at the expense of debtors. Nor can it be demonstrated that unanticipated price increases favor debtors at the expense of creditors. The standard argument that unanticipated price changes are a cause of injustice is only applicable to price changes caused by unwarranted changes in money supply or by unaccommodated changes in money demand.

This is so because in one of the cases being considered aggregate per-capita output is changing, whereas in the other it is stationary. In both cases a fall in prices increases the value of the monetary unit and increases the overall burden of indebtedness, whereas a rise in prices reduces the overall burden, other things being equal. In the case where per-capita output is stationary (the monetary disequilibrium case), the analysis need go no further, and it is possible to conclude that falling prices injure debtors and help creditors and vice versa. Were parties to long-term debt contracts able to perfectly anticipate price-movements, they would, in anticipation of higher prices, contract at higher nominal rates of interest; in anticipation of lower prices they would contract at lower nominal rates of interest. In the first case the ordinary real rate of interest is increased by an inflation premium; in the latter, it is reduced by a deflation discount. These adjustments of interest rates to anticipated depreciation or appreciation of the monetary unit are named the “Fisher” effect, after Irving Fisher who discussed them in an article written just before the turn of the century.

When per-capita output is changing, one must take into account, in addition to the Fisher effect, any intertemporal-substitution effect associated with changes in anticipated availability of future real income. Here (assuming no monetary disequilibrium) reduced prices are a consequence of increased real income, and increased prices are a consequence of reduced real income. Taking the former case, although the real value of long-term debts increases, debtors do not necessarily face a greater real burden of repayment since (on average) their real income has also risen. In nominal terms they are also not affected because, as distinct from the case of falling prices due to a shortage of money, their nominal income is unchanged. Thus debtors need not suffer any overall hardship: the damage done by the unanticipated fall in prices may be compensated by the advantage provided by the unanticipated growth of real income. If the parties to the debt contract had in this situation actually negotiated with the help of perfect foresight, their anticipation of reduced prices would have caused the nominal rate of interest to be reduced by a deflation discount — the Fisher effect. But their anticipation of increased real income would also reduce their valuations of future income relative to present income, raising the real component of the nominal rate of interest — the intertemporal-substitution effect. Since the Fisher effect and the intertemporal substitution effect work in opposite directions it is not clear that the perfect-foresight loan agreement would have differed from the one reached in the absence of perfect foresight — at least, the direction in which it would have differed is not obvious. So there is no reason to conclude that a monetary policy that permits prices to fall in response to increased production would prejudice the interests of debtors.

Similarly, to allow prices to rise in response to reduced per-capita output would not result in any necessary injustice to creditors, even if the price increases were not anticipated. Here the Fisher effect in a perfect-foresight agreement would be positive, and the intertemporal substitution effect would be negative, so it cannot be said a priori that the perfect-foresight nominal rate of interest would differ from the rate agreed upon in the absence of perfect foresight.

____________________________________

*By “monetary disequilibrium” I mean unanticipated changes in nominal spending (MV or, equivalently, Py). Earlier in my book I explain that a monetary policy “that maintains monetary equilibrium [is one] that prevents price changes due to changes in the demand for money relative to income without preventing price changes due to changes in productive efficiency.” I would have chosen my terms more carefully had I known better.
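The offsetting Fisher and intertemporal-substitution effects described in the excerpt can be sketched numerically. The magnitudes below are assumptions of ours, including the one-for-one offset; the excerpt itself claims only that the two effects work in opposite directions, not that they cancel exactly.

```python
# Numerical sketch of the two offsetting effects in the excerpt
# (illustrative numbers only; an exact one-for-one offset is assumed).
# Productivity-driven deflation: output is expected to grow 3%, so under
# a stable-nominal-spending policy prices are expected to fall about 3%.

real_rate_baseline = 0.04      # real rate with no expected income growth
expected_inflation = -0.03     # Fisher effect: a deflation discount
intertemporal_premium = 0.03   # higher expected future income lowers the value
                               # of future relative to present income, raising
                               # the real rate (assumed equal in size here)

nominal_rate = (real_rate_baseline + intertemporal_premium) + expected_inflation
print(round(nominal_rate, 4))  # 0.04: under these assumptions the effects cancel
```

With the assumed magnitudes the perfect-foresight nominal rate equals the baseline rate, so neither debtor nor creditor would have contracted differently, which is the excerpt’s point that productivity-driven price changes need involve no injustice.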

[Cross-posted from Alt-M.org]
