Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Ilya Shapiro

If you ask reasonably informed consumers of news media what the year’s big Supreme Court case was, most would probably say Burwell v. Hobby Lobby, that case where “five white men” (in Harry Reid’s description) decided that corporations can deny women access to birth control. But, as I’ve said elsewhere, what was at stake in Hobby Lobby had nothing to do with the power of big business, the freedom to use any kind of legal contraceptive, or how to balance religious liberty against other constitutional considerations. Much like Citizens United (which struck down restrictions on corporate political speech without touching campaign contribution limits) and Shelby County (which struck down Section 4(b) of the Voting Rights Act because its coverage formula rested on obsolete voting data rather than the current conditions the Constitution requires), Hobby Lobby is doomed to be misunderstood.

The case actually presented a rather straightforward question of statutory interpretation: whether the government was justified, in this particular instance, in overriding religious liberties. The Supreme Court evaluated that question and ruled 5-4 that closely held corporations can’t be forced to pay for all of their employees’ contraceptives if doing so would violate their religious beliefs. There was no constitutional decision, no expansion of corporate rights, and no weighing of religion against the right to use birth control.

That’s it. Nobody has been denied access to contraceptives and there’s now more freedom for all Americans to live their lives how they want, without checking their conscience at the office door. The contraceptive mandate fell because it was a rights-busting government compulsion that lacked sufficient justification.

That the Hobby Lobby dissenters and their media chorus made so much noise over this case is evidence of a larger process whereby the government foments needless social clashes by expanding its control over areas of life we used to think of as being “public” yet not governmental. The government thus uses private voluntary institutions as agents in its social-engineering project. These are places that are beyond the intimacies of the home but still far removed from the state: churches, charities, social clubs, small businesses, and even “public” corporations (which are nevertheless part of the “private” sector).

Where Alexis de Tocqueville celebrated the civil society that proliferated in the young American republic, the Age of Obama has heralded an ever-growing administrative state that aims to standardize “the Life of Julia” from cradle to grave. Through an ever-growing list of mandates, regulations, and assorted other devices, the government is pushing aside the “little platoons” that made this country what it was. We can call this tide of national collectivism overtaking the presumptive primacy of individual liberty and voluntarism the “Hobbylobbification of America.”

For more on all this, read my recently published book – Religious Liberties for Corporations? Hobby Lobby, the Affordable Care Act, and the Constitution – where my co-author David Gans and I debate all sorts of interesting issues. Perhaps most curious is that I minimize the significance of the ruling or its precedential value, while David says it’s really, really big (and really, really bad). That’s an unusual inversion in Supreme Court commentary; typically the winning side trumpets its victory while the losers try to explain why the decision really doesn’t mean that much. (If you’re curious about any of this, come to our book forum/debate this Tuesday, or watch online.)

Patrick J. Michaels and Paul C. "Chip" Knappenberger

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

In this issue of You Ought To Have A Look, we feature the work of Martin Hoerling and his research team at the Physical Science Division (PSD) of NOAA’s Earth System Research Laboratory—a place where scientists live and breathe atmospheric dynamics and a rare government facility that puts science before hype when it comes to anthropogenic climate change.

It is pretty obvious by now that whenever severe weather strikes—rain, snow, heat, cold, flood, drought, etc.—someone will proclaim the events are “consistent with” expectations of global warming from human emissions of greenhouse gases.

Harder to find (at least on TV) are folks who pooh-pooh such notions and instead point out that nature is a noisy place and a definitive study linking such-and-such weather event to human climate modifications does not exist.

In truth, the science of severe weather is a messy, muddy place, not at all the simple, clean “science is settled” description preferred by climate alarmists and regulation seekers.

Hoerling is one scientist who does garner some press coverage when describing the general lack of a human fingerprint on all manner of extreme weather events. While most others hand-wave at the science, Hoerling and his team actually put the historical observations and the behavioral expectations from climate models directly to the test.

Take, for example, the ongoing California drought. There are all manner of folks calling the drought conditions there “historic” and “epic” and the “worst in 1,200 years” and, of course, pointing the finger directly at humans. Even President Obama has gotten in on the act.

Not so fast, says Hoerling’s team, led in this case by Richard Seager. They decided to look at just what the expectations for California drought should be under an increasing greenhouse effect—expectations defined, in this case, by the very climate models making the future climate projections upon which the case for catastrophic climate change (and equally catastrophic regulations) is founded. Their findings caught the attention of Seth Borenstein, science writer for the Associated Press, who highlighted them in an article earlier this week.

Borenstein’s article was headlined “Don’t Blame Man-made Global Warming for the Devastating California Drought, a New Federal Report Says” and began:

A report issued Monday by the National Oceanic and Atmospheric Administration said natural variations—mostly a La Niña weather oscillation—were the primary drivers behind the drought that has now stretched to three years.

Here are some additional highlights from the report itself (with emphasis added):

The current drought is not part of a long-term change in California precipitation, which exhibits no appreciable trend since 1895. Key oceanic features that caused precipitation inhibiting atmospheric ridging off the West Coast during 2011–14 were symptomatic of natural internal atmosphere-ocean variability.

Model simulations indicate that human-induced climate change increases California precipitation in mid-winter, with a low-pressure circulation anomaly over the North Pacific, opposite to conditions of the last three winters. The same model simulations indicate a decrease in spring precipitation over California. However, precipitation deficits observed during the past three years are an order of magnitude greater than the model-simulated changes related to human-induced forcing. Nonetheless, record-setting high temperature that accompanied this recent drought was likely made more extreme because of human-induced global warming.

Basically, aside from perhaps an added bit of warming, human-caused climate change played no role in the drought. In fact, climate models indicate almost the opposite set of occurrences (i.e., more winter precipitation).

This is but one of the Hoerling team’s studies that burst warmie bubbles.

Here are a few more that you ought to have a look at:

Great Russian Heatwave of 2010:

We conclude that the intense 2010 Russian heat wave was mainly due to natural internal atmospheric variability. Slowly varying boundary conditions [that is, slowly increasing the greenhouse effect] that could have provided predictability and the potential for early warning did not appear to play an appreciable role in this event.

Great Plains Drought of 2012:

Climate simulations and empirical analysis suggest that neither the effects of ocean surface temperatures nor changes in greenhouse gas concentrations produced a substantial summertime dry signal over the central Great Plains during 2012.

In each case, the realistic expectation that the events are “consistent with” human-caused climate change is slim to none (despite headlining media coverage to the contrary).

That’s the science. You ought to have a look!

K. William Watson

While government intervention often makes people’s lives worse, it can sometimes have aesthetically valuable side effects. For example, ancient pyramids are true marvels of human engineering, feudal despotism, and slave labor. Also, I’ll admit I’ve always enjoyed the iconic image of 1950s American cars in Cuba, which exist today because Cubans largely have been forbidden from buying new cars for over half a century.

A more modern consequence of big government causing cool things to happen is the existence of the Airbus Beluga Super Transporter. The Beluga exists because Airbus manufactures different parts of its planes in different European countries. Why does it do this? Subsidies! Lots of subsidies.

Airbus is based in France, where most of its planes are assembled. But the company is also subsidized by the United Kingdom, Germany, and Spain, and they each get at least one factory that makes some airplane component. In order to transport giant airplane parts like fuselages and wings from country to country, Airbus has designed a plane for the sole purpose of carrying plane parts between its factories.

I think it’s pretty cool looking. It’s also absurd. When your business model involves flying airplane parts around Europe in an airplane, it’s very possible you are inadequately concerned about efficiency.

To be clear, there’s nothing wrong with international supply chains.  In today’s globalized economy, it is not at all uncommon for manufacturing activity to be spread across multiple countries, particularly for complex or high-tech products. Today’s automobiles, regardless of their brand, contain a varied mix of foreign and domestic-made parts. An iPhone may be assembled in China, but its components were made in Korea and Japan, and its software was designed in California. Lots of factors go into deciding where to source manufacturing components, and when other factors outweigh transportation costs, global supply chains are born.

Perhaps, then, the Airbus Beluga would still exist in a free market, but I doubt it. The curious part of Airbus’s operations is not the fact of transportation, but the method. You may think it’s only natural that an airplane maker, when devising a logistical scheme for its supply chain, would gravitate toward air shipments. However, despite all the subsidies at home, Airbus has also set up factories in China and Alabama, where they somehow manage to send parts the same unexciting way all profit-maximizing global companies do—by sea.

Emma Ashford

While Washington focused yesterday on the prospect of yet another government shutdown, both the House and Senate quickly and quietly passed bills that increase sanctions on Russia and authorize the sale of defensive arms to Ukraine. S.2828 passed mid-afternoon by voice vote, while H.R. 5859 passed without objection at 10:25pm last night, on a largely empty House floor. Indeed, the House resolution had been introduced only that day, giving members no time to review or debate the merits of a bill with major foreign policy implications.

The bill requires the imposition of further sanctions on Russia, particularly on Rosoboronexport, Russia’s main weapons exporter, as well as increased licensing requirements for the sale of oil extraction technology to Russia. Any Russian company exporting weapons to Syria is also liable for sanctions. In addition, the bill contains a contingency requiring the President to sanction Gazprom in the event that it interferes with the delivery of gas supplies to NATO members or to Ukraine, Georgia, and Moldova. The bill also takes aim at Russia more broadly, directing the President to hold Russia accountable for its violations of the Intermediate-Range Nuclear Forces (INF) Treaty, and to consider whether it remains in U.S. interests to remain a party to that treaty.

Significantly, the bill authorizes the president to make available defensive weapons, services and training to Ukraine, including anti-tank weapons, crew weapons and ammunition, counter-artillery radar, tactical troop-operated surveillance drones, and command and communications equipment. It  also includes additional aid for Ukraine, earmarked to help Ukraine loosen its reliance on Russian energy, and strengthen civil society. Other funds go to increasing Russian-language broadcasting in Eastern Europe by Voice of America and Radio Free Europe/Radio Liberty, in order to ‘counter Russian propaganda.’

S.2828 and H.R. 5859, which are reportedly identical, will likely be signed into law, although President Obama expressed concern on Thursday that further sanctions on Russia could prove counterproductive. While the bill stops short of some of the more extreme proposals found in various failed congressional bills (e.g., the Russian Aggression Prevention Act of 2014), it will have serious ramifications for U.S.-Russian relations. Up to this point, the White House has resisted arming Ukraine, fearing escalation of the conflict. But this bill will make it extremely difficult for the White House to continue that policy.

Arming Ukraine will escalate tensions with Russia, but it will do little to help the Ukrainian army - which is corrupt and in dire need of reform - combat the insurgency in its eastern regions. The bill ties the hands of diplomats, requiring that Russia cease “ordering, controlling… directing, supporting or financing” any acts or groups that undermine Ukrainian sovereignty before sanctions can be lifted. The INF treaty stipulation is also dangerous, raising tensions and increasing the possibility that both Russia and the U.S. could withdraw from the treaty.

Unfortunately, the provisions in this bill will make it all the more difficult to find a negotiated settlement to the Ukraine crisis, or to salvage any form of productive U.S.-Russia relationship. No wonder Congress didn’t want to debate it openly.

Patrick G. Eddington

Yesterday, CIA Director John Brennan delivered his public response to the Senate Select Committee on Intelligence report on the CIA’s detention and interrogation program. Rather than use the opportunity to fully acknowledge and accept the report’s findings and implications, Brennan offered a vigorous defense of the CIA, invoking the emotional trauma suffered by the country to help justify his agency’s subsequent actions.

“Indeed, there were numerous, credible, and very worrisome reports about a second and third wave of major attacks against the United States,” Brennan said. “And while we grieved, honored our dead, tended to our injured, and embarked on the long process of recovery, we feared more blows from an enemy we couldn’t see … and an evil we couldn’t fathom.

“This is the backdrop against which the Agency was directed by President Bush to carry out a program to detain terrorist suspects around the world.

“In many respects, the program was uncharted territory for the CIA, and we were not prepared. We had little experience housing detainees, and precious few of our officers were trained interrogators. But the President authorized the effort six days after 9/11, and it was our job to carry it out.” (emphasis added)

But as the Senate report makes clear (p. 11), President Bush’s covert action Memorandum of Notification (MON, the formal authorization for the rendition and detention program) “made no mention of interrogations or interrogation techniques.” Thus, the initiative for the interrogations—including techniques involving torture under international and U.S. law—originated within the CIA. And as the Senate report lays out repeatedly—using the CIA’s own internal documents—agency personnel, and particularly its attorneys, knew very well that what they were proposing almost certainly violated U.S. and international law.

One early example (from p. 33 of the Senate report summary) of this articulated concern came in July 2002, when CIA attorneys

drafted a letter to Attorney General John Ashcroft asking the Department of Justice for “a formal declination of prosecution, in advance, for any employees of the United States, as well as any other personnel acting on behalf of the United States, who may employ methods in the interrogation of Abu Zubaydah that otherwise might subject those individuals to prosecution.” (emphasis added)

Enough CIA personnel understood they would be breaking the law that they had the foresight to ask preemptively for a DoJ “get out of jail free card” in the form of formal opinions from the Office of Legal Counsel. They got those opinions—which were later withdrawn, but which still likely would provide a shield from prosecution for waterboarding and the other torture tactics used by CIA interrogators.

And when pressed on whether the CIA should ever conduct such a program again, Brennan amazingly said he would “defer to the policymakers in future times when there is going to be the need to be able to ensure that this country stays safe if we face a similar type of crisis.”

Instead of learning the right lessons from this episode—that torture never produces accurate, reliable intelligence and that its use destroys the moral and political authority of the user—Brennan clearly left the door open to a future CIA rendition and detention program, including the use of coercive interrogation techniques.

Sen. Dianne Feinstein, the outgoing chairwoman of the Senate Intelligence Committee, has called publicly for legislation to prevent a repeat of this episode. In light of Director Brennan’s remarks, it will be interesting to see whether Senator Feinstein’s first act in the 114th Congress will be to take legislative action to close the door on the CIA ever again engaging in rendition, detention, and torture.

Walter Olson

For years the U.S. Department of Justice and Securities and Exchange Commission have been on a crusade to prosecute “insider trading,” even though it’s far from clear that activity should be criminal to begin with. Lately, those efforts have been led by Preet Bharara, U.S. Attorney for the Southern District of New York, who has obtained more than 80 convictions and plea deals, ruining countless careers and fortunes along the way.

On Wednesday, things changed. A three-judge panel of the New York-based Second Circuit U.S. Court of Appeals—the most influential lower court on questions of financial regulation—unanimously threw out Bharara’s high-profile conviction of hedge funders Todd Newman and Anthony Chiasson, directing that charges against them be dropped. It’s a “huge blow” to Bharara’s campaign, notes the New York Post, while Bloomberg Media calls it a “harsh rebuke” that “is likely to have far-reaching effects.” Alison Frankel of Reuters describes the ruling as “emphatic” and its conclusion “momentous.” The opinion is here.

Yale law professor Jonathan Macey, writing in the WSJ:

[The SEC and Bharara] prefer that the law exalt vague conceptions of “fairness” above the more concrete goals of having robust, liquid and efficient securities markets.

The new opinion is a game-changer. It signals to prosecutors that they cannot bring flawed cases and then hide behind the excuse that the law is vague. The Court of Appeals admonished that “the Supreme Court was quite clear” in previous cases about what is required to establish illegal insider trading.

Specifically, the Supreme Court and the lower federal courts have been explicit in saying that trading on an informational advantage is not necessarily illegal. To be illegal, the courts have said, trading by insiders must involve breaching a duty of trust and confidence. Courts have been clear, as the Supreme Court noted in Chiarella v. U.S. (1980) and again in U.S. v. O’Hagan (1997), that there is no “general duty between all participants in market transactions to forgo actions based on material, nonpublic information” because it is possible to acquire such information legitimately.

Tellingly, the Court of Appeals pointed out “the doctrinal novelty” of the government’s “recent insider trading prosecutions, which are increasingly targeted at remote tippees many levels removed from corporate insiders.”

And note this, from Alison Frankel’s Reuters column:

The opinion not only sets clearer definitions for future insider trading prosecutions but also undermines the foundation of at least a half-dozen guilty pleas the Manhattan U.S. attorney’s office has already obtained [emphasis added—W.O.]. According to the 2nd Circuit, the government hasn’t proved the illegality of the insider disclosures underlying those pleas.

Yesterday, at a Cato book lunch for Brian Aitken, who was imprisoned by the State of New Jersey for carrying unloaded guns and ammunition in the trunk of his car, I observed that students of the plea bargaining system consider it absolutely standard that, under the pressure of prosecution, a large share of persons who believe themselves innocent of charges (and who would have been cleared had they persisted to a final resolution in court) will nonetheless accept a plea deal for fear of the consequences of remaining defiant. That’s one reason congressional oversight of this area is urgently needed: our plea-bargain-driven system of criminal prosecution doesn’t really police itself well, even when the courts do the right thing.

For another example of financial prosecutions run amok, see my post a while back on Justice Department enforcement actions under the Foreign Corrupt Practices Act.

Daniel R. Pearson

Sen. Sherrod Brown (D-OH) introduced a bill on Wednesday called the “Leveling the Playing Field Act.” According to the accompanying press release, the proposal would “restore strength to antidumping and countervailing duty laws” via a “crack down on unfair foreign competition.” The bill includes several provisions relating to practices used by the Department of Commerce to determine dumping and subsidy margins (i.e., the extent to which imported products are unfairly underpriced). It also contains modest changes to procedures used by the U.S. International Trade Commission (ITC) in deciding whether domestic industries have been “materially injured” by imports.

Since I have had only indirect exposure to the role of Commerce in antidumping and countervailing duty (AD/CVD) investigations, I will leave analysis of those proposed changes to others. However, my 10 years of experience as chairman and commissioner at the ITC provide a reasonable basis for commenting on the bill’s suggested modifications to the injury determination.

The existing AD/CVD statutes instruct the ITC to “evaluate all relevant economic factors” that relate to the effects of imports on the industry under consideration. A number of those factors are specifically mentioned, including the industry’s profits. Not being satisfied with just having the commission examine profits in general, the Brown bill adds, “gross profits, operating profits, net profits, [and] ability to service debt.” As a practical matter, the commission already looks in detail at an industry’s profitability and its ability to repay debts, so this additional wording would contribute nothing of substance.

The Brown bill would add a provision to the effect that an improvement in the industry’s performance over the period of investigation (normally about three years) should not preclude a finding that the industry has been materially injured by imports. Yes, there can be circumstances in which an industry’s results are strengthening, yet it is still being held back by import competition. However, the commission’s existing practice already considers this possibility, so the new language would not really change anything.

The bill also adds a section addressing the possible effects of a recession on the ITC’s injury analysis. It states that the commission may extend its period of investigation to begin at least a year before the recession started, which would allow before-and-after comparisons of how the domestic industry has performed. The ITC already has authority to adjust the period of investigation under special circumstances, but it seldom does so.

Adding another year or two onto the period of investigation makes it more difficult for both domestic and foreign firms to provide the data required for the commission’s analysis. It also is likely to provide less helpful information than might be expected. In a recession, demand for goods tends to contract. It is quite normal for a recession to cause reductions in the quantity of imports as well as in the quantity produced domestically. Thus, extending the period of investigation probably won’t provide much information other than that both domestic industries and foreign producers tend to be hurt by recessions. The key reason that lengthening the period of investigation is not likely to influence the injury determination is that the most recent trends in the marketplace are almost always more relevant than whatever was happening four or five years earlier.

Although the changes proposed by Brown to the ITC’s injury determination are relatively modest, I would recommend against adopting them for a simple reason: litigation risk. The skilled and creative attorneys who represent domestic industries in AD/CVD cases would be only too happy to have another basis for appeal of commission decisions with which they disagree. A claim that the ITC had not adequately considered the newly crafted provisions would provide a potential justification for an appeal. Why invite such mischief?

If members of Congress actually are interested in modifying the AD/CVD statutes to make them better serve the interests of U.S. manufacturers, they should propose legislation that would balance the interests of domestic producers relative to the interests of U.S. consumers. Currently the ITC injury determination is limited to the effect of imports “on domestic producers of domestic like products.” In essence, the commission must disregard any costs that would be imposed on users of the product in the event that imports are restricted. Those costs often can be very large—well in excess of the potential benefits that might flow to domestic producers. (For more on this issue, see this thoughtful analysis by my colleague, Daniel Ikenson.)

As an example, the United States now imposes antidumping or countervailing duties (or both) on imports of hot-rolled steel from China, India, Indonesia, Russia, Taiwan, Thailand, and Ukraine. Hot-rolled is a basic form of steel coil that is further manufactured into products such as cold-rolled steel, corrosion-resistant steel, tin-coated steel, and welded steel pipe. In turn, those steel products are used to make automobiles, farm machinery, appliances, ventilation ducts, and a wide range of other products too numerous to mention. Manufacturing those value-added products employs far more people and contributes far more to the U.S. economy than is the case for hot-rolled steel.

The AD/CVD duties very likely cause hot-rolled steel to be higher priced in the United States than in many other countries. Thus, protection for hot-rolled producers raises costs for all other U.S. firms that utilize flat-rolled steel products. The spread between the cost of steel in the United States relative to other countries doesn’t have to be very wide before it can become more economical to import steel-containing manufactured products from other countries rather than producing them here.

If the Leveling the Playing Field Act achieves its intended purpose of providing an even greater level of protection to firms producing basic products, it certainly will have the unintended consequence of weakening the U.S. economy and reducing employment overall. Artificially increasing the costs borne by the wide swath of U.S. manufacturers that depend on steel as an input will make them more vulnerable to competition from overseas. Supporters of the bill instead should consider the possibility of adjusting the AD/CVD statutes to ensure that interests of downstream users are taken into account. The ITC’s injury determination should be changed so that the commission is required to assess not only the effects of imports on producers, but also the costs that import restrictions impose on users. This would be an important first step toward ensuring that AD/CVD measures don’t inadvertently damage the broad U.S. economy.

Simon Lester

Liberal activist Naomi Klein has a new book out provocatively subtitled “Capitalism vs. the Climate.” (For those of you who don’t want to buy her book, an essay she wrote a couple years ago with the same title is here.)

What amuses me about her attempt to pit capitalism and the climate against each other is that I came across an excerpt from the book in the Toronto Globe and Mail in which, unwittingly, she advocated policies that, even accepting the debate on her terms, can’t be good for the climate.

Not surprisingly, she supports subsidies for renewable energy. It’s hard to have renewable energy without those. But what struck me was that she also argued for tying those subsidies to the use of local content. For example, there is a government program in Ontario that subsidizes solar energy, but only if the energy suppliers use a certain percentage of labor and materials that are made in Ontario.

There are two obvious and related problems with such a requirement. First, by requiring local inputs, you make your product more expensive (especially when local means high-cost Canada). If your goal is cheaper renewable energy, raising the price of inputs doesn’t make a whole lot of sense.

Second, the idea that each sub-federal government should promote local production of a particular product is absurd. Imagine if that happened worldwide: there would be thousands of producers of these products! I can’t think of a more inefficient and energy-wasting approach to manufacturing.

Just to be clear, I know many strong supporters of taking action against climate change who do not believe in this kind of protectionist approach. They recognize that local content requirements are economically harmful and shouldn’t be part of these policies. For reasons that are difficult to understand, Klein seems to have missed this pretty obvious point. (I did tweet it at her, but I’m not expecting much from that!)

Matthew Feeney

Those who have argued for the deregulation of the taxi industry will be familiar with the claim that taxi deregulation was tried in the U.S. and that the results were so undesirable that regulation was reintroduced. In a recent Washington Post article about ridesharing and taxi regulation, Catherine Rampell states that prices rose in deregulated taxi markets and that today’s calls for deregulation are merely the latest turn of a familiar cycle. However, future taxi deregulation will be different from past deregulation schemes thanks to relatively new technology that allows passengers to overcome the knowledge problems that led to price increases in deregulated taxi markets.

Rampell’s article includes some interesting historical insights. Regulations and licensing laws for passenger transport vehicles are nothing new. In the 17th century, Charles I tried to limit the number of horse-drawn carriages in London by passing an order which was ignored. During the Great Depression, some unemployed Americans found a source of income in the unlicensed taxi industry. By the 1990s much of the American taxi industry had been subjected to re-regulation following a wave of deregulation in roughly two dozen cities beginning in the 1960s.

Today, there are calls for the taxi industry to be deregulated amid the growth of ridesharing companies such as Uber, Lyft, and Sidecar. Some argue that taxis cannot fairly compete with ridesharing companies because they are hampered by outdated regulations, and that if taxis were deregulated they would be better suited to compete with rideshare companies. Rampell warns against deregulation, saying that we have “Been there, done that.”

While it is the case that the taxi industry in a number of American cities was re-regulated after a period of deregulation, many of the pricing problems cited as justification for taxi re-regulation are not applicable today thanks to technological advances.

In her article, Rampell links to a 1996 paper on taxi regulation written by Paul Dempsey, a law professor at McGill. The paper highlights an interesting problem that taxi customers face: a lack of good information.

Most taxi customers take the first taxi that appears. As Dempsey points out, it is not worth taxi customers conducting a price or service comparison in a deregulated market:

…consumers buying taxi service in a deregulated market often have little comparative pricing or service information, for the opportunity costs of acquiring it are high.

Taxi consumers do not have perfect information, so it is almost always worth taking the first taxi that appears. As Dempsey notes (citing work by economist Chanoch Shreiber), in the absence of fare regulation the prices of taxi rides tend to increase:

… because a prospective passenger who values his or her time will not likely turn down the first available cab on the basis of price, this will have an “upward pressure on the price.” A consumer hailing a cab from a sidewalk has an incentive to take the first taxi encountered, because both the waiting time for the next cab and its price are unknown. Paradoxically, in an open entry regime, prices tend to rise.

Although taxi prices did go up after the deregulation Dempsey discusses, we should not expect taxi deregulation in the future to have the same outcome.

Keep in mind that Dempsey’s paper came out in 1996, before smartphones allowed companies like Uber and Lyft to emerge as strong taxi competitors.

Part of the appeal of ridesharing is that the apps used by Lyft and Uber customers allow users to overcome the knowledge problems highlighted by Dempsey. Uber and Lyft users can see the location of drivers, and Uber users can estimate a fare before their ride begins. Today, unlike in 1996, a taxi company could develop an app that keeps users informed about fares and the availability of drivers, as the sketch below illustrates.
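To make the information advantage concrete, here is a toy Python sketch of the kind of up-front fare estimate such an app can display. Every rate, the surge multiplier, and the two “services” are hypothetical illustrations, not any company’s actual pricing:

```python
# A toy fare estimator (all rates hypothetical): base fare plus distance
# and time charges, scaled by an optional surge multiplier, with a floor.

def estimate_fare(miles, minutes, base=2.00, per_mile=1.10,
                  per_minute=0.20, surge=1.0, minimum=5.00):
    """Return an estimated fare in dollars for a trip of the given length."""
    fare = (base + per_mile * miles + per_minute * minutes) * surge
    return max(fare, minimum)

# A rider comparing two hypothetical services before accepting a ride,
# exactly the comparison a street-hail passenger in 1996 could not make:
print(f"Service A: ${estimate_fare(5.2, 18):.2f}")
print(f"Service B: ${estimate_fare(5.2, 18, per_mile=1.45, surge=1.3):.2f}")
```

An up-front number like this is what removes the “upward pressure on the price” Shreiber described: the passenger can decline a quote before anyone pulls to the curb.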

However, even if a taxi company were to develop such an app, it would have to compete with rideshare companies. One app that did allow its users to hail taxis, Hailo, was driven out of North America by the fierce competition between Uber and Lyft. MyTaxi, a Germany-based taxi app, is available in Washington, D.C., and does allow users to estimate a fare before a ride begins and to see the location of available drivers. If taxi companies want to remain competitive in markets where ridesharing drivers operate, an app like MyTaxi may be their best chance of surviving in the long term.

Ridesharing has dramatically changed vehicle-for-hire transportation, and as regulators look to address the rise of the sharing economy we should expect anything but the familiar regulatory cycle Rampell references. Taxi companies are facing strong competition from companies that would have been inconceivable almost twenty years ago, and they have the opportunity to develop products that can address the lack of information which contributed to taxi prices rising in deregulated markets. There may well be good arguments against the deregulation of the taxi industry, but such arguments must take into account changes in technology.  

Jason Kuznicki

This month at Cato Unbound, we’re talking about the Search for Extra-Terrestrial Intelligence, or SETI.

Why’s that, you ask?

Several reasons, really. First, although it’s not exactly a hot public policy topic, it will certainly become one if we ever actually find anything. But that’s hardly where the importance of the topic ends.

Much more interesting, to me at least, is that SETI can serve as a springboard for discussing all kinds of important concepts in public policy. Our contributors this month - David Brin, Robin Hanson, Jerome H. Barkow, and Douglas Vakoch - have talked about the open society, cost-benefit analysis, evolutionary psychology, the hubris of experts, the narcissism of small differences, and even Pascal’s Wager (and what’s wrong with it).

So… lots of interesting stuff, particularly for libertarians who are interested in public policy.

Doug Bandow

MOSCOW—Red Square is one of the world’s most iconic locales. Even during the worst of the U.S.S.R. the square was more symbolic than threatening. 

Very different, however, is Lubyanka, just a short walk away. 

In the late 19th century 15 insurance companies congregated on Great Lubyanka Street.  The Rossia agency, one of Russia’s largest, completed its office building in 1900. 

But in 1917 the Bolsheviks seized power.  They took the Rossia building for the new secret police, known as the All-Russian Extraordinary Commission for Combating Counter-Revolution and Sabotage, or Cheka.

The first Cheka head was Felix Dzerzhinsky.  He conducted the infamous “Red Terror,” what he called a “fight to the finish” against the Bolsheviks’ political opponents. 

After his death in 1926 Great Lubyanka Street was renamed Dzerzhinsky Street.  A great statue of Dzerzhinsky, weighing 15 tons, was erected in a circle in front of the Cheka headquarters. 

After the KGB was dissolved the building went to the Border Guard Service, later absorbed by the Federal Security Service (FSB), Russia’s domestic security and counterintelligence agency. Today Lubyanka looks non-threatening, a yellowish color and architectural style less severe than the harshly grandiose Stalinist architecture seen throughout the city.

The KGB faced its greatest challenge in the Gorbachev era.  Demands for reform raced beyond Mikhail Gorbachev’s and the KGB’s control.  In August 1991 KGB head Vladimir Kryuchkov helped plan the coup against Gorbachev. 

After the coup’s collapse a crowd gathered in front of Lubyanka and attempted to pull down the Dzerzhinsky monument.  City officials used a crane to finish the job.

Journalist Yevgenia Albats wrote:  “If either Gorbachev or [Boris] Yeltsin had been bold enough to dismantle the KGB during the autumn of 1991, he would have met little resistance.”  However, these two reformers attempted to fix rather than eliminate the agency.

And the KGB effectively ended up taking over Russia.  Yeltsin named Chekists, or members of the “siloviki” (or power agents), to important government positions, most importantly Vladimir Putin, who headed the FSB, became prime minister in 1999, and then succeeded Yeltsin as president when the latter resigned.

Anne Applebaum, Washington Post columnist, argued “that Putin—and, more importantly, most of the people around him—is deeply steeped in the culture of Andropov’s KGB.”  In her view they are modernizers but authoritarians, who “believe that the rulers of the state must exert careful control over the life of the nation.” 

After taking over Putin turned to his KGB network to run both the government and the economy.  The result, wrote UCLA’s Daniel Treisman, is a “silovarchy” in which “silovarchs” replaced the earlier economic oligarchs.  Whatever the economic consequences of this system, noted Treisman, “the temptation to use secret service tools and techniques predisposes such regimes toward authoritarian politics.”

As I wrote in the American Spectator online, “This system offers a tragic detour for people who desperately need liberty.  But despite the frenzied push in Washington for economic sanctions and military threats, the success of Putinism is well beyond America’s control.  The U.S. certainly should not promote military confrontation with nuclear-armed Moscow over an issue of limited importance to Washington.”

In fact, Putinism may face its strongest challenge on the economic front, from declining energy prices, Western sanctions, and domestic distortions.  Putin’s poll ratings have risen since he seized Crimea, but as the nationalistic fervor fades the Russian people’s desire for prosperity may overcome their desire for order.

Finally, the system faces a natural limit:  the siloviki will die off.  Noted Applebaum, “Sooner or later, the generation trained in the mindset of Andropov’s KGB will retire.”  It’s hard to predict what will follow, but change is likely. 

It then will be critical for Russia’s new leaders to eliminate the Chekist mindset.  But Lubyanka should be preserved, perhaps as a museum about tyranny.  No one should want to repeat the KGB experience.

Randal O'Toole

A left-coast writer named Mark Morford thinks that gas prices falling to $2 a gallon would be the worst thing to happen to America. After all, he says, the wrong people would profit: oil companies (why would oil companies profit from lower gas prices?), auto makers, and internet retailers like Amazon that offer free shipping.

If falling gas prices are the worst thing for America, then the best, Morford goes on to say, would be to raise gas taxes by $6 a gallon and dedicate all of the revenue to boondoggles like “alternative energy and transport, environmental protections, our busted educational system, our multi-trillion debt.” After all, government has proven itself so capable of finding the most cost-effective solutions to any problem in the past, and there’s no better way to reduce the debt than to tax the economy to death.

Morford is right in line with progressives like Naomi Klein, who thinks climate change is a grand opportunity to make war on capitalism. Despite doubts cast by other leftists, Klein insists that “responding to climate change could be the catalyst for a positive social and economic transformation”–by which she means government control of transportation, housing, and just about everything else.

These advocates of central planning remind me of University of Washington international studies professor Daniel Chirot’s assessment of the fall of the Soviet empire. From the time of Lenin, noted Chirot, Soviet planners considered Western industrial systems of the late nineteenth century their model for an ideal economy. By the 1980s, after decades of hard work, they had developed “the most advanced industries of the late 19th and early 20th centuries–polluting, wasteful, energy intensive, massive, inflexible–in short, giant rust belts.”

Morford and Klein want to do the same to the United States, using climate change as their excuse, and the golden age they wish to return to is around 1920, when streetcars and intercity passenger trains were at their peak (not counting the WWII era). Sure, there were cars, but only a few compared with today.

What they don’t understand is that, even at their peak, intercity passenger trains carried the average American only about 900 miles a year, while streetcars and other urban transit carried the average American about 700 miles a year. Moreover, nearly all of this travel was by the top 25 or 30 percent: until that evil capitalist Henry Ford made his mass produced automobile available at affordable prices, the working class people that progressives claim to care about were no more mobile than Americans had been a hundred years before.

Thanks to profiteering automakers and greedy oil companies, the average American today travels by car nearly 15,000 miles a year, close to 10 times the total per capita urban and intercity rail travel of 1920. Morford and Klein, of course, think less travel would be a good thing, since it would result (says Morford) in “people shopping more locally and patronizing small businesses again.” Yet there’s no guarantee of that. Higher gas prices could also lead to people shopping on Amazon or seeking out WalMart’s “always” low prices even more than they do today.

Are Morford, Klein, and their allies ignorant of the facts, economically naive, or do they just object to the choices other people make? It always seems like demagoguery to say that opponents are afraid of freedom, but it’s a natural conclusion for progressives like Morford and Klein.

When they say, “shop locally,” what they mean is, “pay more for inferior goods.” When they say, “don’t reward the oil companies,” what they mean is, “most people shouldn’t be allowed to travel as much as they like.” When they say, “capitalism is bad,” what they mean is, “you shouldn’t be allowed to buy things that other people make because they might earn a profit from it.” When they say, “a planet of suburbs is a terrible idea,” what they mean is, “everyone should live like I do.”

In reality, low gas prices mean increased mobility which in turn should promote the economic recovery that has been stalled for six years by Obama’s central planning. Cars are getting more fuel efficient no matter what oil and gas prices are, and even if that is partly because of government fiat, it is also a lot more cost-effective than trying to change everyone’s lifestyles.

Freedom means allowing people to make choices you wouldn’t make for yourself. Moreover, it means allowing people to make choices you may not agree with for anyone, because in a democracy we agree that no one person has all the answers for everyone else. Ultimately, freedom means understanding that the alternative, no matter how good it sounds on paper, always leads to tyranny and oppression.

If you really care about certain values, and some technologies seem to run counter to those values, then you need to figure out ways to make your values more attractive, not try to tax or regulate those technologies to death. If the price of freedom is a slightly warmer world–and I’m not convinced that it is–then we are better off learning to live with it than having to live under the yoke of well-intentioned but ignorant planners who don’t understand such basic concepts as cost effectiveness or supply and demand.

Steve H. Hanke

Kevin Dowd, a long-time friend and eminent free-banking authority, set his sights on Bitcoin in the book he published this summer, New Private Monies: A Bit Part Player?  His work delivers a refreshingly accurate and straightforward assessment of Bitcoin, ignoring the hype that surrounds it.

Both Kevin and I appreciate the importance of cryptocurrencies: in his own words, “The broader implications of cryptocurrency are extremely profound.”  The peer-to-peer exchange structure common to cryptocurrencies like Bitcoin cuts the intermediary out of transactions.  This eliminates the need for a third party in exchanges and protects wealth against exchange controls or capital controls.  Because Bitcoin and other cryptocurrencies are entirely digital, the location of the two parties of a transaction is irrelevant: transactions can be carried out anywhere.  This also makes transactions highly anonymous, a feature appealing to consumers who cherish privacy.

The intermediary-free, digital transactions characteristic of cryptocurrencies such as Bitcoin are an important step towards exchanges free of regulatory meddling.  In addition, this technology should enable low-cost banking accessible to anyone with a cellphone.  Indeed, cryptocurrencies should improve access to financial services in developing countries and elsewhere because they will complement existing services that now rely on standard currencies (see the M-Pesa in Kenya).

There is, however, an important line to be drawn between the future of the technology behind Bitcoin and the future of Bitcoin itself, thanks to its notorious volatility.  Kevin is crystal clear on the distinction:

“Though the supply of Bitcoin is limited, the demand is very variable; this variability has made its price very uncertain and created a bubble-bust cycle in the Bitcoin market.  Perhaps the safest prediction is that Bitcoin will eventually be displaced by alternative cryptocurrencies with superior features.” 

I couldn’t agree more.  The uncertainty (read: volatility) of Bitcoin speaks for itself in the accompanying chart.
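For readers who want to see that volatility for themselves, here is a minimal Python sketch. The file name, column names, 30-day window, and sqrt(365) annualization are all illustrative assumptions, not the method behind our chart; any daily BTC/USD price series will do:

```python
import numpy as np
import pandas as pd

# Hypothetical input: a CSV of daily BTC/USD closes with "date" and "close" columns.
prices = pd.read_csv("btc_usd_daily.csv", parse_dates=["date"], index_col="date")

# Daily log returns, then a 30-day rolling standard deviation,
# annualized by sqrt(365) since Bitcoin trades every calendar day.
log_returns = np.log(prices["close"]).diff().dropna()
rolling_vol = log_returns.rolling(window=30).std() * np.sqrt(365)

print(rolling_vol.describe())  # summary of the annualized volatility series
```

Run against any daily price history from Bitcoin’s bubble-bust years, a measure like this dwarfs anything seen in major fiat currency pairs.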

As a supporter of cryptocurrencies, I disagree with Kevin on only one basic point.  We are both well aware that volatility is Bitcoin’s great weakness.  I, however, also believe that this volatility proves Bitcoin is a speculative asset, not a currency.  A unit of account is a well-understood measure for defining and comparing the values of goods, services, or purchases, and serving as one is a crucial qualification of money.  Bitcoin is too volatile to be a reliable unit of account, so it cannot be considered a money or currency.

While we might differ on whether Bitcoin should be classified as money, Kevin and I agree on one big thing: expect many more innovations and improvements from cryptocurrency technology.  Do not, however, hope for the same from Bitcoin.

This blogpost was co-authored by Connor Kenehan.

Tim Lynch

Over at Cato’s Police Misconduct web site, we have identified the worst case for the month of November. It turns out to be the Cleveland Police Department.

To begin with, in late November, a Cleveland police officer shot and killed a 12-year-old boy, Tamir Rice.

The press reports based on the police accounts at the time of the incident read:

A rookie Cleveland police officer shot a 12-year-old boy outside a city recreation center late Saturday afternoon after the boy pulled a BB gun from his waistband, police said.

Police were responding to reports of a male with a gun outside Cudell Recreation Center at Detroit Avenue and West Boulevard about 3:30 p.m., Deputy Chief of Field Operations Ed Tomba said.

A rookie officer and a 10-15 year veteran pulled into the parking lot and saw a few people sitting underneath a pavilion next to the center. The rookie officer saw a black gun sitting on the table, and he saw the boy pick up the gun and put it in his waistband, Cleveland Police Patrolmen’s Association President Jeffrey Follmer said.

The officer got out of the car and told the boy to put his hands up. The boy reached into his waistband, pulled out the gun and the rookie officer fired two shots, Tomba said.

As detailed in this video report by MSNBC’s Chris Hayes, the initial reports by the police do not jibe with video evidence in several major respects.

The video shows Rice, alone, playing with his toy gun and also with the snow, as 12-year-olds are wont to do. He was not, as the police said, with “a few people” in the pavilion. Other police reports to the press said the shooting officer got out of his car and told Rice three times to put his hands up. The video, unfortunately without audio and recorded at only two frames per second, shows the officer shooting Rice within 1.5-2 seconds of exiting the police vehicle.

The officers also waited several minutes before administering CPR to the fallen child.

The original call that drew the police to the park in the first place said that the person with the gun was likely a minor and that the gun was likely a toy. Apparently, this information was not relayed to the responding officers, who called in the shooting victim as “possibly 20” years old.

The officer who shot Rice “was specifically faulted for breaking down emotionally while handling a live gun,” according to subsequent reporting. The internal memo that informed the report recommended that the officer be “released from the employment of the City of Independence [Ohio].”

Here’s the thing: The Cleveland Police Department hired the officer without checking his personnel file from his previous law enforcement job!

This tragic event is just the latest in which police used deadly force, and likely too quickly. The facts released by the police department that favored the officers involved were either misleading or inaccurate.

At best, this event highlights poor communication and procedure leading up to and immediately following a tragedy. At worst, this is a police department caught covering up a series of preventable mistakes that cost the life of a young boy.

The Department of Justice recently issued a report after looking into the policies and practices of the Cleveland Police Department.  According to the New York Times,

The Justice Department report on Cleveland cataloged many instances of unjustified force, including officers who assaulted, pepper-sprayed and even Tasered people already being restrained. In one case last year, the police fired two shots at a man wearing only boxer shorts who was fleeing from two armed assailants. In a 2011 case, a man who had been restrained on the ground with his arms and legs spread was then kicked by officers. He was later treated for a broken bone in his face.

The city’s policing problems, [Attorney General] Holder said, stemmed from “systemic deficiencies, including insufficient accountability, inadequate training and equipment, ineffective policies and inadequate engagement with the community.”

David Boaz

Former Florida governor – but Texas native – Jeb Bush told the Wall Street Journal CEO Council:

Republicans need to show they’re not just against things, that they’re for a bunch of things. 

Which reminds me of a quotation from Lyndon B. Johnson that George Will often cites:

We’re in favor of a lot of things and we’re against mighty few.

Let’s hope Bush’s “bunch” is different from Johnson’s “lot.” We can’t afford another such escalation in the size, scope, and power of government.

Paul C. "Chip" Knappenberger and Patrick J. Michaels

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature or of a more technical nature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

Despite what you may think if you reside in the eastern United States, the world as a whole in 2014 has been fairly warm. For the past few months, several temperature-tracking agencies have been hinting that this year may turn out to be the “warmest ever recorded”—for whatever that is worth (keep reading for our evaluation). The hints have been turned up a notch with the latest United Nations climate confab taking place in Lima, Peru, through December 12.  The mainstream media are happy to popularize these claims (as are government-money-seeking science lobbying groups).

But a closer look shows two things: first, whether or not 2014 proves to be the record warmest year depends on whom you ask; and second, no matter where the final number ranks in the observations, it will rank among the greatest “busts” of climate model predictions (which collectively expected the year to be a lot warmer). The implication of the first is nothing more than jostling for press coverage. The implication of the second is that future climate change appears to be less of a menace than assumed by the president and his pen and phone. 

Let’s examine the various temperature records.

First, a little background. Several different groups compile the global average temperature in near-real time. Each uses slightly different data-handling techniques (such as how to account for missing data), and so each gets a slightly different (but nevertheless very similar) value. Several groups compute the surface temperature, while others calculate the global average temperature in the lower atmosphere (a bit freer from confounding factors like urbanization). All, thus far, have data for 2014 compiled only through October, so the final ranking for 2014, at this point in time, is only speculation (although a pretty well-founded one).

The three major groups calculating the average surface temperature of the earth (land and ocean combined) all currently indicate that 2014 will likely nudge out 2010 (by a couple hundredths of a degree Celsius) to become the warmest year in each dataset (records that begin in the mid-to-late 1800s). This is almost certainly true in the datasets maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) and the UK Met Office Hadley Centre. In the record compiled by NASA’s Goddard Institute for Space Studies (GISS), the 2014 year-to-date value is in a virtual dead heat with the annual value for 2010, so the final ranking will depend heavily on how the data come in for November and December. (The other major data compilation, the one developed by the Berkeley Earth group, is not updated in real time.)

There is one other compilation of the earth’s surface temperature history, recently developed by researchers Kevin Cowtan and Robert Way of the University of York. This dataset rose to prominence a year ago, when it showed that if improved (?) methods were used to fill in data-sparse regions of the earth (primarily the Arctic), the global warming “hiatus” was more of a global warming “slowdown.” In other words, a more informed guess indicated that the Arctic had been warming at a greater rate than the other datasets expressed. This instantly made the Cowtan and Way dataset the darling of folks who wanted to show that global warming was alive and well and not, in fact, in a coma (a careful analysis of the implications of Cowtan and Way’s findings, however, proved the data not up to that task). So what are the prospects of 2014 being a record warm year in the Cowtan and Way dataset? Slim. 2014 currently trails 2010 by a couple hundredths of a degree Celsius—an amount that will be difficult to make up without an exceptionally warm November and December. Consequently, the briefly favored dataset is now being largely ignored.

It is worth pointing out that, as a result of data and computational uncertainty, in none of the surface compilations will 2014 be statistically different from 2010—in other words, it is impossible to say with statistical certainty that 2014 was (or was not) the all-time warmest year ever recorded.
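A back-of-the-envelope illustration of that point, with stand-in numbers rather than any agency’s actual values or uncertainties:

```python
# Illustrative record check; all numbers are hypothetical stand-ins.
t2014 = 0.68       # 2014 annual anomaly, degrees C
t2010 = 0.66       # 2010 annual anomaly, degrees C
two_sigma = 0.10   # combined measurement/computational uncertainty, degrees C

gap = abs(t2014 - t2010)
if gap < two_sigma:
    print(f"Gap of {gap:.2f} C is inside +/-{two_sigma:.2f} C: a statistical tie")
else:
    print(f"Gap of {gap:.2f} C exceeds the uncertainty: a clear record")
```

When the gap between candidate record years is a couple hundredths of a degree and the uncertainty is several times that, “warmest ever” is a coin flip, not a finding.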

It is a different story in the lower atmosphere.

There, the two groups compiling the average temperature show that 2014 is nowhere near the warmest (in records that begin in 1979), trailing 1998 by several tenths of a degree Celsius. This difference is so great that it is statistically clear that 2014 will not be a record year (it will probably fall in the lower half of the top five warmest years in both the Remote Sensing Systems (RSS) and the University of Alabama in Huntsville (UAH) datasets). Temperatures in the lower atmosphere are more sensitive to the occurrence of El Niño conditions, and thus the super El Niño of 1998 set a high temperature mark that will likely stand for many years to come, or at least until another huge El Niño occurs.

Basically, what all this means is that if you want 2014 to be the “warmest year ever recorded,” you can find data to back you up; and if you prefer that it not be, well, you can find data to back up that position as well.

Either way, the former will make the headlines.

But these headlines will be misplaced. The real news is that climate models continue to perform incredibly poorly by grossly overestimating the degree to which the earth is warming.

Let’s examine climate model projections for 2014 against observations from the dataset that gives 2014 the greatest chance of being the warmest year—the NOAA dataset.

Figure 1 shows the average of 108 different climate model projections of the annual surface temperature of the earth from 1980 through 2014 along with the annual temperature as compiled by NOAA.


Figure 1. Global annual surface temperature anomalies from 1980 to 2014. The average of 108 climate models (red) and observations from NOAA (blue) are anomalies from the 20th century average. In the case of the NOAA observations, the 2014 value is the average of January-October.

For the past 16 straight years, climate models have collectively projected more warming than has been observed.

Over the period 1980-2014, climate models projected the global temperature to rise at a rate of 0.24°C/decade while NOAA observations pegged the rise at 0.14°C/decade, about 40 percent less. Over the last 16 years, the observed rise is nearly 66 percent less than climate model projections. The situation is getting worse, not better. This is the real news, because it means that prospects for overly disruptive climate change are growing slimmer, as are justifications for drastic intervention.
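For the curious, the trend comparison is ordinary least-squares fitting of the annual anomalies, converted to degrees per decade. The sketch below uses the trend values quoted above rather than the underlying series, which we have not reproduced here:

```python
import numpy as np

def decadal_trend(years, anomalies):
    """Ordinary least-squares warming trend in degrees C per decade."""
    slope_per_year = np.polyfit(years, anomalies, deg=1)[0]
    return 10.0 * slope_per_year

# The bottom line, using the quoted 1980-2014 trends:
model_trend = 0.24   # degrees C/decade, average of 108 climate models
obs_trend = 0.14     # degrees C/decade, NOAA observations
shortfall = (model_trend - obs_trend) / model_trend
print(f"Observed warming runs {shortfall:.0%} below the model average")  # ~42%
```

The same arithmetic applied to the last 16 years yields the roughly 66 percent shortfall noted above.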

We don’t expect many stories to look any further than their “2014 is the warmest year ever” headlines.

As to the rest of the picture, and the part which holds the deeper and more important implications, well, you’ll have to keep checking back with us here—we’re happy to fill you in!

Tim Lynch

Last November, voters in Washington, DC overwhelmingly approved a referendum that would have legalized marijuana in the city. Now that measure has been stymied by House Republicans, led by Rep. Andy Harris (R-MD).


From today’s Washington Post: The move “shocked elected DC leaders, advocates for marijuana legalization and civil liberties groups.”


As a constitutional matter, Congress can set policies for the District of Columbia, but this is an awful move. There was no vote on marijuana reform; the voter-approved measure was simply overridden by language inserted into a gigantic spending bill.


Isn’t it interesting that such tactics never seem to be used to downsize the federal government and reduce its powers?  Why not zero out the budget for the DEA or the Export-Import Bank?

Charles Hughes

A new working paper from the National Bureau of Economic Research finds that significant minimum wage increases can hurt the very people they are intended to help. Authors Jeffrey Clemens and Michael Wither find that significant “minimum wage increases reduced the employment, average income, and income growth of low-skilled workers over short and medium-run time horizons.” Most troublingly, these low-skilled workers saw “significant declines in economic mobility”: they were 5 percentage points less likely to reach lower middle-class earnings in the medium term. The authors offer a possible explanation: the minimum wage increases reduced these workers’ “short-run access to opportunities for accumulating experience and developing skills.” Many of the people affected by minimum wage increases are on one of the first rungs of the economic ladder, low on marketable skills and experience. Working in entry-level jobs eventually allows them to accumulate both and move up. By making it harder for low-skilled workers to get on that first rung, minimum wage increases could actually lower their chances of reaching the middle class.

Most of the debate over a minimum wage increase centers on its effect on aggregate employment, or the total number of jobs and hours worked that would be lost. A consensus remains elusive, but the Congressional Budget Office recently weighed in, estimating that a three-year phase-in of a $10.10 federal minimum wage would reduce total employment by about 500,000 workers by the time it was fully implemented. Taken together with the findings of Clemens and Wither, this suggests that minimum wage increases not only can have negative effects on the economy as a whole, but can also harm the economic prospects of low-skilled workers at the individual level.

Four states approved minimum wage increases through ballot initiatives in the recent midterm, and the Obama administration has proposed a significant increase at the federal level. This study should give them a reason to reconsider.

Recent Cato work on this topic can be found here and here.

Nicole Kaeding

Last night, House and Senate negotiators released the legislative text for the government’s newest spending bill, dubbed the “Cromnibus.” The bill authorizes the government to spend $1.1 trillion on discretionary programs between now and September 30, 2015. The total spending level honors last year’s Ryan-Murray budget deal, but also makes a number of important changes to federal law.

These changes include:

Environmental Protection Agency (EPA): The EPA’s funding was cut by $60 million relative to last fiscal year. The agency’s budget has been cut by 21 percent since fiscal year 2010.

Department of Homeland Security (DHS): Following President Obama’s executive action on immigration, Republicans sought to limit funding for DHS. Under the deal, DHS is funded only through February. The incoming Congress will need to fund the agency for the remainder of the fiscal year.

Internal Revenue Service (IRS): The IRS’ budget is cut by $345.6 million.

ObamaCare: The bill does not cut funding to ObamaCare implementation, but it also does not include any new funding to the Department of Health and Human Services and the Internal Revenue Service, the two agencies with primary implementation responsibilities. The bill also limits ObamaCare’s risk corridor provision, which provided a bailout to insurance companies.

Marijuana: The District of Columbia voted overwhelmingly in November to legalize marijuana. The Cromnibus halts the legalization process.

Yucca Mountain: The bill continues funding for the proposed nuclear storage site. Earlier this year, the Nuclear Regulatory Commission confirmed Yucca Mountain’s safety.

Overseas Contingency Operations: The budget deal also provides $64 billion in funding for military operations, including $5 billion for the fight against ISIS. The $64 billion is in addition to the $1.1 trillion in discretionary spending.

Internet Tax Moratorium: The federal moratorium on state and local internet taxes continues for one year.


Neal McCluskey

When I first heard about the White House Summit on Early Education being held today, I worried. “I sure hope this isn’t going to be a PR stunt to cheerlead for government pre-kindergarten programs,” I thought. Then I got the announcement: U.S. Secretary of Education Arne Duncan will be having a Twitter chat with pop sensation Shakira in conjunction with the summit! “Oh, I was just being silly,” I said to myself, relieved that this would be a sober, objective discussion about what we do – and do not – know about the effectiveness of pre-K programs.

Okay, that’s not actually what happened. In fairness to Shakira, she does appear to have a very serious interest in children’s well-being. Unfortunately, the White House does not appear to want to have an objective discussion of early childhood education.

Just look at this, from the official White House blog:

For every dollar we invest in early childhood education, we see a rate of return of $7 or more through a reduced need for spending on other services, such as remedial education, grade repetition, and special education, as well as increased productivity and earnings for these kids as adults.

Early education is one of the best investments our country can make. Participation in high-quality early learning programs—like Head Start, public and private pre-K, and childcare—provide children from all backgrounds with a strong start and a solid foundation for success in school.

Let me count the ways that this is deceptive, or just plain wrong, as largely documented in David Armor’s recent Policy Analysis, “The Evidence on Universal Preschool”:

  • The 7-to-1 ROI figure – for which the White House cites no source – almost certainly comes from work done by James Heckman looking at the rate of return for the Perry Preschool program. It may well be accurate, but Perry was a microscopic, hyperintensive program from the 1960s that cannot be generalized to any modern, large-scale program.
  • If you look at the longitudinal, “gold-standard” research results for Head Start, you see that the modest advantages accrued early on essentially disappear by first grade…as if Head Start never happened. And that finding comes from federal studies released by the Obama administration itself.
  • It stretches credulity to call Head Start “high quality,” not just based on its results, but on its long history of waste and paralysis. Throughout the 2000s the federal Government Accountability Office and general media reported on huge waste and failure in the program.
  • Most evaluations of state-level pre-K programs do not randomly assign children to pre-K and compare their outcomes with those of children not chosen, the “gold standard” mentioned above. Instead, they often use a “regression discontinuity design,” which suffers from several shortcomings, arguably the biggest of which is that it does not allow longitudinal comparisons. In other words, you can’t detect the “fade out” that seems to plague early childhood education programs and render them essentially worthless. One large-scale state program that was evaluated using random assignment – Tennessee’s – appears to be ineffective.
  • The White House says early childhood programs can help “children from all backgrounds.” Not only is that not true if benefits fade to nothing, but a federal, random-assignment evaluation of the Early Head Start program found that it had negative effects on the most at-risk children.

I suspect the vast majority of people behind expanding preschool are well intentioned, and I encourage them to leverage as much private and philanthropic funding as they can to explore different approaches to pre-K and see what might work. But a splashy event intended to proclaim something is true for which we just don’t have good evidence doesn’t help anyone.

Let’s not mislead taxpayers…or kids.
