Feed aggregator

Chris Edwards

Policymakers are battling over a funding bill for the Department of Homeland Security (DHS) and its agencies, including the Federal Emergency Management Agency (FEMA). The disagreement over the bill involves the funding of President Obama’s recent immigration actions.

If a DHS funding bill is not approved, the department will partially shut down. The administration has been highlighting the negative effects of that possibility, but the battle illustrates how the government has grown far too large. Federal shutdowns may cause disruption, but that is because the government has extended its tentacles into activities that should be left to state and local governments and the private sector.

To the extent possible, we should move the most important activities in society out of Washington because the federal government has become such a screwed-up institution. Air traffic control, for example, is too crucial to allow it to get caught in D.C. budget squabbles, as it did in 2013. Air traffic control should be privatized.

Let’s look at the story being told by FEMA head Craig Fugate about a possible shutdown:

I can say with certainty that the current standoff has a real impact on our ability to ensure that a wide range of emergency personnel across the country have the resources they need to do their jobs and keep our communities safer and more secure…

At FEMA, one of our critical missions is reviewing applications and awarding grants to communities across the country, which can help firefighters, police officers, hospital workers, and emergency managers get the staff, training and equipment they need to prepare for, respond to, recover from, and mitigate a wide array of hazards…

Today, we find ourselves in the midst of yet another continuing resolution, which only provides short-term, temporary funding to our agency. This isn’t just a slight technical difference – it has a major impact on our ability to assist state, local, and tribal public safety agencies…

Making matters worse, the current situation is a showstopper for our grant program. Our application process for grants should have started in October; it is now February and we still haven’t been able to issue new grants. Moreover, during these ongoing continuing resolutions, local first responders from across the U.S. have made plans to attend training classes at one of our three national training centers, where they will learn valuable skills they can bring back to their communities – only to have a wrench thrown in the works caused by uncertainty in the budget. Our state, local, and tribal partners are facing increasingly urgent choices about how they will make ends meet without matching FEMA grants.

Mr. Fugate is a well-regarded leader, unlike some of FEMA’s past leaders. But his argument is akin to what we would hear if the government took over food distribution in America: the Federal Administrator for Food would point to an urgent crisis whenever his budget was blocked. The lesson is that the more control the federal government has over society, the more vulnerable we all are to its dysfunction.

As for FEMA, my recent study examines why it is a mistake to fund and direct disaster preparation, response, and relief from Washington. FEMA’s response to some major disasters has been slow, disorganized, and profligate. Fugate implied that local police and firefighters across the country are now hooked on the federal teat. How could that possibly be a good idea?

Federalism is supposed to undergird America’s system of handling disasters, particularly natural disasters. State, local, and private organizations should play the dominant role. So however the current battle over DHS funding turns out, policymakers should begin cutting FEMA’s budget and handing back responsibility for disasters to the states and private sector.

Paul C. "Chip" Knappenberger and Patrick J. Michaels

The White House Council for Environmental Quality (CEQ) has released a draft of revised guidance that “describes how Federal departments and agencies should consider the effects of greenhouse gas emissions and climate change” under reviews governed by the National Environmental Policy Act (NEPA)—an act which basically requires some sort of assessment as to the environmental impacts of all proposed federal actions.

Under the revised guidance, the CEQ makes it clear that they want federal agencies now to include the impact on climate change in their environmental assessments.

But here’s the kicker: the CEQ doesn’t want the climate change impacts to be described using measures of climate—like temperature, precipitation, storm intensity or frequency, etc.—but rather by using the measure of greenhouse gas emissions.

Basically, the CEQ guidance is a roadmap for how to circumvent the NEPA requirements.

Here is how the CEQ characterizes the intent of the NEPA:

NEPA is designed to promote disclosure and consideration of potential environmental effects on the human environment resulting from proposed actions, and to provide decisionmakers with alternatives to mitigate these effects. NEPA ensures that agencies take account of environmental effects as an integral part of the agency’s own decision-making process before decisions are made. It informs decisionmakers by ensuring agencies consider environmental consequences as they decide whether to proceed with a proposed action and, if so, how to take appropriate steps to eliminate or mitigate adverse effects. NEPA also informs the public, promoting transparency of and accountability for consideration of significant environmental effects. A better decision, rather than better—or even excellent—paperwork is the goal of such analysis.

Clearly, the emphasis of NEPA is on the “environment” and better informing policymakers and the public as to the potential impacts of proposed federal actions on the environment.

But here is how the CEQ summarizes the intent of its new guidance:

Agencies should consider the following when addressing climate change:

(1) the potential effects of a proposed action on climate change as indicated by its GHG emissions;

This represents a fundamental scientific error—greenhouse gas (GHG) emissions are not themselves a measure of an “environmental effect,” nor are they an indicator of “climate change.”

This misdirection—one inconsistent with NEPA—immediately caught our attention, and we developed and submitted a Comment on the CEQ guidance that pointed out this glaring error. The public comment period, which originally closed yesterday, has been extended until March 25, 2015.

The sense of our Comment was that cloaking climate change impacts in the guise of greenhouse gas emissions serves not to “promote transparency” or “inform decisionmakers” and “the public,” but rather has the opposite intent: misdirection and misinformation.

Why does the CEQ seek to limit the climate change discussion to greenhouse gases?

In light of the difficulties in attributing specific climate impacts to individual projects, CEQ recommends agencies use the projected GHG emissions and also, when appropriate, potential changes in carbon sequestration and storage, as the proxy for assessing a proposed action’s potential climate change impacts. This approach allows an agency to present the environmental impacts of the proposed action in clear terms and with sufficient information to make a reasoned choice between the no-action and proposed alternatives and mitigations, and ensure the professional and scientific integrity of the discussion and analysis.

They got the first part right. The reason it is “difficult” is not that the tools don’t exist—after all, that’s what climate models were developed for: to take carbon dioxide emissions and convert them into environmental impacts—but rather that any attempt to run the emissions through such climate models would show that they have no detectable impact.

In other words, it would show the assessment of the climate change impacts of federal actions, as directed by the CEQ, to be a complete and utter waste of time.

How do we know this? Because even a complete cessation of all greenhouse gas emissions from the U.S., starting tomorrow and running forever, would avert somewhat less than 0.15°C of future global temperature rise between now and the end of the century—an amount that is environmentally insignificant. Lesser actions will have lesser impacts; you can see for yourself here.

This is the last thing the White House wants federal agencies to conclude. So instead of assessing actual climate impacts (of which there are none) of federal actions, the CEQ directs agencies to cast the effect in terms of greenhouse gas emissions—which can be used for all sorts of mischief.  For example, see how the EPA uses greenhouse gas emissions instead of climate change to promote its regulations limiting carbon dioxide emissions from power plants.

No doubt this is the type of analysis that the CEQ has in mind—one which seeks to elevate policy initiatives (like the Climate Action Plan) above hard scientific analysis.

Here is how we concluded our Comment to the CEQ:

To best serve policymakers and the general public, the CEQ should state that all but the largest federal actions have an undetectable and inconsequential impact on the environment through changes in the climate. And for the largest federal actions, an analysis of the explicit environmental impacts resulting from greenhouse gas emissions arising from the action should be detailed, with the impacts assessment not limited to climate change but also to include other environmental effects such as impacts on overall vegetative health (including crop yield and production).

Substituting greenhouse gas emissions for climate change and other environmental impacts, as called for in the guidelines described in this current draft, is not only insufficient but scientifically inadequate and potentially misleading. As such, these CEQ guidelines should be rescinded and discarded.

Our Comment, in its entirety, is available here.

Jonathan Blanks

Those who follow police misconduct closely know that patterns of abuse can become normalized when tolerated or unchecked by police supervisors. Abuses that went unreported or were unsubstantiated in years past have been exposed by the growing presence of camera phones and other technologies that record police-public interactions. But they can’t catch them all.

The Guardian’s Spencer Ackerman has reported a truly disturbing practice in Chicago. The police have established a “black site” area where Americans are held incommunicado to be interrogated. Prisoners are held without charge and in violation of their constitutional rights and without access to legal counsel:

The facility, a nondescript warehouse on Chicago’s west side known as Homan Square, has long been the scene of secretive work by special police units. Interviews with local attorneys and one protester who spent the better part of a day shackled in Homan Square describe operations that deny access to basic constitutional rights.

Alleged police practices at Homan Square, according to those familiar with the facility who spoke out to the Guardian after its investigation into Chicago police abuse, include:

  • Keeping arrestees out of official booking databases.
  • Beating by police, resulting in head wounds.
  • Shackling for prolonged periods.
  • Denying attorneys access to the “secure” facility.
  • Holding people without legal counsel for between 12 and 24 hours, including people as young as 15.

At least one man was found unresponsive in a Homan Square “interview room” and later pronounced dead.

Unlike a precinct, no one taken to Homan Square is said to be booked. Witnesses, suspects or other Chicagoans who end up inside do not appear to have a public, searchable record entered into a database indicating where they are, as happens when someone is booked at a precinct. Lawyers and relatives insist there is no way of finding their whereabouts. Those lawyers who have attempted to gain access to Homan Square are most often turned away, even as their clients remain in custody inside.

“It’s sort of an open secret among attorneys that regularly make police station visits, this place – if you can’t find a client in the system, odds are they’re there,” said Chicago lawyer Julia Bartmes.

This is not Chicago’s first brush with systematic abuse of citizens. Just this month, a retired CPD detective who had covered up the torture and false confessions of over 100 people in the 1970s and ’80s was released from prison. He still collects a $4,000-per-month pension.

Police transparency is essential to effective policing, but police organizations often protect their officers from outside scrutiny, making it difficult to hold officers accountable for repeated violations of policy. Secretive internal investigations can stonewall public inquiry into disputed officer-related shootings committed in broad daylight. Left unchecked, entire police departments can develop institutional tolerance for constitutional violations in day-to-day policing. 

But what Ackerman reports seems to be the ultimate lack of police transparency. If what he reports is true, a full investigation should be launched by government officials outside of the Chicago Police Department to examine such egregious violations of civil and constitutional rights.

At PoliceMisconduct.net, we’re shining a light to bring these abuses out into the open.

Read Ackerman’s powerful report here.

Julian Sanchez

At a New America Foundation conference on cybersecurity Monday, NSA Director Mike Rogers gave an interview that—despite his best efforts to deal exclusively in uninformative platitudes—did produce a few lively moments. The most interesting of these came when techies in the audience—security guru Bruce Schneier and Yahoo’s chief information security officer Alex Stamos—challenged Rogers’ endorsement of a “legal framework” for requiring device manufacturers and telecommunications service providers to give the government backdoor access to their users’ encrypted communications. (Rogers repeatedly objected to the term “backdoor” on the grounds that it “sounds shady”—but that is quite clearly the correct technical term for what he’s seeking.) Rogers’ exchange with Stamos, transcribed by John Reed of Just Security, is particularly illuminating:

Alex Stamos (AS): “Thank you, Admiral. My name is Alex Stamos, I’m the CISO for Yahoo!. … So it sounds like you agree with Director Comey that we should be building defects into the encryption in our products so that the US government can decrypt…

Mike Rogers (MR): That would be your characterization. [laughing]

AS: No, I think Bruce Schneier and Ed Felten and all of the best public cryptographers in the world would agree that you can’t really build backdoors in crypto. That it’s like drilling a hole in the windshield.

MR: I’ve got a lot of world-class cryptographers at the National Security Agency.

AS: I’ve talked to some of those folks and some of them agree too, but…

MR: Oh, we agree that we don’t accept each others’ premise. [laughing]

AS: We’ll agree to disagree on that. So, if we’re going to build defects/backdoors or golden master keys for the US government, do you believe we should do so — we have about 1.3 billion users around the world — should we do for the Chinese government, the Russian government, the Saudi Arabian government, the Israeli government, the French government? Which of those countries should we give backdoors to?

MR: So, I’m not gonna… I mean, the way you framed the question isn’t designed to elicit a response.

AS: Well, do you believe we should build backdoors for other countries?

MR: My position is — hey look, I think that we’re lying that this isn’t technically feasible. Now, it needs to be done within a framework. I’m the first to acknowledge that. You don’t want the FBI and you don’t want the NSA unilaterally deciding, so, what are we going to access and what are we not going to access? That shouldn’t be for us. I just believe that this is achievable. We’ll have to work our way through it. And I’m the first to acknowledge there are international implications. I think we can work our way through this.

AS: So you do believe then, that we should build those for other countries if they pass laws?

MR: I think we can work our way through this.

AS: I’m sure the Chinese and Russians are going to have the same opinion.

MR: I said I think we can work through this.

I’ve written previously about why backdoor mandates are a horrible, horrible idea—and Stamos hits on some of the reasons I’ve pointed to in his question. What’s most obviously disturbing here is that the head of the NSA didn’t seem to have even a bad response prepared to such an obvious objection—he had no serious response at all. China and Russia may not be able to force American firms like Google and Apple to redesign their products to be more spy-friendly, but if the American government does their dirty work for them with some form of legal backdoor mandate, those firms will be hard pressed to resist demands from repressive regimes to hand over the keys. Rogers’ unreflective response seems like a symptom of what a senior intelligence official once described to me as the “tyranny of the inbox”: A mindset so myopically focused on solving one’s own immediate practical problems that the bigger picture—the dangerous long-term consequences of the easiest or most obvious quick fix solution—is barely considered.

What we also see, however, is a hint as to why officials like Rogers and FBI Director James Comey seem so dismissive of the overwhelming consensus of security professionals and cryptographers that it’s not technically feasible to implement a magical “golden key” that will permit the “good guys” to unlock encrypted data while leaving it secure against other adversaries. No doubt these officials are asking their own experts a narrow, technical question and getting a narrow, technically correct answer: There is a subfield of cryptography known as “kleptography” that studies the design of “asymmetric backdoors.” The idea is that the designer of a cryptographic algorithm can bake into it a very specific vulnerability that depends on a lengthy mathematical key that is too large to guess and cannot be easily reverse-engineered from the algorithm itself. Probably the most famous example of this is the vulnerability in the Dual Elliptic Curve algorithm NSA is believed to have inserted in a widely-used commercial security suite. More prosaically, there is the method companies like Apple use to control what software can run on their devices: Their processors are hard-coded with the company’s public key, and (in theory) will only run software signed by Apple’s private developer key.
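The hard-coded-public-key model described above can be sketched in a few lines. The toy below uses textbook RSA with tiny classroom primes purely to show the asymmetry: the verification key can be baked into a device while the signing key stays with the vendor. It is not real cryptography, and the “firmware” strings are invented for the example.

```python
# Sketch of the hard-coded-key signing model: the "device" knows only the
# public key (N, E); only the holder of the private exponent D can produce
# signatures the device will accept. Textbook RSA with tiny primes
# (p=61, q=53) -- utterly insecure, for illustration only.
import hashlib

N, E = 3233, 17   # public key, baked into the device at manufacture
D = 2753          # private signing key, held only by the vendor

def _digest(message: bytes) -> int:
    # Reduce a SHA-256 digest into the toy modulus's range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def vendor_sign(message: bytes) -> int:
    return pow(_digest(message), D, N)

def device_accepts(message: bytes, signature: int) -> bool:
    # The device verifies with (N, E) alone; it never needs D.
    return pow(signature, E, N) == _digest(message)

firmware = b"legitimate update"
sig = vendor_sign(firmware)
print(device_accepts(firmware, sig))            # True
print(device_accepts(firmware, (sig + 1) % N))  # False: forged signature rejected
```

Because modular exponentiation with an exponent coprime to the group order is a permutation, the genuine signature is the only value the device will accept for a given message, which is what makes the baked-in public key useful as a gatekeeper.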

So there’s a sense in which it is technically feasible to do what NSA and FBI would like. There’s also a sense in which it’s technically possible for a human being to go without oxygen for ten minutes—but in practice you’ll be in for some rude surprises unless you ask the follow-up question: “Will the person be left irreparably brain damaged?” When Comey or Rogers get a ten-minute briefing from their experts about the plausibility of designing “golden key” backdoors, they are probably getting the technically accurate answer that yes, on paper, it is possible to construct a cryptographic algorithm with a vulnerability that depends on a long mathematical key known only to the algorithm’s designer, and which it would be computationally infeasible for an adversary to find via a “brute force” attack. In theory. But to quote that eminent cryptographer Homer Simpson: “I agree with you in theory, Marge. In theory, communism works. In theory.”

The trouble, as any good information security pro will also tell you, is that real world systems are rarely as tidy as the theories, and the history of cryptography is littered with robust-looking cryptographic algorithms that proved vulnerable under extended scrutiny or were ultimately impossible to implement securely under real-world conditions, where the crypto is inevitably just one component in a larger security and software ecosystem. A measure of adaptability is one virtue of “end to end” encryption, where cryptographic keys are generated by, and held exclusively by, the end users: If my private encryption key is stolen or otherwise compromised, I can “revoke” the corresponding public key and generate a new one. If some clever method is discovered that allows an attacker to search the “key space” of a cryptosystem more quickly than was previously thought possible, I can compensate by generating a longer key that remains beyond the reach of any attacker’s computing resources. But if a “golden key” that works against an entire class of systems is cracked or compromised, the entire system is vulnerable—which makes it worthwhile for sophisticated attackers to devote enormous resources to compromising that key, far beyond what it would make sense to expend on the key for any single individual or company.

So maybe you don’t want a single master key: Maybe you prefer a model where every device or instance of software has its own corresponding backdoor key. This creates its own special set of problems, because now you’ve got to maintain and distribute and control access to the database of backdoor keys, and ensure that new keys can’t be generated and used without creating a corresponding key in the master database. This weak point—key distribution—is the one NSA and GCHQ are purported to have exploited in last week’s story about the theft of cell phone SIM card keys.  Needless to say, this model also massively reduces the flexibility of a communications or data storage system, since it means you need some centralized authority to generate and distribute all these keys.  (Contrast a system like GPG, which allows users to generate as many keys as they need without any further interaction with the software creator.) You also, of course, have the added problem of designing your system to resist  modification by the user or device owner, so the keys can’t be changed once they leave the manufacturer.
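The key-distribution weak point described above is easy to see in miniature. In the sketch below, every device’s backdoor key lands in one central database, so a single copy of that database unlocks every device’s traffic. The “cipher” is a throwaway XOR stream, and the device names and messages are invented for the example.

```python
# Sketch of the per-device escrow model: each device gets its own backdoor
# key, but a central database must hold them all -- making that database a
# single point of catastrophic failure. Toy XOR-stream "encryption" only.
import hashlib
import os

escrow_db = {}  # the central backdoor-key database: one entry per device

def provision(device_id: str) -> bytes:
    key = os.urandom(32)
    escrow_db[device_id] = key   # every key generated is also escrowed
    return key

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Deterministic keystream from the key; XOR twice returns the plaintext.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

alice_key = provision("alice-phone")
ciphertext = xor_stream(alice_key, b"meet at noon")

# A lawful request -- or a thief who copies escrow_db -- recovers everything:
stolen = dict(escrow_db)
print(xor_stream(stolen["alice-phone"], ciphertext))  # b'meet at noon'
```

Contrast this with the GPG-style model mentioned above, where users generate their own keys locally: there the compromise of one key exposes one user, not every device ever provisioned.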

As I’ve argued elsewhere, the feasibility of implementing a crypto backdoor depends significantly on the nature of the system where you’re trying to implement it. If you want backdoors in an ecosystem like Apple’s, where you have a single manufacturer producing devices with hardcoded cryptographic keys and exerting control over the software running on its devices, maybe (maybe) you can pull it off without too massive a reduction in the overall security of the system. Ditto if you’re running a communications system where all messages are routed through a centralized array of servers—assuming users are willing to trust that centralized hub with access to their most sensitive data. If, on the other hand, you want backdoors that are compatible with a decentralized peer-to-peer communications network that uses software-generated keys running on a range of different types of computing hardware, that’s going to be a much bigger problem. So when Mike Rogers asks his technical experts whether Apple could realistically comply with a mandate to provide backdoor access to encrypted iPhone data, they might well tell him it’s technically doable—but that doesn’t mean there wouldn’t be serious problems implementing such a mandate generally.

In short, Rogers’ dismissive attitude in the exchange above seems like prime evidence that a little knowledge can indeed be a dangerous thing. He’s got a lot of “world class cryptographers” eager to give him the—very narrowly technically accurate—answer he wants to hear: It is mathematically possible to create backdoors of this sort, at least on certain types of systems.  The reason the rest of the cryptographic community disagrees is that they’re not limiting themselves to giving a simplified five-minute answer to the precise question the boss asked, or finding an abstract solution to a chalkboard problem.   In other words, they’re looking at the bigger picture and recognizing that actually implementing these solutions across a range of data storage and communications architectures—even on the dubious premise that the global market could be compelled to use broken American crypto indefinitely—would create an intractable array of new security problems.  We can only hope that eventually one of the in-house experts that our intelligence leaders actually listen to will sit the boss down for long enough to break the bad news.

Ilya Shapiro

Freedom of contract—the right of individuals to manage and govern their own affairs—is a basic and necessary liberty. The appropriate role of the government in contract-law disputes is to hold parties to their word, not to enforce its own policy preferences.

The New Jersey Supreme Court recently struck a blow against that basic freedom, however, in ruling that clearly worded arbitration provisions—one of the most common parts of consumer contracts—are unenforceable unless the parties comply with multiple superfluous formalities. The case arose when Patricia Atalese retained a law firm, U.S. Legal Services Group, to negotiate with creditors on her behalf. Atalese signed a retainer agreement with a standard arbitration provision: she checked a box that unambiguously indicated that she read and understood that all disputes would be settled via arbitration. Then, after a dispute over legal fees, Atalese disregarded the arbitration agreement and filed a lawsuit in state court.

The trial court dismissed her complaint and compelled arbitration, a ruling that was affirmed by the intermediate appellate court. But instead of letting that decision stand, the New Jersey Supreme Court broke from years of tradition and federal precedent, finding the arbitration provision unenforceable because it lacked certain magic words stating, in addition to all disputes being resolved by arbitration, that the parties were waiving their right to a civil jury trial.

Cato, joined by the National Federation of Independent Business, has filed an amicus brief urging the U.S. Supreme Court to review the case. We make three key points. First, the New Jersey court’s proposed requirement—that contracts with an arbitration provision include belt-and-suspenders-and-drawstring language regarding jury-trial waiver—is redundant. Agreeing to submit a dispute to an impartial arbitrator instead of going through the expense of litigation is the very essence of an arbitration agreement.

Second, the ruling is contrary to federal law. The Federal Arbitration Act, which has been in place for nearly 100 years, affords arbitration agreements certain protections. Specifically, it demands from the states a certain amount of even-handedness: states can’t nullify or refuse to enforce such agreements for reasons other than those that would invalidate any contractual provision (e.g., coercion, fraud, illegal subject matter, etc.). Since New Jersey law doesn’t require parties to state clearly that they understand the legal consequences of other contractual provisions (since a signature is accepted as evidence that the agreement was read and understood), applying that additional standard to arbitration agreements alone would violate federal law.

Finally, this ruling threatens to upset the business world by calling into question the validity of millions of contracts to which parties have mutually and unambiguously agreed. The vast majority of arbitration provisions, including many of those featured in previous Supreme Court cases, don’t include the explicit language mandated by the New Jersey ruling.

In short, the value of contracts lies in the certainty they create while enabling mutual gains. By creating substantial uncertainty about the enforceability of tens of thousands of arbitration agreements, the New Jersey Supreme Court will force businesses, litigants, and the taxpayers who fund the court system to waste time and money litigating disputes that would be more appropriately resolved by arbitration—as all the parties agreed in the first place.

The Supreme Court will decide whether to take up U.S. Legal Services Group v. Atalese this spring.

Steve H. Hanke

The financial press has become inundated with the word “austerity.” Since Greece’s left-wing Syriza proclaimed an “anti-austerity revolution,” strong adjectives like “incredibly savage” have come to precede that overused word.

What was once a good word has become a weaselword. That, according to the Oxford Dictionary, is “a word that destroys the force of a statement, as a weasel ruins an egg by sucking out its contents.” How could that be?

Well, in the hands of an unscrupulous or uninformed writer, the inversion of a perfectly good word into a weaselword is an easy task. All one has to do is leave the meaning of a word undefined or vague, rendering the word’s meaning so obscure as to make it non-operational. With that, a meaningless weaselword is created.

In its current usage, the word austerity is so obscure as to evoke Fritz Machlup’s paraphrase of Goethe’s line from Faust: To conceal ignorance, Mephistopheles counsels a student to misuse words. Such is the story and fate of austerity.

Mark A. Calabria

Debate over whether to subject the Federal Reserve to a policy audit has occasionally focused on the size and composition of the Fed’s balance sheet. While I don’t see this issue as central to the merits of an audit, it has given rise to a considerable amount of smug posturing. Let’s step beyond the posturing and give these questions some of the attention they deserve.

First, the facts. The Fed’s balance sheet has ballooned over the last few years to about $4.5 trillion. And yes, the Fed discloses as much. No argument there. The Fed, like most central banks, has traditionally conducted its open-market operations in the “short end” of the market. The various rounds of quantitative easing have changed that. For instance, the vast majority of its holdings of Fannie and Freddie mortgage-backed securities ($1.7 trillion) have an average maturity of well over 10 years. Similarly, the Fed’s stock of Treasuries has long maturities, with about a fourth of those holdings in excess of 10 years.

Now the leverage question. We all get that the Fed cannot go “bankrupt” like Lehman. But that’s because “bankrupt” is a legal condition, and one from which the Fed has been exempted. Just like Fannie and Freddie cannot go “bankrupt” (they are considered legally outside the bankruptcy code). The eminent economic historian Barry Eichengreen tells us the Fed’s leverage doesn’t matter, as “the central bank can simply ask the government to replenish its capital, much like when a government covers the losses of its national post office.” Some of us would say that’s a problem, not a solution, just like it is with the Post Office.

Others would suggest the Fed’s leverage doesn’t matter because “the Fed creates money.” Again, that misses the point. Any losses could be covered by printing money, but isn’t that inflationary? And that, of course, is just another form of taxation. So it seems Senator Paul’s primary point, that the Fed’s balance sheet exposes the taxpayer to some risk, has actually been supported, not discredited, by these supposed rebuttals.

Let’s get to another issue, the maturity of the Fed’s assets. There’s a good reason central banks generally stay in the short end of the market. It avoids taking on any interest rate risk.  When rates go up, bond values fall. Yes the Fed can avoid recognizing those losses by simply not selling those assets. But that creates problems of its own. If we do see inflation, normally the Fed would sell assets to drain liquidity from the market. But would the Fed be willing to sell assets at a loss? At the very least there would be some reluctance. And yes they could cover those losses by printing money, but that’s hardly helpful if the Fed finds itself in a situation of rising prices.
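The rates-up, prices-down point above is simple present-value arithmetic, sketched below with invented round numbers (a $1,000, 10-year, 2% coupon bond): discount the fixed cash flows at the prevailing market yield, and a rise in that yield shrinks the price.

```python
# Illustration of interest-rate risk: a bond's price is the present value
# of its fixed cash flows, so it falls when market yields rise. All
# figures are invented round numbers for the example.

def bond_price(face: float, coupon_rate: float, yield_rate: float, years: int) -> float:
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# A 10-year, 2% coupon bond is worth par when market yields are also 2%:
print(round(bond_price(1000, 0.02, 0.02, 10), 2))  # 1000.0
# If market yields rise to 4%, the same bond is worth markedly less:
print(round(bond_price(1000, 0.02, 0.04, 10), 2))
```

A central bank holding long-maturity paper bought at low yields faces exactly this mark-to-market loss if it must sell after rates rise, which is the exit-strategy dilemma described above.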

The point here is that the Fed’s balance sheet does raise tough questions about its exit strategy. Perhaps the economy will remain soft for years and the Fed can exit gracefully. Perhaps not. I raised this possibility before Congress a year ago. I don’t know anyone with a crystal ball on these issues. But one thing is certain: this is a debate we should be having. It’s the “nothing to see here, move along” crowd that poses the true risk to our economy.

Juan Carlos Hidalgo

Nicaragua’s plan to build an Interoceanic Canal that would rival the Panama Canal could be a major environmental disaster if it goes forward. That’s the assessment of Axel Meyer and Jorge Huete-Pérez, two scientists familiar with the project, in a recent article in Nature. Disturbingly, the authors point out,

No economic or environmental feasibility studies have yet been revealed to the public. Nicaragua has not solicited its own environmental impact assessment and will rely instead on a study commissioned by the HKND [The Hong Kong-based company that has the concession to build the canal]. The company has no obligation to reveal the results to the Nicaraguan public.

In recent weeks we have seen similar opinions aired in the Washington Post, Wired, The Economist, and other media. In their article, Meyer and Huete-Pérez explain that for the $50-billion project (more than four times Nicaragua’s GDP), “The excavation of hundreds of kilometres from coast to coast, traversing Lake Nicaragua, the largest drinking-water reservoir in the region, will destroy around 400,000 hectares of rainforests and wetlands.” So far, the Nicaraguan government has remained mum about the environmental impact of the project. Daniel Ortega, the country’s president, only said last year that “some trees have to be removed.”

Interestingly, despite this potential massive threat to one of the most pristine environmental reservoirs in the Americas, none of the leading international environmental organizations, such as Greenpeace, Friends of the Earth or the Sierra Club, has issued a single statement about the Nicaragua Canal.

We know for a fact that this is not out of lack of interest in Central America. After all, some of these organizations were pretty vocal in their opposition to CAFTA. Why isn’t the Nicaragua Canal proposal commanding the attention of these international environmental groups?

Paul C. "Chip" Knappenberger and Patrick J. Michaels

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

In this week’s You Ought to Have a Look, we’re going to catch up on some new climate science that hasn’t gotten the attention it deserves—for reasons soon to be obvious.

First up is a new study comparing climate model projections with observed changes in the sea ice extent around Antarctica.

While everyone seems to talk about the decline in the sea ice in the Northern Hemisphere, considerably less discussion focuses on the increase in sea ice in the Southern Hemisphere. If it is mentioned at all, it is usually quickly followed by something like “but this doesn’t disprove global warming, it is consistent with it.”

But, even the folks delivering these lines probably realize that the latter bit is a stretch.

In fact, the IPCC and others have been trying to downplay this inconvenient truth ever since folks first started to note the increase. And the excuses are getting more involved.

A new study pretty much exposes the emperor.

A team of three Chinese scientists led by Qi Shu compared the observed trends in Southern Hemisphere sea ice extent (SIE) with those projected by the collection of climate models used to forecast future climate changes by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). In a nutshell, they found that increases in sea ice around Antarctica were not consistent with human-caused climate change at all—or at least not with how climate models foresee it taking place. Figure 1 shows the extent of the mismatch—rather shocking, really.

 

Figure 1. Comparison of observed (blue) and mean climate model projected (red) changes in Antarctic sea ice extent  (from Shu et al., 2015).

Shu et al. write:

The linear trend of satellite-observed Antarctic SIE is 1.29 (±0.57) × 10⁵ km² decade⁻¹; only about 1/7 [of climate] models show increasing trends, and the linear trend of [the multi-model mean] is negative with the value of −3.36 (±0.15) × 10⁵ km² decade⁻¹.
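A decadal trend like the ones Shu et al. report is just an ordinary least-squares slope of sea ice extent against time, rescaled to a per-decade figure. A minimal sketch, using synthetic data (not Shu et al.’s actual series):

```python
# Illustrative sketch with synthetic data: a "trend per decade" is the
# least-squares slope of SIE against year, multiplied by 10.
def decadal_trend(years, sie):
    """Ordinary least-squares slope of sie vs. years, scaled to per-decade."""
    n = len(years)
    mean_y = sum(years) / n
    mean_s = sum(sie) / n
    cov = sum((y - mean_y) * (s - mean_s) for y, s in zip(years, sie))
    var = sum((y - mean_y) ** 2 for y in years)
    return cov / var * 10  # SIE units per decade

# A series rising by 0.013 units per year has a trend of 0.13 units per decade.
years = list(range(1979, 2014))
sie = [12.0 + 0.013 * (y - 1979) for y in years]
trend = decadal_trend(years, sie)  # 0.13
```

The observed-versus-modeled disagreement in the paper is a disagreement in the sign of exactly this quantity.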

This should pretty much quell talk that everything climate is proceeding according to plan.

For all the details, be sure to check out the full paper (which is open access).

 

The next paper worth having a look at is one that examines the impact of urbanization on thunderstorm development in the southeastern U.S.

Recall that a global warming talking point is greenhouse gas-induced climate change will result in more episodes of intense precipitation.

As with all manner of extreme weather events, the association is far from being so simple. All sorts of confounding factors impact the observed changes in precipitation and make disentangling and identifying any impact from anthropogenic global warming nearly impossible. We have discussed this previously, and this new research provides more such evidence.

A team of researchers led by Alex Haberlie developed a method of locating “isolated convection initiation” (ICI) events from historic radar data. ICI events are basically thunderstorm kickstarters. Examining 17 years of data for the region around Atlanta, the team found:

Results reveal that ICI events occur more often over the urban area compared to its surrounding rural counterparts, confirming that anthropogenic-induced changes in land cover in moist tropical environments lead to more initiation events, resulting thunderstorms and affiliated hazards over the developed area.

In other words, pointing to increases in thunderstorms and declaring greenhouse gas emissions the culprit is overly simplistic and potentially misleading.

The full details are available here, although they are behind a paywall. But even a read of the abstract will prove enlightening. Turns out climate change is not so simple.

 

And finally, as the number of people shivering from cold in the Eastern U.S. increases, so, too, does the effort to link the cold to global warming—mostly through feedback from declines in Arctic sea ice.

While we have been over this before—the linkages are less than robust—we’re always happy to see new research on the topic.

Just published is a paper by University of Washington’s Dennis Hartmann that examines the causes behind last winter’s brutal cold in the eastern U.S. Instead of a link to sea ice and high-latitude conditions, he found that tropical sea surface temperature (SST) anomalies in the Pacific Ocean were a driving force behind the cold air outbreaks last winter. Hartmann further notes that, as of his writing the paper (in January 2015), the same conditions present last winter had persisted into this one. The current situation bears this out.

This passage from Hartmann’s paper bears repeating and is worth keeping in mind:

This result is consistent with a long history of observational and modeling work indicating that SST anomalies in the extratropics are strongly driven by atmospheric circulation anomalies, while SST anomalies in the tropics can strongly force the atmospheric circulation.

In other words, while extratropical circulation drives our daily weather, tropical sea surface temperature patterns drive the circulation. Thus, don’t look to the Arctic to explain winter’s weather, but rather the Tropics. Hopefully, John Holdren etc. will take this to heart.

Dalibor Rohac

In a recent article for the Weekly Standard, I noted that freedom in Hungary was under attack. In the past several years, Prime Minister Viktor Orban has tightened his control over the media, harassed civil society organizations, politicized the judiciary, nationalized $14 billion worth of assets from private pension funds, and populated the board of Hungary’s central bank with appointees of the ruling party, Fidesz. Mr Orban – who was once seen as a pro-market, liberal reformer – has also become Vladimir Putin’s most reliable partner in the EU, having hosted him for a working visit just last week.

But not all Hungarians are applauding as the country descends deeper into what could be called ‘goulash authoritarianism’. In fact, the parliamentary by-election in the county of Veszprem on Sunday has brought a very encouraging piece of news. A Fidesz candidate was defeated by an independent candidate, Zoltan Kesz, endorsed by a coalition of left-of-center parties.

“The left-right divide has been turned on its head in Hungary; the relevant distinction here is between the pro-Western and pro-Eastern political parties,” says Mr Kesz, referring to Mr Orban’s geopolitical allegiances. It should also be said that Mr Kesz is no ordinary politician. An activist and English teacher, he is the founder of Hungary’s premier libertarian think-tank, the Free Market Foundation. Interestingly, given the toxic political and ideological environment in Hungary, the organization has become known as the leading voice against racism in the country, and much of its effort has been aimed at countering the rise of political forces such as the xenophobic Jobbik party, currently the third-largest political force in the country.

Mr Kesz’ election is significant because it brings an end to the narrow supermajority that Fidesz had enjoyed in the Hungarian parliament since the election last year. In 2013, the parliament passed a number of controversial constitutional amendments, and many feared that the unchecked dominance of Fidesz could herald the demise of Hungarian democracy. While Mr Kesz’ electoral victory assuages those fears somewhat, he will be fighting an uphill battle to get his country back on track.

Ilya Shapiro

Vietnam vet Robert Rosebrock is 72 years old, but he’s still got enough fight in him to stand up for what he believes in. The Veterans Administration of Greater Los Angeles (VAGLA) and the U.S. Court of Appeals for the Ninth Circuit would prefer his fight to be in vain.

Rosebrock’s fight here is a protest against VAGLA’s use of a parcel of land, deeded to the U.S. government for the care of homeless veterans, for other purposes. For example, VAGLA leased parts of the land to a private school, an entertainment company, and a soccer club, and occasionally used it for hosting events. Every Sunday for 66 weeks, Rosebrock hung at least one and as many as 30 U.S. flags from a border fence on the VA property that he believed was being misused.

After seeing a celebrity gala event on the property one Sunday afternoon, Rosebrock started hanging the flags with the stars down, signifying dire distress to life and property—the distress faced by LA’s homeless veterans. At this point, VAGLA started enforcing its policy against “displaying of placards or posting of materials on bulletin boards or elsewhere on [VA] property.” When Rosebrock continued, believing his First Amendment rights would protect him, he was issued six criminal citations. He then stopped hanging his flag upside down, and was thereafter allowed to hang it right-side-up—a clear, if unusual, example of viewpoint-based speech discrimination that violates the First Amendment.

That part of his case was a slam-dunk; the difficulty came in making the violation matter. Rosebrock turned to the courts asking two things: an order that would stop VAGLA from discriminating against him in the future and one that would allow him to display his flag stars-down for an amount of time equal to how long he had been denied the right to display it. The district court found that because the VAGLA associate director sent an email to the VA police that the “no signs” regulation should be enforced precisely, Rosebrock’s requested remedies were moot—meaning, basically, that because VAGLA said it would play by the rules, the Court wouldn’t order them to. This is known in legal circles as “voluntary cessation”.

Not long after the district court’s dismissal, the VA police disregarded the email and allowed Iraq War veterans to protest in violation of the regulation. Rosebrock raised this fact when he appealed to the Ninth Circuit, but the Ninth Circuit affirmed the ruling without even addressing the continued discriminatory enforcement. To paraphrase, the appellate panel held that although there is a great burden for parties seeking to prove mootness through voluntary cessation, we should trust that the government will do what it says.

Robert Rosebrock said, “no thanks,” and is petitioning the Supreme Court to hear his case. Cato agrees, and has joined the Pacific Legal Foundation and Institute for Justice on a brief supporting the petition. We point out that the federal appeals courts are split on this mootness/voluntary-cessation issue, that it’s an issue that arises all the time, and that there’s no reason government entities should be given the benefit of the doubt while everyone else has to prove “it is absolutely clear that the allegedly wrongful behavior could not reasonably be expected to recur.”

The Supreme Court should take this case, Rosebrock v. Hoffman, and tell the lower courts what we know, what Robert Rosebrock knows, and what everyone else in the country should already know by now: it doesn’t always make sense to take the government at its word.

Cato legal associate Julio Colomba contributed to this blogpost.

Walter Olson

We’ve reported earlier in this space on how the Obama administration’s Equal Employment Opportunity Commission (EEOC) keeps getting slapped down by federal judges over what we called its “long-shot lawsuits and activist legal positions.” Now the Fourth Circuit has weighed in on a high-profile employment screening case from Maryland – and it too has given the EEOC a good thwacking, in this case over “pervasive errors and utterly unreliable analysis” in the expert testimony it marshaled to show the employer’s liability. Those are the words of a three-judge panel consisting of Judge Roger Gregory, originally appointed to the court by Bill Clinton before being re-appointed by his successor George W. Bush, joined by Obama appointee Albert Diaz and GWB appointee G. Steven Agee. 

The case arose from the EEOC’s much-publicized initiative of going after employers that use criminal background checks in hiring, which the agency insists often have improper disparate impact on minority applicants and have not been validated as necessary for business reasons. It sued the Freeman Cos., a provider of convention and exposition services, over its screening methods, but Freeman won after district court judge Roger Titus shredded the EEOC’s proffered expert evidence as “laughable,” “unreliable,” and “mind-boggling.” The EEOC appealed to the Fourth Circuit. 

If it was expecting vindication there, it was very wrong. Agreeing with Judge Titus, Judge Gregory cited the “pervasive errors and utterly unreliable analysis” of the commission’s expert report, by psychologist Kevin Murphy. “The sheer number of mistakes and omissions in Murphy’s analysis renders it ‘outside the range where experts might reasonably differ,’” which meant it could not have been an abuse of discretion for Judge Titus to exclude it. 

Strong language, yet Judge Agee chose to write a separate concurrence “to address my concern with the EEOC’s disappointing litigation conduct.” Noting a pattern in multiple cases, Agee faulted the commission’s lawyers for circling the wagons on behalf of its statistical methods despite repeated judicial hints that it needed to strengthen its quality control. “Despite Murphy’s record of slipshod work, faulty analysis, and statistical sleight of hand, the EEOC continues on appeal to defend his testimony.” If the agency doesn’t watch out, exasperated judges might start imposing more sanctions against it. 

Incidentally, as a counterpoint to the EEOC’s bullheadedness, the U.S. Commission on Civil Rights a year back did a briefing program on employee screening and criminal background checks that tries to include an actual balance of views. You can read and download it here.

Charles Hughes

Fresh off another victory lap last week, Obamacare supporters awoke last Friday to the news that the government had given nearly one million exchange enrollees incorrect tax forms that could significantly affect their tax returns. Some 800,000 enrollees in the federal exchange and roughly 100,000 in California were given the wrong forms, called 1095-As, which provide a monthly account of the premium subsidies exchange enrollees receive. The government uses that information to determine that the subsidy amounts are correct (although a pending Supreme Court case raises questions about the legality of any subsidies offered through the federal exchange). If enrollees use the wrong information when filing their taxes, it becomes impossible for the government to verify that they received the right amount of subsidies.

Government officials will now try to remedy their mistake by sending out new forms to the affected customers. These tax documents contained the wrong price for the ‘benchmark plan’, the second-lowest-cost silver plan available, which is used to calculate the exchange subsidy amount. A post on the HealthCare.gov blog explains that the erroneous forms included the benchmark plan premiums for 2015 instead of 2014, which led to the wrong subsidy amount being displayed on the forms people use to file their taxes. The errors are not confined to one area; incorrect forms were sent throughout the country, making it harder for enrollees to know if they are affected. Those given the wrong form will be able to access a corrected one sometime in early March, according to the report. About 50,000 people have already filed their taxes using the incorrect tax information. Officials are now in the process of trying to contact this group, and they will likely have to resubmit their tax returns. Enrollees who already filed will not find much help at HealthCare.gov for now, which only reads: “Additional information will be provided shortly.” Overall, nearly one million exchange enrollees could see delays in getting their income tax refunds, or find that the size of their refund has changed due to corrections in the tax form. Many of these people depend on this tax refund, and unanticipated problems could have significant adverse consequences.
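Why the wrong benchmark premium matters can be shown with a quick sketch of the subsidy structure: the premium tax credit is essentially the benchmark silver-plan premium minus the income-based contribution the enrollee is expected to make. All dollar figures below are hypothetical, chosen only to illustrate the mechanics:

```python
# Illustrative sketch of the subsidy structure (all amounts hypothetical).
def subsidy(benchmark_premium, expected_contribution):
    """Annual premium tax credit: benchmark silver-plan premium minus the
    enrollee's expected income-based contribution, floored at zero."""
    return max(benchmark_premium - expected_contribution, 0)

contribution = 2500            # assumed income-based contribution
benchmark_2014 = 4000          # hypothetical correct 2014 benchmark premium
benchmark_2015 = 4300          # hypothetical 2015 premium mistakenly used

correct = subsidy(benchmark_2014, contribution)  # 1500
wrong = subsidy(benchmark_2015, contribution)    # 1800
```

With these assumed numbers, a form built on the 2015 benchmark overstates the subsidy by $300, which is exactly the kind of discrepancy that forces refiled returns and adjusted refunds.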

Filing taxes is already a cumbersome and aggravating process. Obamacare has made it even more arduous, as people have to attest to having health insurance coverage and report how much they receive in exchange subsidies. Even worse, nearly one in five HealthCare.gov customers was sent the wrong form, and these people will have to delay filing their taxes, or even resubmit them. While this blunder will not cause the law to spiral out of control, it does reveal the potential for ongoing problems with its implementation. Following the news, HealthCare.gov CEO Kevin Counihan told reporters, “We’re not doing any victory laps.” Other Obamacare supporters should take this lesson to heart.

Doug Bandow

American foreign policy is a bipartisan failure. Must the U.S. intervene everywhere, all the time, irrespective of consequences?

No matter how disastrous the outcome, promiscuous interventionists insist that the idea was sound. Any problems obviously result from execution, a matter of doing too little:  too few troops engaged, too few foreigners killed, too few nations bombed, too few societies transformed, too few countries occupied, too few years involved, too few dollars spent.

As new conflicts rage across the Middle East, the interventionist caucus’ dismal record has become increasingly embarrassing. Anne-Marie Slaughter, a cheerleader for war in Libya, recently defended her actions after being chided on Twitter for being a war-monger. She had authored a celebratory Financial Times article entitled “Why Libya skeptics were proved badly wrong.” Alas, Slaughter’s Mediterranean adventure looks increasingly foolish.

Slightly more abashed is Samantha Power, one of the Obama administration’s chief Sirens of War. She recently pleaded with the public not to let constant failure get in the way of future wars:  “I think there is too much of, ‘Oh, look, this is what intervention has wrought’ … one has to be careful about overdrawing lessons.” Just because the policy of constant war had been a constant bust, people shouldn’t be more skeptical about a military “solution” for future international problems.

President Barack Obama also appears to be a bit embarrassed by his behavior. The Nobel Peace Prize winner has been as active militarily as his much-maligned predecessor.

Yet in 2013 he admitted that “I was elected to end wars, not to start them.” He sounded like he was trying to convince himself when he added:  “I’ve spent the last four and a half years doing everything I can to reduce our reliance on military power.”

The two parties usually attempt to one-up each other when it comes to reckless overseas intervention. Yet Uncle Sam has demonstrated that he possesses the reverse Midas Touch. Whatever he touches turns to mayhem.

In the Balkans the U.S. replaced ethnic cleansing with ethnic cleansing and set a precedent for Russian intervention in Georgia and Ukraine. In Afghanistan the U.S. rightly defenestrated the Taliban but then spent 13 years unsuccessfully attempting to remake that tribal nation.

Invading Iraq to destroy nonexistent WMDs cost the lives of 4500 Americans and 200,000 Iraqis, wrecked Iraqi society, loosed radical furies now embodied in the Islamic State, and empowered Iran. Bombing Libya prolonged a low-tech civil war killing thousands, released weapons throughout the region, triggered a prolonged power struggle, and offered another home for ISIL killers.

As I point out on Forbes online:  “Not only has virtually every bombing, invasion, occupation, and other interference made problems worse. Almost every new intervention is an attempt to redress problems created by previous U.S. actions. And every new military step is likely, indeed, almost guaranteed, to create even bigger new problems.”

Yet virtually never do foreign policy practitioners admit that things have not gone well. Most of official Washington simply takes the Samantha Power position: “What, me worry?”

There may have been a mistake or two, but one certainly wouldn’t want to “overdraw” a lesson from these multiple and constant failures. No responsible policymaker would want to admit that even one foreign problem was not America’s responsibility.

Washington’s elite might disagree about details, but believes with absolute certainty that Americans should do everything:  Fight every war, remake every society, enter every conflict, pay every debt, defeat every adversary, solve every problem, and ignore every criticism. Unfortunately, over the last two decades this approach has proved to be an abysmal disaster.

There’s an equally simple alternative. Indeed, the president came up with it: “don’t do stupid stuff.” Too bad he failed to practice his own professed policy.

Washington should stop doing stupid things. But only the American people can make that happen. They must start electing leaders committed to not doing “stupid” stuff.

Simon Lester

Some folks over at Heritage have a new Issues Brief in which they argue for including an Investor State Dispute Settlement (ISDS) mechanism in the U.S.-EU trade deal being negotiated right now.  In a nutshell, ISDS lets foreign investors sue host country governments in an international tribunal when they feel certain of their rights have been infringed.

I’ve been critical of ISDS. I do see the potential that such international rules have for protecting property rights, but I worry about other aspects of the rules. One issue is that these rules protect the rights only of foreign investors. Take Venezuela as an example: there’s an assumption that the courts there can’t help much with protecting rights. To some extent, ISDS is a response to that. So, if Exxon feels its operations there have been badly treated by the Venezuelan government, it can use the ISDS mechanism to have recourse to an international tribunal. However, if a small Venezuelan dry cleaner is being subjected to governmental abuse, it’s just out of luck. To me, that seems problematic. Focusing on the wealthy seems like a fundamentally unbalanced way to protect property rights.

But beyond that, these investment obligations are not limited to protecting property rights.  There are much broader provisions that allow foreign investors to sue for, well, lots of things, and perhaps just about anything.  Here’s an example from a Canada-Barbados investment treaty:

… Mr. Peter Allard, Canadian owner of the Graeme Hall Nature Sanctuary, contends that the Government of Barbados has violated its international obligations by refusing to enforce its environmental laws.

Mr. Allard acquired the land for the Sanctuary in the mid-1990s and subsequently developed it into an eco-tourism facility. In the notice of dispute, Mr. Allard claims to have taken numerous steps to contribute to the sustainability of the Sanctuary only to have such efforts thwarted by the acts and omissions of Barbados.

Mr. Allard asserts that Barbados’ acts and omissions have severely damaged that natural ecosystem relied upon to attract tourists to the Sanctuary. Consequently, Mr. Allard contends that Barbados failed to provide his investment full protection and security and fair and equitable treatment in accordance with the Canada-Barbados BIT.

With respect to Barbados’ omissions to protect the Sanctuary, Mr. Allard argues that Barbados has, among other things, failed to: (i) prevent the repeated discharge of raw sewage into the Sanctuary wetlands, (ii) investigate or prosecute sources of runoff of grease, oil, pesticides, and herbicides from neighboring areas, and (iii) investigate or prosecute poachers that have threatened the wildlife within the Sanctuary.

To sum all that up: A Canadian who invested in Barbados is suing the Barbados government under an investment treaty for failure to protect its environment in accordance with its domestic law.

Will this claim succeed?  It’s not clear.  But what is clear is that the scope of these obligations is extremely broad and vague, enough so that it’s worth law firms’ time and money to explore the boundaries.  This is why I wrote that these agreements are more about litigation than liberalization.

Now, it may be that, in this particular case, the government of Barbados was behaving badly (“discharge of raw sewage” is rarely a good thing). I’m not familiar with the facts of the case, so I can’t say for sure who’s right and who’s wrong here.  But the larger issue is, what exactly is the scope for when foreign investors can sue governments for failing to protect the environment? Among his claims, the investor says he has not been provided with “fair and equitable treatment.” That’s a potentially broad obligation, which can be used in a lot of ways.

It’s not too hard to imagine, say, a claim that a government’s failure to take action against climate change was a violation. If you believe climate change needs to be addressed, you might cite various international reports on climate change, and argue that the impact of climate change on your business is “unfair,” and the government needs to do something.  (To further illustrate the broadness of these obligations, if, in the alternative, you were skeptical of climate change, you might point to other climate data, and argue that actions that governments have taken against climate change (e.g., cap and trade) have harmed your business in a way that is “unfair.”)

The environment isn’t my area of expertise, so I’ll leave it to others to decide what our environmental problems are and what we should do about them. But it seems to me that an international law obligation that allows foreign investors to sue governments on the basis that they have not protected the environment, or have protected it too much, is kind of a big deal, and something that we should understand the scope of a little better than we do now. (And keep in mind, there is nothing special about the environment here – all domestic policy areas are in play). These issues are being litigated in international courts, and we should have a better sense of what that means before we extend ISDS further through new trade agreements.

Ilya Shapiro

SWAT teams—police units equipped with military-style weaponry and trained to deal with the most dangerous of criminals—were first created when police realized that, while patrolmen equipped with revolvers and batons are generally able to keep the peace, they lack the resources and skills to deal with riots, urban terrorism, and other exotic crime. Since then, SWAT-style paramilitary units have been deployed to rescue hostages, end bank robberies, secure campuses after school shootings, and, in Wisconsin, to raid the houses and offices of people the state believed to be guilty of exercising their rights under the First Amendment.

That’s right: in the last few years, SWAT raids were part of a wide-ranging (politically motivated) investigation into whether certain unknown individuals—“John Does”—were violating campaign finance laws. Some of these John Does objected and challenged the validity of the subpoenas requiring them to turn over their records to the district attorney’s office.

The state trial court agreed and quashed the subpoena, finding that the state had no reason to believe that any violation of state law had occurred, or that the records taken would contain relevant evidence. Unsatisfied, the DA appealed the judge’s order. Rather than continuing this battle through the state courts, these John Does sued the state officials responsible for the investigation in federal court. They claimed that the investigation was a speech-chilling violation of their First Amendment rights and asked for a federal injunction preventing the state from pursuing the investigation.

The state argued that a federal law—the Anti-Injunction Act—prevents federal courts from ordering states to abandon in-progress criminal cases. Nevertheless, the district court issued an order stopping the SWAT-style fishing expedition, relying on a series of Supreme Court cases holding that the AIA doesn’t apply where the prosecution is known by the state to be baseless, is part of a campaign of harassment, or involves the enforcement of a blatantly unconstitutional law. The judge concluded that Wisconsin’s campaign-finance laws, as well as the methods used to enforce them, violated the First and Fourteenth Amendments.

The U.S. Court of Appeals for the Seventh Circuit reversed the district court’s order, however, concluding that since the state campaign-finance laws had not yet been declared unconstitutional—and their constitutionality was not directly before the district court—the AIA exceptions didn’t apply and the injunction was improper. In short, Wisconsin will be allowed to continue its investigation, the constitutionality of which is immune from legal challenge in federal court. In effect, the Seventh Circuit held that under the AIA, the only time defendants can challenge the constitutionality of a state’s criminal laws is “when no state prosecution [is] pending.”

Cato has filed an amicus brief urging the Supreme Court to hear the plaintiffs’ appeal. We argue that regardless of whether Wisconsin’s election laws are unconstitutional, there was sufficient evidence suggesting that the sole purpose of the investigation was to harass the plaintiffs and discourage them (and others) from advocating a particular legislative agenda. Because the Supreme Court’s interpretation of the AIA allows federal judges to halt state enforcement of undeniably constitutional laws where there is evidence that a prosecution is being conducted for an improper purpose (like silencing political dissent), or in a manner that constitutes harassment, the district court had the power to issue an injunction regardless of whether or not Wisconsin’s campaign-finance laws are constitutional.

The fact that the constitutionality of those laws is in doubt—it happens to be one of the most heavily contested questions currently before the courts—only makes the district court’s decision all the more proper and the Seventh Circuit’s all the more worrying. If allowed to stand, the long-term effect of the Seventh Circuit’s ruling would be to give prosecutors carte blanche to do exactly what Wisconsin’s politically inspired prosecutors did: “investigate” perceived political threats for the very purpose of suppressing political speech. So long as arrests are never made and claims are never brought, the prosecutors are in the clear and no federal court can do anything about it. That can’t be the law.

The Supreme Court will decide in the next couple of months whether to take the case of O’Keefe v. Chisholm.

Walter Olson

In connection with his new book The Libertarian Mind, my colleague David Boaz wrote a piece last week on how the struggle to abolish slavery was a defining episode for classical liberals and proto-libertarians of the past, indeed arguably their greatest accomplishment. In America, libertarian history and black history cannot be separated. 

We also know that after the end of slavery, the racial subjugation of American blacks did not end, but took new forms. As a new generation of historians has helped the nation remember, the “Black Codes” and Jim Crow laws that spread across the South after Reconstruction were part of an interlocking array of practices that at its worst succeeded in recreating “slavery by another name.” Some of those laws were explicitly racial–and “segregation” is wholly inadequate as a description of the racial subordination they enforced–but others worked through theoretically race-neutral legal institutions, including convict-leasing combined with steep penalties for minor or pretended offenses, debt peonage for tenant farmers, and laws prohibiting “vagrancy” (i.e., unemployment) or walking away from a labor contract, among other offenses.  

The other main branch of legalized racial oppression after the Civil War was, if anything, even more difficult yet necessary to confront: sanctioned violence outside the machinery of the state, symbolized by the practice of lynching. Last week the Equal Justice Initiative released a report (summary here) that was written up in the New York Times and has drawn attention from commentators including conservative Rod Dreher.

The details–be warned that they are gruesome in the extreme–include burnings alive and public tortures and mutilations carried out before crowds of hundreds, even thousands, of persons. “The white men, women, and children present watched the horrific murders while enjoying deviled eggs, lemonade, and whiskey in a picnic-like atmosphere.”

Contrary to the notion of mob violence as something arising from a moment of fury, lynchings were often planned well in advance and even announced in newspapers beforehand. Contrary to the image of hooded and masked anonymous assailants, the participants often posed for photographs that were widely circulated and yet resulted in no legal consequences. And contrary to the portrayal of lynching as an extrajudicial means of ending the lives of lawbreakers who would have been punished in due course anyway, the report makes clear that a large number of lynchings were carried out over “minor social transgressions”–as punishment for attempts to speak out or organize against perceived injustice, and in general to instill a sense of terror and subordination in black populations. (There were even lynchings to punish blacks whose successful businesses were seen as “stealing” white merchants’ business.)

True, the report can be faulted on various points, as when it summarizes the post-Civil War experience by saying that “the nation did nothing to address the narrative of racial difference.” In fact, the Radical Republicans pursued a great battle for more thorough legal equality that resulted in some lasting achievements, notably the Reconstruction Amendments. But the wider message is real: for those who advocate the rule of law, lynching is among the darkest episodes in American history. 

And in those dark hours, classical liberals and early libertarians were on the right side. Few were more active than one of my youthful heroes, H.L. Mencken, whose crusade in the Baltimore Sun against lynching on Maryland’s Eastern Shore is well recounted by Marion Elizabeth Rodgers in this wonderful piece. Many of Mencken’s contemporaries equivocated:

When a second lynching occurred in Princess Anne, Maryland in 1933, President Franklin D. Roosevelt’s refusal to speak out on the atrocity was a matter of discussion throughout the country. Determined that this outrage not be dismissed, Mencken joined forces with Clarence Mitchell of the NAACP to promote the Costigan-Wagner Anti-Lynching Bill that would make lynching a capital offense. Mencken’s impassioned testimony in support of the bill galvanized senators on the committee. Predictably, Roosevelt refused to challenge the Southern leadership of his party, and the bill died.

On a happier note, National Public Radio and others have drawn attention to this long, powerful speech by U.S. District Judge Carlton Reeves of Mississippi. He delivered it in sentencing three defendants convicted in the sort of racial-murder episode for which Mississippi was once internationally notorious, but which has, blessedly, become exceedingly uncommon anywhere in the United States. “Today we take another step away from Mississippi’s tortured past … we move farther away from the abyss.” Judge Reeves’ speech is both a triumph of the rule of law over barbarism and, in what I think is not coincidence, a triumph of reason over emotion. You can read it here.

Paul C. "Chip" Knappenberger

A draft set of new dietary guidelines released yesterday by the U.S. Department of Health and Human Services (HHS) and the Department of Agriculture (USDA) was backed by a 571-page scientific report from the 2015 Dietary Guideline Advisory Committee (DGAC) that was assembled by the Obama administration.

The Washington Post reports that, for the first time ever, the Dietary Guidelines took into consideration the environmental impacts of food production in recommending that Americans decrease their consumption of red meat and increase their intake of plant-based food.

This is from the DGAC’s Executive Summary (emphasis added):

The major findings regarding sustainable diets were that a diet higher in plant-based foods, such as vegetables, fruits, whole grains, legumes, nuts, and seeds, and lower in calories and animal based foods is more health promoting and is associated with less environmental impact than is the current U.S. diet. This pattern of eating can be achieved through a variety of dietary patterns, including the Healthy U.S.-style Pattern, the Healthy Mediterranean-style Pattern, and the Healthy Vegetarian Pattern. All of these dietary patterns are aligned with lower environmental impacts and provide options that can be adopted by the U.S. population. Current evidence shows that the average U.S. diet has a larger environmental impact in terms of increased greenhouse gas emissions, land use, water use, and energy use, compared to the above dietary patterns. This is because the current U.S. population intake of animal-based foods is higher and plant-based foods are lower, than proposed in these three dietary patterns. Of note is that no food groups need to be eliminated completely to improve sustainability outcomes over the current status.

Among the environmental considerations are greenhouse gas emissions, which are significant for one reason only: climate change (despite the DGAC report explicitly stating it did not take into account climate change).

This is another example of the breadth of Obama’s Climate Action Plan—although one not announced as such … yet.

In anticipation, I wanted to see just what kind of a climate change impact these dietary guidelines could potentially avert.

My calculations are admittedly rough, but you’ll see once you get to the end that it hardly makes much of a difference even if I am off by an order of magnitude.

I’ll work through the extreme case in which all Americans become vegetarians (not a current recommendation of the DGAC).

According to a 2013 report from the United Nations Food and Agriculture Organization (FAO), globally, in 2005, greenhouse gas emissions from livestock production produced 7,100 million metric tons of carbon dioxide equivalents (mmtCO2-eq). Of that amount, 45% came from feed production, 39% from enteric fermentation, 10% from manure handling, and the remainder from bringing the product to market.

Of the global total, North American livestock production was responsible for just under 10%, or about 650 mmtCO2-eq.

Another study out last year compared the carbon footprint for different types of diets. It found that diets high in meat consumption had about twice the carbon footprint of vegetarian diets.

Combining the results of those studies (and assuming Canadian production amounts to about 10% of the North American total), I find that if all Americans became vegetarians, it would reduce greenhouse gas emissions by about 300 mmtCO2 per year.

So now, all that is left is to determine the climatic impact of 300 mmtCO2 (a calculation that the Obama administration strongly advises against). In previous work, I detailed a quick and dirty way to do this (see here for details). The result is that it takes about 1,767,250 mmtCO2 to raise the average global surface temperature by 1°C. (Note: this is a useful number to have handy for a fast check of announced plans to reduce carbon dioxide emissions for the sake of climate change. I recommend you write it down on a scrap of paper and tape it to your monitor–I did!).

With that number in hand, all we need to do is divide:

Three hundred mmtCO2 (saved by everyone in the United States converting to vegetarianism) divided by 1,767,250 mmtCO2/°C equals 0.0002°C.  This is the amount of global warming avoided each year if all Americans become vegetarians. Two ten-thousandths of a degree.

If we were to stick to this vegetarian diet between now and the end of the 21st century, we’d collectively help to keep global temperatures two-hundredths of a degree below where they’d otherwise be.
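The chain of arithmetic above can be reproduced in a short sketch. The 650 mmtCO2-eq and 1,767,250 mmtCO2/°C figures come from the sources cited in the text; the 10% Canadian share and the halving of the footprint for vegetarian diets are the assumptions stated above.

```python
# Rough estimate of warming avoided if all Americans went vegetarian,
# using the figures cited in the text.
north_america_livestock = 650.0                # mmtCO2-eq/yr (FAO, 2005)
us_livestock = north_america_livestock * 0.9   # assume Canada is ~10% of NA

# Vegetarian diets have roughly half the footprint of meat-heavy diets,
# so assume roughly half of U.S. livestock emissions would be avoided.
annual_savings = us_livestock * 0.5            # ~293, rounded to 300 above

mmt_per_degree = 1_767_250.0                   # mmtCO2 per 1°C of warming

warming_avoided_per_year = 300.0 / mmt_per_degree
print(f"{warming_avoided_per_year:.4f}")       # prints 0.0002

# Cumulative effect of keeping the diet from 2015 through 2100 (~85 years);
# the text's "two-hundredths" uses the rounded 0.0002/yr figure.
print(f"{warming_avoided_per_year * 85:.3f}")  # prints 0.014
```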

Seems like even if I were worried about future climate change and wanted to “do something” about it, ridding my table of steak wouldn’t be high on the list.

Paul C. "Chip" Knappenberger and Patrick J. Michaels

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

When it comes right down to it, the biggest potential threat from a warming climate is a large and rapid sea level rise. Everything else a changing climate may bring, we have seen before (or at least the likes of it), recovered from, and come out better for (we gained experience, learned lessons, developed new technologies, etc.). In fact, the more often extreme weather occurs, the more adaptive our response becomes (see, for example, decreasing mortality in heat waves). So in that sense, climate change may hasten our adaptive response and reduce our overall vulnerability to it.

A large and rapid sea level rise is a bit of a different story—although perhaps not entirely so.

While we do have a large amount of infrastructure (e.g., big cities) in low-lying coastal regions, it is completely wrong to show them underwater in the future—a typical device used by climate activists. What will happen is that we will act to protect the most valued portions of that infrastructure, as shown in a recent report from leading experts (including from the U.S. Environmental Protection Agency) on sea level rise and response.

But, while targeted action will save our big cities, there is still a lot of real estate that will be lost if sea level rises a large amount in a short amount of time (say, by more than a meter [a little more than 3 feet] by the end of the 21st century).

We therefore keep a vigilant eye on sea level rise research. And what we’ve concluded is that sea level rise by the year 2100 is very likely to be quite modest, say about 15 inches—an amount that should allay concerns of a catastrophe. We’ve detailed literature in support of our conclusions here, there, and elsewhere.

This week, a new paper has come to our attention that further supports our synthesis.

In the journal Quaternary Science Reviews, researchers Nicolás Young and Jason Briner summarized the extant scientific literature on the size of the ice sheet covering Greenland during warm periods in the recent geologic past, with a special emphasis on the middle Holocene—a multi-millennial period centered some 3,000–4,000 years ago during which the temperatures across Greenland were about 1°–3°C higher than the 20th century average. They note this is similar to conditions projected to occur there by about the year 2100.

What Young and Briner found was that the size of the Greenland ice sheet—especially the best observed portions covering the west and southwestern parts of Greenland—during the mid-Holocene was smaller than it is today—but not by a whole lot. They wrote:

[W]e suggest that despite some degree of inland retreat, the West and Southwest [Greenland ice sheet] margin remained relatively stable and close to its current position through the Holocene thermal maximum.

The implication is that despite a period of warmer-than-present temperatures in Greenland lasting some 2,000 years, the ice sheet did not shrink to such an extent as to result in a whole lot of sea level rise. This is good news as to what to expect from future warming—the Greenland ice sheet seems pretty stable in the face of rising temperatures.

This is consistent with the remarkable findings of Dorthe Dahl-Jensen’s research team concerning the warmest era in the last 1.5 million years or so—the first 6,000 years of the last interglacial period, known as the Eemian.  It began 128,000 years ago. 

The ratio of two different isotopes of oxygen (18O/16O) in air trapped in ice provides an accurate measure of local temperature, and because snow compacts every year, it’s fairly straightforward to count backward, year-by-year, as one drills down through the Greenland icecap. Up until Dahl-Jensen’s report, no one had gotten completely through the Eemian. And, up until then, it was thought that temperatures in those 6,000 years were some 2°–4°C warmer than in the current era. (Greenland’s temperatures were pretty flat during the 20th century.) Dahl-Jensen’s work shows that Greenland was an astounding 8° ± 4° warmer! Over that 6,000 years, Greenland lost approximately a quarter of its ice, contributing to 2 meters of sea-level rise.

Young and Briner, along with Dahl-Jensen, provide strong evidence that Greenland’s ice will be disturbed very little by what humans are likely to do to the atmosphere. Let’s use the top-end of Young and Briner’s warming by 2100 (3°C) and jack it up to 5° for the next hundred years. Then, let’s make the plausible assumption that we haven’t a clue about society’s energy structure 200 years from now, so we’ll stop things there and let the warming damp back to 20th century levels in 500 years. The integrated heat applied to Greenland (we’ll provide gory details on request) works out to 1,500 degree-years.  What Young and Briner found was that the Holocene maximum provided, on average, 4,000 degree-years (2,000 years multiplied by 2°), over twice what humans can contribute. And Dahl-Jensen showed it took a whopping 36,000 degree-years to melt only a quarter of the ice there, 24 times what we can do. In other words, we can’t change the climate enough to ever cause a massive sea level rise from melting Greenland’s ice.
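The degree-year comparison in the paragraph above can be sketched as follows. The 1,500 degree-year figure for the human scenario is taken directly from the text (the authors offer the “gory details on request”), not recomputed here.

```python
# "Degree-years": the length of a warm period multiplied by its average
# warming above the 20th-century Greenland baseline.
def degree_years(years, avg_warming_c):
    return years * avg_warming_c

# Stated in the text for the hypothesized 200-year human warming pulse.
human_scenario = 1500.0

holocene = degree_years(2000, 2.0)  # mid-Holocene thermal maximum
eemian = degree_years(6000, 6.0)    # Dahl-Jensen's Eemian reconstruction

print(f"{holocene / human_scenario:.2f}")  # prints 2.67 ("over twice")
print(f"{eemian / human_scenario:.0f}")    # prints 24 ("24 times")
```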

Young and Briner also find that the models tend to overdo the mid-Holocene ice sheet retreat. Examining a leading ice sheet model (described by Lecavalier and colleagues), Young and Briner conclude:

The modeled minimum ice sheet in Lecavalier et al. (2014) at 4 [thousand years ago] equates to a 0.16 [meter] sea-level contribution, but considering minimal inland retreat of the ice margin based on geological reconstructions, we suggest that this value may be a maximum estimate of the [Greenland ice sheet] contribution to sea level in the middle Holocene.

Lecavalier’s estimate of 0.16 meters equates to 6.3 inches—which Young and Briner think should represent the worst-case result of 2,000 years of projected end-of-the-century temperatures across Greenland. This also comports well with the estimates from the United Nations Intergovernmental Panel on Climate Change (IPCC), which, in its Fifth Assessment Report, projected the sea level rise from Greenland as being between 0.07 and 0.21 meters (2.8 to 8.3 inches) with a median value of 0.12 meters (4.7 inches) even under its highest greenhouse gas emission scenario.
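The meter-to-inch conversions quoted above are easy to verify (1 meter is about 39.37 inches):

```python
# Check the sea-level figures quoted in the text (meters to inches).
M_TO_IN = 39.3701  # inches per meter

for meters in (0.16, 0.07, 0.21, 0.12):
    print(f"{meters} m = {meters * M_TO_IN:.1f} in")
# prints:
# 0.16 m = 6.3 in
# 0.07 m = 2.8 in
# 0.21 m = 8.3 in
# 0.12 m = 4.7 in
```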

Like we said, our view that future sea level rise will be modest is now firmly established by the scientific literature, contrary to nonscientific alarmist claims.

References:

Dahl-Jensen, D., et al., 2013.  Eemian interglacial reconstructed from a Greenland folded ice core.  Nature, 493, 489-494.

Lecavalier, B.S., et al., 2014. A model of Greenland ice sheet deglaciation constrained by observations of relative sea level and ice extent. Quaternary Science Reviews, 102, 54-84. DOI: 10.1016/j.quascirev.2014.07.018.

Young, N.E., and J.P. Briner, 2015. Holocene evolution of the western Greenland Ice Sheet: Assessing geophysical ice-sheet models with geological reconstructions of ice-margin change. Quaternary Science Reviews, 114, 1-17. DOI: 10.1016/j.quascirev.2015.01.018.

Ted Galen Carpenter

Proving that hawks never seem to learn, John McCain, Lindsey Graham, and the other usual suspects are advocating more substantial U.S. involvement in the civil wars convulsing such places as Iraq, Syria, and Ukraine. Before we head down that road again, we ought to insist that proponents of U.S. military crusades defend the results of their previous ventures. That exercise would cause all except the most reckless interventionists to hesitate.

It’s not merely the catastrophic outcomes of the Afghan and Iraq wars, which were pursued at enormous cost in both blood and treasure. The magnitude of those debacles is recognized by virtually everyone who is not an alumnus of George W. Bush’s administration. But even the less notorious interventions of the past two decades have produced results that should humble would-be nation builders. The current situations in Kosovo and Libya are case studies in the folly of U.S. meddling.

The United States led its NATO allies in a 78-day air war against Serbia to force that country to relinquish its disgruntled, predominantly Albanian province of Kosovo. In early 2008, the Western powers bypassed the United Nations Security Council and facilitated Kosovo’s unilateral declaration of independence. But today’s Kosovo is far from being a success story. In the past few months, there has been a surge of refugees leaving the country, fleeing a dysfunctional economy and mounting social tensions. Despite a massive inflow of foreign aid since the 1999 war, a third of the working-age population are unemployed, and an estimated 40 percent of the people live in dire poverty. Tens of thousands of Kosovars are now seeking to migrate to the European Union, ironically by traveling through arch-nemesis Serbia to reach European Union member Hungary.

Economic misery is hardly the only problem in the independent Kosovo that the United States and its allies helped create. Persecution of the lingering Serb minority and the desecration of Christian churches, monasteries, and other sites are serious problems. Kosovo has also become a major center for organized crime, including drug and human trafficking.

Yet Kosovo is an advertisement for successful U.S.-led military crusades compared to the outcome in Libya. The Obama administration boasted of its “kinetic military action” (primarily cruise missile strikes) as part of the NATO mission to help insurgents oust dictator Muammar Gaddafi in 2011. Today, Libya is a chaotic mess. Once a major global oil producer, Libya has seen pervasive disorder disrupt production so thoroughly that the country faces financial ruin. Not only is Libya teetering on the brink of full-scale civil war, much of the country has become the plaything of rival militias, including an affiliate of ISIS. Journalist Glenn Greenwald concludes correctly that the Libyan intervention, which was supposed to show the effectiveness of international military action for humanitarian goals, has demonstrated the opposite.

Such sobering experiences confirm that U.S.-led interventions can often make bad situations even worse. Serbia’s control of Kosovo was hardly an example of enlightened governance, and Gaddafi was a corrupt thug who looted Libya. But as bad as the status quo was in both of those arenas, Western military meddling created far worse situations. That is the lesson that should be kept firmly in mind the next time armchair warhawks in Congress or the news media prod Washington to lead yet another ill-conceived crusade.
