Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Benjamin H. Friedman

“Pleikus are like streetcars.” That’s how McGeorge Bundy, President Johnson’s national security advisor, explained what the escalation of U.S. bombing of North Vietnam in February 1965 had to do with the administration’s justification for it, which was a Vietcong attack on U.S. bases near Pleiku. Johnson had already decided to increase bombing, but he wanted a pretext that would make it seem defensive. Bundy meant that, absent the Pleiku attack, another incident would have come along shortly to justify additional bombing. A similar bait-and-switch is occurring today in U.S. Iraq policy.

On August 7, President Obama explained that we were bombing Iraq again to defend U.S. personnel in Erbil and rescue tens of thousands of Yazidi civilians stranded on Mount Sinjar (really mountains) and surrounded by murderous militiamen of the Islamic State of Iraq and the Levant (ISIL). Now, it turns out there were far fewer Yazidis on the mountain than the administration claimed; they are mostly out of harm’s way, and the threat to Erbil has ebbed. With the two goals he set for bombing achieved, the President quickly offered a third. In a letter sent to Congress Sunday (pursuant to the War Powers Resolution, which he flouts when it’s inconvenient), he argued that U.S. bombing would help “Iraqi forces” retake the Mosul dam. Kurdish Peshmerga and Iraqi Special Forces have now done that.

Monday, the President again broadened the bombing’s objectives. The airstrikes against ISIL still protect U.S. personnel and serve humanitarian purposes, he said, but now, it seems, those are general goals that ongoing bombing serves. The President also suggested that ISIL is a security threat to the United States. Not for the first time, he said that once the new Iraqi government forms, we will “build up” Iraqi military power against ISIL. Only the speed of this slide down a slippery slope is surprising. As I recently noted, the humanitarian case for protecting the Yazidi easily becomes a case for continual bombing of ISIL and resumed counterinsurgency war in Iraq. ISIL’s danger to civilians was never limited to Sinjar. And as in Syria, the major humanitarian threat in Iraq is civil war.

Americans, the president included, need to admit that being out of Iraq potentially means letting it burn. The collapse of the fiction that U.S. forces stabilized Iraq before exiting forces us to confront the unpleasant contradictions in U.S. goals there. We want to avoid the tragic costs of U.S. forces trying to suppress Iraq’s violence. We want a stable Iraqi federal government, and we want Iraqis to live peacefully. Each of those goals conflicts with the others. Even if the new prime minister is amenable to Sunni demands, U.S. bombing is unlikely to allow Iraqis to destroy ISIL and its allies. Large-scale violence will likely continue. Suppressing insurgency will likely require resumption of U.S. ground operations. And even that, we know, may not help much. Centrifugal forces in Iraq will remain strong, especially now that we are arming the Kurds. Vesting federal power in prime ministers who are inevitably Shi’ite makes continual sectarian fights likely. We should know by now that we lack the ability to stabilize Iraq at acceptable cost. We should also know that the primary threat to U.S. security in Iraq is the temptation to try to run it by force. Knowing these things means accepting some tragedy in Iraq.

Simon Lester

Trade policy people spend most of their time talking about free trade between countries.  But there is still some work to do on free trade within countries.  Some Canadians are making a push right now, as Canadian business groups are calling for Canada’s leaders “to dismantle internal trade barriers and ensure the free movement of goods, services, capital and labour between all parts of the country.”

If that sounds odd, don’t get the wrong idea.  It’s not as though Canadian provinces are imposing tariffs on each other.  Rather, this is part of a more advanced notion of free trade, where you have a “single market” for goods and services.  So for example, these groups complain that:

Different regulations and standards mean that manufacturers may need to adapt their machinery in order to produce containers such as dairy creamers, butter and drinkable yogurts for sale nationally across all provincial jurisdictions.


massage therapy is regulated in some provinces but not all, meaning that a professional would have to become certified in order to be allowed to practice.

These issues are difficult to address between sovereign nations, although people are trying, most notably in the Transatlantic Trade and Investment Partnership negotiations.  But within countries, it seems like this is something that could and should be dealt with.  Here in the U.S., we have the famous problem of not being able to buy health insurance across state lines.  The current effort in Canada seems like a valuable one; it might be useful to have a similar review of internal trade barriers here in the U.S.

Jason Bedrick

Today, Education Next released its latest survey results on education policy. As with the Friedman Foundation’s survey earlier this year and previous Education Next surveys, scholarship tax credits (STCs) remain the most popular form of private educational choice. STCs garnered support from 60% of respondents compared to 50% support for universal school vouchers and only 37% support for low-income vouchers.

The Friedman Foundation’s survey found the strongest support for educational choice among younger Americans. While Americans aged 55 and up favored STCs by a 53%-33% margin, Americans aged 18-34 supported STCs by a whopping 74%-14% margin. While it’s possible that younger Americans are more likely to support educational choice because they’re more likely to have school-aged children, it could also be evidence of growing support for educational choice generally. The series of Education Next surveys provides strong support for the latter interpretation, as shown in the chart below. (Note: the 2013 Education Next survey did not ask about STCs.)

While support for STCs was only 46% in 2009, it has grown to 60% this year. Over the same time, opposition has fallen from 27% to 24%, with a low of 16% in 2012. If support among millennials merely remains constant, overall support for educational choice will continue to grow in the coming years, making the adoption and expansion of such programs increasingly likely.

[See here for Neal McCluskey’s dissection of the Education Next survey questions concerning Common Core.]

Neal McCluskey

The annual Education Next survey is out, and its headliner is the Common Core. Unfortunately, it features basically the same incomplete, answer-skewing question it employed last year, and reports the same dubious finding of majority support. But even with that, the direction in which opinion has moved speaks volumes about the serious trouble the Core is in.

Just like last year, the question gave a misleading description of either the Core or national standards generically (half the respondents got a version that did not mention the Core by name) and recorded high rates of support. Here’s the question, with the parts that were omitted for half the respondents in brackets:

As you may know, in the last few years states have been deciding whether or not to use [the Common Core, which are] standards for reading and math that are the same across the states. In the states that have these standards, they will be used to hold public schools accountable for their performance. Do you support or oppose the use of these [the Common Core] standards in your state?

Like last year, the question completely ignores major federal coercion behind states’ adopting the Core, as well as the fact that the Core itself is only part of what’s necessary to “hold public schools accountable.” Tests, and consequences for performance on them, are needed for accountability, and those are driven by federally demanded testing and sanctions. Oh, and Washington selected and paid for specific Core-aligned tests.  Meanwhile, generic common standards would in no way have to be used to hold schools accountable; they could just be toothless measuring devices. And how many people would come out against something as seemingly positive as holding schools “accountable”? The devil is in how, exactly, that would be done.

So what results did these formulations get? With the Core specifically mentioned, 53 percent supported it, 26 percent opposed, and 21 percent had no opinion. Among teachers, 46 percent supported the Core, 40 percent opposed, and 14 percent had no opinion. Pull the specific reference to the Core, and public support leapt to 68 percent.

Given the question, those results almost certainly overstate Core support, perhaps markedly. What they likely don’t do—though there are slight variations in the question between last year and this year—is mask the trend in Core support: it is tumbling. Favorable responses from the general public dropped from 65 percent in 2013 to 53 percent this year, while teacher support free-fell from 76 percent to 46 percent.

Unfortunately, it seems the pollsters may have been looking to nail a specific culprit for this. In addition to the main question, this year they asked respondents whether three statements about the Core were true or false, and found that most of the general public got them wrong. But, as we shall see, that may well be because some people are aware of important details and realities missed by the simple true/false options with which they were presented.

Statement #1: “The federal government requires all states to use the Common Core standards.” Only 36 percent identified this as false—the “right” answer—but only in the strictest sense is it so. Washington can’t, say, send troops to make your state use the Core, but it can and did make it nearly impossible to win part of $4 billion in Race to the Top funds—money taxpayers, who live in states, had to fork over—if a state didn’t adopt. It also made it harder to get a waiver from the hidebound, reviled rules of No Child Left Behind. Indeed, Washington can’t require states to follow NCLB either, but it can make people pay taxes and then return billions of those dollars to states only if they accept federal rules. And you hear almost no one seriously say that following NCLB isn’t required.

Statement #2: “In states using the Common Core standards, the federal government will receive detailed data on individual students’ test performance.” Only 15 percent of people thought this false, but states using the Core do not, in fact, have to send detailed student-level data to D.C. That said, Race to the Top demands heightened state-level data collection; the feds have been pushing collection of specific data for years; and maybe some people have heard enough about the NSA to reasonably think, “Sure seems likely the feds will eventually grab student data, especially since they are the ones driving much more in-depth and centralized collection.” 

Statement #3: “Under the Common Core standards, states and local school districts can decide which textbooks and educational materials to use in their schools.” Some 48 percent of respondents said that this is true, and certainly states and districts can decide on their materials. But to succeed on federally pushed Common Core they’d better be Core-aligned materials. And after, say, King Lear and area models appear on Core tests for a few years, they’d better start teaching King Lear and area models. More directly, the Core recommends many specific texts. So sure, states and districts can decide. But they’d better decide on things that go with the Core.

Unfortunately, the pollsters seem to use the responses to these statements to explain away Core opposition, in much the same way Core fans have been dismissing opponents for years: portraying them as ignorant or confused. The pollsters write that the responses “may indicate that opposition to the Common Core is driven, in part, by misconceptions.”

Okay. Or perhaps the answers indicate that Common Core reality is far more complicated than simplistic survey questions can handle.

Thankfully, there is a lot more tackled in the latest Education Next survey than the Core, and most of it is not nearly as vexing. Stay tuned for coverage of that!

Nicole Kaeding

Yesterday’s Washington Post has an in-depth—and very depressing—piece about Medicare fraud. The piece focuses on scammers taking advantage of Medicare’s payment systems to buy unnecessary motorized wheelchairs and scooters for Medicare enrollees and stick American taxpayers with the bill.

Medicare’s payment system is designed to pay bills within 30 days of receipt; the system receives 5 million claims daily. Given that volume, Medicare reviews only a very small percentage of claims, 3 percent, before payment is made. The rest are reviewed after they are processed, but even then not all are subject to oversight.

That system design invites fraud, and scammers take advantage. The Washington Post describes it as an “honor system.” The lack of upfront investigation costs taxpayers billions annually in fraudulent and wasteful payments.

But even worse than Medicare’s lax oversight is that officials knew about the fraud regarding wheelchairs and still didn’t act. According to the Washington Post,

Now, the golden age of the wheelchair scam is probably over.

But, while it lasted, the scam illuminated a critical failure point in the federal bureaucracy: Medicare’s weak defenses against fraud. The government knew how the wheelchair scheme worked in 1998. But it wasn’t until 15 years later that officials finally did enough to significantly curb the practice.

This problem was widespread. Medicare has spent $8.2 billion on power wheelchairs since 1999 for an ever-increasing proportion of enrollees. Records suggest “that at least 80 percent of claims were ‘improper.’”

Before the fraud took off, the chairs were rare: one study estimated that in 1994, only 1 in 9,000 beneficiaries got a new wheelchair.

By 2000, it was 1 in 479.

By 2001, it was 1 in 362.

By 2002, it was 1 in 242.

In 2012, up to 219,000 Medicare recipients received motorized wheelchairs, 1 in 235 patients, an even higher rate than in 2002. In 2013, only 124,000 individuals, 1 in 400 patients, received power wheelchairs from Medicare.

Medicare is slowly getting the issue under control; it is just 15 years too late.

Doug Bandow

U.S. foreign policy is a bipartisan fiasco.  George W. Bush gave the American people Iraq, the gift that keeps on giving.  Barack Obama is a slightly more reluctant warrior, but he is taking the country back into Iraq.

Hillary Clinton, the unannounced Democratic front-runner for 2016, supported her husband’s misbegotten attempt at nation-building in Kosovo and led the drive for war in Libya, which is violently unraveling.  Most of Clinton’s potential GOP opponents share Washington’s bomb, invade, and occupy consensus. 

The only exception is Kentucky Sen. Rand Paul.  He stands alone in advocating a foreign policy that reflects the bitter, bloody lessons of recent years.

The Islamic State of Iraq and the Levant is the latest result of Washington’s incessant and counterproductive meddling in the Middle East.  But the usual suspects are calling for more intervention, more war.  This time, they promise, everything will go well. 

This is the Obama administration’s position in Iraq and Syria.  However, Hillary Clinton has begun maneuvering for 2016 by running to Obama’s right.  While she mocked the president’s mantra of “Don’t do stupid stuff,” she spent her career doing just that.

Instead of offering an alternative, leading Republicans are all in for war, more war, forever war.  Senators John McCain and Lindsey Graham, naturally, have been advocating that America intervene more in both Syria and Iraq. 

Most plausible Republican candidates are running toward the interventionist sideline.  They blame Obama for Iraq even though it was George W. Bush who invaded that nation and failed to win Iraqi approval for a permanent U.S. garrison. 

New Jersey’s Gov. Chris Christie has ostentatiously joined the most hawkish GOP elements.  Former Arkansas Gov. Mike Huckabee accused President Obama of guessing wrong in Egypt, Iran, Libya, and Syria, even though the president acted on the traditional Republican script in all four cases.

Florida’s Marco Rubio advocated military action against ISIL, after supporting the usual plethora of interventionist disasters:  war in Libya, more involvement in Syria, and now combat in Iraq.  Texas Sen. Ted Cruz also pushes a strongly hawkish agenda, though he at least opposed bombing Syrian government forces. 

Last month Texas Gov. Rick Perry attacked Paul as an isolationist and advocated that the U.S. go back to war in Iraq.  Michael Goldfarb approvingly said of Perry, “you have to assume he’d shoot first and ask questions later.” 

Dramatically misguided was Perry’s contention that “isolationism”—in contrast to the promiscuous interventionism of the last three decades, which has spawned so many vicious attacks—threatened to increase terrorism. 

Underlying the torrent of Republican criticism of Paul is fear.  The American people are tired of incessant war-mongering by the Washington elite.  Paul rightly noted that “The country is moving in my direction.”  That’s scary if your political future is tied to policies that have failed so flagrantly and frequently. 

Paul is more cautious than his father, former Rep. Ron Paul.  Nevertheless, Paul fils recently noted that “The let’s-intervene-and-consider-the-consequences-later crowd left us with more than 4,000 Americans dead, over two million refugees and trillions of dollars in debt.”

In citing President Ronald Reagan’s maxim of “peace through strength,” Paul noted some Republicans “have forgotten the first part of the sentence:  That peace should be our goal even as we build our strength.” As I note in my latest Forbes online column, people are tired of young Americans “being treated as gambit pawns in an endless series of global chess games, to be sacrificed whenever folks in Washington dream up a grand new crusade.”

Hillary Clinton represents today’s foreign policy consensus—of constant intervention and war.  Nominating someone who advocates the same failed policy would seem to be the best way for Republicans to lose in 2016.  Will anyone join Rand Paul in charting a different course?

David Boaz

The New York Times reported Thursday:

Mr. Obama is fast becoming the past, not the future, for donors, activists and Democratic strategists. Party leaders are increasingly turning toward Mrs. Clinton and her husband, former President Bill Clinton, as Democrats face difficult races this fall in states where the president is especially unpopular, and her aides are making plain that she has no intention of running for “Obama’s third term.”

Which put me in mind of this statement famously attributed to another woman who had “the heart and stomach of a king” and the will to rule, Queen Elizabeth I:

I know the inconstancy of the English people, how they ever mislike the present government and have their eyes fixed upon that person who is next to succeed. More people adore the rising sun than the setting sun.

Which is why Elizabeth never designated a successor. Every incumbent president probably wishes he had that power.

Christopher A. Preble

It is good news that Nuri Kamal al-Maliki has decided to step down as Iraq’s prime minister. This means that, for the first time in Iraq’s modern history, there is the prospect of a peaceful transition of power, based on democratic principles and without the heavy hand of the U.S. military seeming to tip the scales to one party or group.

But don’t pop the champagne just yet. As the New York Times notes today, the new prime minister, Haider al-Abadi—like Maliki, a Shiite and member of the Dawa Party—will likely face many of the same challenges that Maliki did. Abadi will need to find a way to form an inclusive coalition government, one that protects the rights of Sunnis and appeases the Kurds’ desire for autonomy, while maintaining support from Iraqi Shiites.

This is a tall order. Many in the Shiite community that was terrorized for so long by the Sunni minority harbor deep resentment toward their former oppressors. Meanwhile, the Sunnis who held power want desperately to get it back, or at least to be able to protect themselves from reprisals. Some Sunnis are so distrustful of the central government that they’ve thrown in their lot with the Islamic State in Iraq and Syria (ISIS), whose barbarism seems almost limitless. It is not clear how Abadi will bridge this trust gap.

Americans should wish Iraq’s new leader well, but policymakers should resist the urge to try to micromanage political events in Iraq. Even the appearance of U.S. influence over Abadi will undermine his legitimacy and thus could be counterproductive. Besides, it isn’t obvious that U.S. action—and only U.S. action—is essential to turning things around in Iraq. One suspects that the most vocal critics of President Obama’s Iraq policy have broader concerns. As I explain in today’s Orange County Register:

[W]hen the hawks screech that Obama isn’t doing enough, what they really worry about is that others might actually be able to do without us, or with only minimal assistance. A newly energized Kurdish militia already appears to have reversed some of ISIS’s recent gains. Syria’s Bashar al-Assad might begin rolling back ISIS fighters there. And a new government in Baghdad might finally be able to fashion a credible military force. At a minimum, even modest political reforms—or the prospect of them—could convince more Sunni Iraqis to fight against ISIS instead of for them.

Any time Iraq is in the news, Americans are reminded about those who pushed for war there in the first place. It should provide an opportunity to revisit our broader foreign policy goals. Instead, U.S. policymakers and elites still call for action without any obvious sense that they’ve learned anything.

It’s time to reconsider U.S. military intervention broadly, as well as the specific advice of those who confidently, yet incorrectly, predicted that there would be no civil war in Iraq or Libya, or who called for assisting the Syrian opposition (some members of whom are now waging war in Iraq—with U.S. weaponry no less).

And while President Obama’s approval rating goes down, including for his handling of foreign policy, it isn’t obvious that the GOP can turn this to its advantage, as Pew’s Andrew Kohut noted earlier this year. A bipartisan consensus inside Washington pushed the Iraq war, and Democrats and Republicans continue to push foolish military interventions that the public wants no part of.

People outside the Beltway bubble are seeking a real change of course, not just more of the same. If they don’t get it from those who have held power for so long, they’ll look elsewhere.

Jim Harper

On Monday, Cato is hosting a briefing on Capitol Hill about congressional Wikipedia editing. Over a recent 90-day period, there were over 400,000 hits on Wikipedia articles about bills pending in Congress. If congressional staff were to contribute more to those articles, the amount of information available to interested members of the public would soar. Data that we produce at Cato go into the “infoboxes” on dozens and dozens of Wikipedia articles about bills in Congress.

A popular Twitter ‘bot called @congressedits recently created a spike in interest about congressional Wikipedia editing. It puts a slight negative spin on the practice because it tracks anonymous edits coming from Hill IP addresses, which are more likely to be inappropriate. But Congress can do a lot of good in this area, so Cato intern Zach Williams built a Twitter ‘bot that shows all edits to articles about pending federal legislation. This should draw attention to the beneficial practice of informing the public before bills become law. Meet @Wikibills!

Also, as of this week, Cato data are helping to inform some 26 million visitors per year to Cornell Law’s Legal Information Institute about what Congress is doing. Thanks to Tom Bruce and Sarah Frug for adding some great content to the LII site.

Let’s say you’re interested in 18 U.S. Code § 2516, the part of the U.S. code that authorizes interception of wire, oral, or electronic communications. Searching for it online, you’ll probably reach the Cornell page for that section of the code. In the right column, a box displays “Related bills now in Congress,” linking to relevant bills in Congress.

Those hyperlinks are democratic links, letting people know what Congress is doing, so people can look into it and have their say. Does liberty automatically break out thanks to those developments? No. But public demands of all types—including for liberty and limited government—are frustrated now by the utter obscurity in which Congress acts. We’re lifting the curtain, providing the data that translates into a better informed public, a public better equipped to get what it wants.

The path to liberty goes through transparency, and transparency is breaking out all over!

Daniel J. Ikenson

It was good of the Washington Post Editorial Board to raise questions yesterday about the veracity of the “jobs-created-by-Export-Import-Bank-policies” claims proffered by the Bank’s supporters. I just wonder whether the editorial pulled its punches where a reporter on assignment or a more inquisitive journalist would have delivered an unabashed blow to the credibility of the Bank’s primary reauthorization argument: that its termination will lead to a reduction in U.S. exports and jobs.

Kudos to the Post for raising an eyebrow at the Bank’s claims of “jobs created” or “jobs supported” by Ex-Im financing:  

[W]hen it comes to jobs, well, just how rigorous are [Ex-Im’s] estimates, really? Congress ordered a study of that very question when it last reauthorized Ex-Im in 2012. In May 2013, the Government Accountability Office (GAO) produced its verdict: Meh.

GAO noted that Ex-Im must speak vaguely of “jobs supported,” rather than concretely of jobs created, since its methodology cannot really distinguish between new employment and retained employment. To get a number for “jobs supported,” which includes both a given firm and that firm’s suppliers, Ex-Im multiplies the dollar amount of exports it finances in each industry by a “jobs ratio” (calculated by the Bureau of Labor Statistics).

Using that approach, Ex-Im estimates an average of 6,390 jobs are “supported” by every billion dollars of exports financed. The Post is right to note the GAO’s conclusion:

These figures do not differentiate between full-time and part-time work and, crucially, provide no information about what might have happened to employment at the firms in question, or others, if the resources marshaled by Ex-Im had flowed elsewhere in the economy.

But, of course, what happens to the subsidized and nonsubsidized companies in the absence of Ex-Im is exactly what the Bank wants to conceal because it is the hyped-up specter of job loss that it relies upon to gain support for reauthorization. Realistically, in the short term, Ex-Im benefits some U.S. companies (those whose exports are subsidized) at the expense of other U.S. companies (those whose exports are not subsidized). Termination of Ex-Im will roughly reverse those fortunes by re-leveling the playing field between U.S. companies (to borrow and alter a metaphor) while freeing up the resources Ex-Im controls for more efficient uses.

Alas, the editorial fails to ponder this shuffling-of-resources-from-outsiders-to-insiders function that Ex-Im dutifully serves. Instead it gives an excerpt from the GAO report (in a manner slightly altered from the original) that has the effect of presenting Ex-Im in a more favorable light:

GAO found nothing fraudulent about any of this, nor do we.

Fraud? We weren’t considering fraud. We were evaluating whether the claim that Ex-Im creates or supports jobs is a credible one. Intentionally or not, exonerating Ex-Im from fraud seems to serve the purpose of rendering all other concerns about Ex-Im claims secondary by giving the false impression that the important thrust of the inquiry has been completed. Here’s the full paragraph:

GAO found nothing fraudulent about any of this, nor do we. The watchdog agency simply noted the rather crucial assumptions and limitations embedded in Ex-Im’s methodology and urged the bank to be more transparent about them—because “Congressional and public stakeholders may not fully understand what the jobs number that Ex-Im reports represents and the extent to which Ex-Im’s financing may have affected U.S. employment.” (Emphasis added)

Again, the context in which the editorial reports this GAO finding seems to change the tone and thrust of GAO’s message. Here is the text from the GAO report on this point:

Because of a lack of reporting on the assumptions and limitations of its methodology and data, Congressional and public stakeholders may not fully understand what the jobs number that Ex-Im reports represents and the extent to which Ex-Im’s financing may have affected U.S. employment.

The emphasis in the editorial’s portrayal implies that the Ex-Im data may be chock full of compelling evidence of the Bank’s importance, but without adequate explanation that evidence is too complex for Congress and public stakeholders to fully comprehend. However, the clear meaning of the GAO report is that the Ex-Im data are limited in their utility as evidence of the Bank’s importance to jobs, exports, and the economy, and that using the data for that purpose is misrepresentative and misleading.

Had the Editorial Board dug a little deeper into the GAO report, it might have found that among the other limitations of the “Employment Requirements Tables” (ERTs, used by Ex-Im to project its jobs-supported figures) is this: “the employment data are a count of jobs, not of persons employed… Persons who hold multiple jobs show up multiple times in the employment data.” Basically, it is job “functions” that are counted, not jobs, and—despite the best efforts of organized labor—it is quite common for one worker to perform multiple job functions.

Moreover, Ex-Im’s “jobs-supported” numbers derive from ERTs that themselves derive from input-output analysis conducted by the Bureau of Economic Analysis and roughly follow—with some customized adjustments—the approach of other Commerce Department studies on the relationship between exports and jobs.  In the “Methodology and Caveats” section of the 2010 Commerce Department study “Exports Support American Jobs,” there is this disclaimer:

Averages derived from IO analysis should not be used as proxies for change. They should not be used to estimate the net change in employment that might be supported by increases or decreases in total exports, in the exports of selected products or in the exports to selected countries or regions.

Of course, that is precisely what Ex-Im proponents are doing. Those important caveats have not deterred pro-reauthorization lobbyists from warning members of Congress of how many jobs are imperiled in their states and districts in the absence of reauthorization. In most cases, the actual effect of Ex-Im authorizations on particular states and districts is so small relative to the economy and relative to overall exports that creative arithmetic features prominently. 

For instance, yesterday the Chamber of Commerce tweeted:

In Ohio alone, Ex-Im has supported 15,300 jobs and financed $2.4 billion worth of exports since 2007. (Emphasis added)

Let’s parse: We are talking about a seven-year period here, so on an annual basis Ex-Im has financed an average of $343 million in Ohio exports, supporting 2,186 jobs. (The jobs figure is based on Ex-Im’s estimate of 6,390 jobs supported per $1 billion of exports financed, and as mentioned above, these are really job “functions.”)  But Ohio has exported an average of $47 billion per year since 2007 and employs 5.4 million workers. I suppose tweeting, “In Ohio alone, Ex-Im has supported 0.04% of all jobs and financed 0.7% of all exports” wouldn’t convey the same sense of urgency to Ohio’s congressional caucus.
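The back-of-the-envelope arithmetic above is easy to reproduce. A minimal sketch using the figures quoted in the text (the variable names are ours, for illustration):

```python
# Check the Ohio figures quoted above.
years = 7                       # 2007 through 2013
exim_exports = 2.4e9            # Ex-Im-financed Ohio exports over the period
exim_jobs = 15_300              # Ex-Im's "jobs supported" claim for the period

annual_exports = exim_exports / years      # ~$343 million per year
annual_jobs = exim_jobs / years            # ~2,186 job "functions" per year

ohio_exports = 47e9             # average annual Ohio exports since 2007
ohio_workers = 5.4e6            # Ohio employment

print(f"Ex-Im share of Ohio exports: {annual_exports / ohio_exports:.1%}")  # ~0.7%
print(f"Ex-Im share of Ohio jobs:    {annual_jobs / ohio_workers:.2%}")     # ~0.04%
```

The contrast between the Chamber’s framing and these shares is the whole point: the same data, denominated against the state’s economy, lose their urgency.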

In the end, the editorial seems to diminish the importance of its own inquiry by giving a nod to the way things are done inside the Beltway, offering a faintly exasperated but tongue-in-cheek “lobbyists will be lobbyists” excuse instead of speaking out against the continued use of propaganda in the policy debate.

Doug Bandow

The New York Times wonders if the libertarian moment has arrived. Unfortunately, there’ve been false starts before. 

Ronald Reagan’s election seemed the harbinger of a new freedom wave. His rhetoric was great, but actual accomplishments lagged far behind. 

So, too, with the 1994 Republican takeover of Congress.  Alas, the GOP in office behaved little differently than many Democrats. 

Since then there’s been even less to celebrate—in America, at least. George W. Bush was an avid proponent of “compassionate,” big-government conservatism. Federal outlays rose faster than under his Democratic predecessor. Barack Obama has continued Uncle Sam’s bailout tradition, promoting corporate welfare, pushing through a massive “stimulus” bill for the bank accounts of federal contractors, and seizing control of what remained private in the health care system.

Over the last half century, members of both parties took a welfare state that was of modest size despite the excesses of Franklin Delano Roosevelt’s New Deal and put it on a fiscally unsustainable basis as part of the misnamed “Great Society.” Economist Lawrence Kotlikoff figures government’s total unfunded liability at around $220 trillion. 

The national government has done no better with international issues. Trillions went for misnamed “foreign aid” that subsidized collectivism and autocracy. Trade liberalization faces determined resistance and often is blocked by countries that would gain great benefits from global commerce.

Even worse has been foreign policy. The joy people felt from the collapse of the Berlin Wall a quarter century ago has been forgotten. 

The defense budget has turned into a new form of foreign aid for America’s populous and prosperous allies. The United States has been constantly at war, repeatedly proving that the Pentagon is no better at social engineering than is any other government agency. 

Americans across the political spectrum agree that something is wrong, that the status quo is no good. But they disagree on the remedy.

However, the answer shouldn’t be that hard to discern. The definition of insanity, runs the old adage, is to keep doing the same thing while expecting different results. 

By that definition, Washington policymakers are insane. The economy is slowing, people are falling behind economically, freedoms are being lost, and security fears are rising? No problem. Roll out the usual failed nostrums.

We know what the effect of these policies will be. All we have to do is look around the world and see what has happened.

It is this reality, not new personalities or generations, that is creating a libertarian moment. The 20th century killed off communism and fascism as serious alternatives. The chief competitor to these systems was not laissez-faire capitalism, as some suggested, but highly regulated and monumentally expensive welfare states. They were freer and more prosperous than their geopolitical antagonists—even a little capitalism goes a long way—but the erosion of liberty and prosperity has been constant. 

Perhaps more debilitating was the corrosive effect on the foundational principles of a free society, such as independence, self-reliance, responsibility, accountability, and more. This assault in America continues with, for instance, the federal government turning health care into another massive entitlement, highlighted by pervasive regulation and income redistribution. 

The obvious—and only—alternative to more government, which has failed so badly, is less government. Lower tax rates and rationalize complex tax systems. Cut the wasteful looting and pillaging that is a hallmark of today’s transfer society. Repeal unnecessary regulations and relax unnecessarily stringent ones, while making legitimate rules more market-friendly. Model liberty, prosperity, tolerance, and peace for others, allowing individual Americans who go abroad to be America’s best ambassadors.

Has the libertarian moment arrived? The tyranny of the status quo, as Milton Friedman termed it, remains omnipresent and powerful. As a result, I point out in the Freeman, “the libertarian moment will not ‘arrive.’  It will have to be brought forward by those committed to a better and freer America.”

Doug Bandow

SHENYANG, CHINA—China-Korean relations are in a state of flux.  The People’s Republic of China and South Korea have exchanged presidential visits.  Trade statistics suggest that the PRC did not ship any oil to the North during the first quarter of the year.  Chinese academics openly speak of Beijing’s irritation with its long-time ally.

The cold feelings are reciprocated.  Last year North Korea’s Kim Jong-un sent an envoy to the PRC to unsuccessfully request an invitation to visit.  In December Kim had his uncle, Jang Song-taek, the North’s most intimate interlocutor with China, executed.

These circumstances suggest the possibility of a significant foreign policy shift in Beijing away from the North and toward the Republic of Korea.  Washington hopes for greater Chinese willingness to apply economic pressure on Pyongyang.  However, the PRC remains unwilling to risk instability by undermining the Kim dynasty. 

I recently visited China and held scholarly meetings amid excursions to long-missed tourist sites (such as Mao’s Mausoleum!).  I also made it to Shenyang, where relations with the North are of great interest because the city is about a two-hour drive from the Yalu River.

I met one senior scholar who indicated that there was no doubt that Beijing-Pyongyang relations had changed since Kim came to power.  The two nations “have a different relationship now and it is becoming colder than ever before.” 

However, Jang’s execution had been “weighed too heavily by Western researchers,” he indicated.  In fact, economic relations had continued.  Jang’s fate was a matter of internal North Korean politics, “the result of the natural struggle for power.” 

This doesn’t mean Beijing was happy about Jang’s fate.  However, Jang’s ouster “is not the reason for the DPRK’s and China’s bad relations.” 

Rather, the principal barrier is the North’s continued development of nuclear weapons.  Kim Jong-un wants to visit China.  But it is “unimaginable for Chinese officials to invite him when he’s doing nuclear tests.  Impossible.”

In return, the North is unhappy over Beijing’s refusal to accommodate Kim as well as the end of oil shipments.  “Also, the DPRK is quite angry over the quick development of Chinese relations with South Korea.” 

This has made Pyongyang “eager to make contact with the U.S.,” an effort which so far has gone nowhere.  This is why the Kim regime “took American citizens as hostages” and invited Dennis Rodman to visit, but these tactics “are not working.” 

The North eventually “shifted the focal point of its foreign relations to Japan.”  For the same reason, though “less importantly the DPRK made contact with Russia.”

The PRC is quite interested in U.S.-DPRK relations and Washington’s view of Japan’s move toward Pyongyang.  “One of the uniform convictions for both the U.S. and China is no nuclear weapons in the DPRK,” he emphasized. 

However, in Beijing’s view the solution is not more sanctions which “everyone has been putting on the DPRK,” but revival of the Six-Party Talks.  This is where agreement between the U.S. and China breaks down. 

The PRC wants more negotiations, preceded by an American willingness to reduce tensions and Pyongyang’s perceived need for a nuclear arsenal.  The U.S. wants the North to make concessions beforehand lest the latest round fail like the many previous efforts.

This clash reflects an even deeper disagreement over competing end states.  Both Washington and Beijing oppose a nuclear North Korea.  However, the U.S., in contrast to China, would welcome a DPRK collapse, even if messy, and favor reunification with the South.

As I write in China-U.S. Focus, it isn’t impossible for American and Chinese policymakers to work through their differences.  However, it will require understanding the other party’s perspective and offering meaningful concessions to make the deal a positive for both parties.

Patrick J. Michaels and Paul C. "Chip" Knappenberger

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.


When it comes to global warming, facts often take a back seat to fiction. This is especially true with proclamations coming from the White House. But who can blame them, as they are just following the lead from Big Green groups (aka, “The Green Blob”), the U.S. Climate Change Research Program (responsible for the U.S. National Climate Assessment Report), and of course, the U.N.’s Intergovernmental Panel on Climate Change (IPCC).

We have documented this low regard for the facts (some might say, deception) on many occasions, but recently we uncovered a particularly clear example where the IPCC’s ideology trumps the plain facts, giving the impression that climate models perform a lot better than they actually do. This is an important façade for the IPCC to keep up, for without the overheated climate model projections of future climate change, the issue would be a lot less politically interesting (and government money could be used for other things…or simply not extorted from us in the first place).

The IPCC is given deference when it comes to climate change opinion at all Northwest Washington, DC cocktail parties (which means also by the U.S. federal government) and other governments around the world. We tirelessly point out why this is not a good idea. By the time you get to the end of this post, you will see that the IPCC does not seek to tell the truth—the inconvenient one being that it dramatically overstated the case for climate worry in its previous reports. Instead, it continues to obfuscate.

This extracts a cost. The IPCC is harming the public health and welfare of all mankind as it pressures governments to seek to limit energy choice instead of seeking ways to help expand energy availability (or, one would hope, just stay out of the market).

Everyone knows that global warming (as represented by the rise in the earth’s average surface temperature) has stopped for nearly two decades. As historians of science have noted, scientists can be very creative when defending the paradigm that pays. In fact, there are already several dozen explanations for the “pause.”

Climate modelers are scrambling to try to save their precious children’s reputations—because the one thing that they do not want to have to admit is that they exaggerate the amount that the earth’s average temperature will increase as a result of human greenhouse gas emissions. If the models are overheated, then so too are all the projected impacts that derive from the model projections—and that would be a disaster for all those pushing for regulations limiting our use of fossil fuels for energy. It’s safe to say the number of people employed in creating, legislating, lobbying for, and enforcing these regulations is huge, as in “The Green Blob.”

In the Summary for Policymakers (SPM) section of its Fifth Assessment Report, the IPCC  pays brief attention to the  recent divergence between model simulations and real-world observations:

“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”

But, lest you foolishly  think that there may be some problem with the climate models, the IPCC clarifies:

“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”

Whew! For a minute there it seemed like the models were struggling to contain reality, but we can rest assured that over the long haul, say, since the middle of the 20th century, according to the IPCC, that model simulations and observations “agree” as to what is going on.

The IPCC references its “Box 9.2” in support of the statements quoted above.

In “Box 9.2” the IPCC helpfully places the observed trends in the context of the distribution of simulated trends from the collection of climate models it uses in its report. The highlights from Box 9.2 are reproduced below (as our Figure 1). In this Figure, the observed trend for different periods is in red and the distribution of model trends is in grey.


Figure 1. Distribution of the trend in the global average surface temperature from 114 model runs used by the IPCC (grey) and the observed temperatures as compiled by the U.K.’s Hadley Center (red). (Figure from the IPCC Fifth Assessment Report)

As can be readily seen in Panel (a), during the period 1998-2012, the observed trend lies below almost all the model trends.  The IPCC describes this as:

…111 out of 114 realizations show a GMST [global mean surface temperature] trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble

This gives rise to the IPCC SPM statement (quoted above) that

“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”

No kidding!

Now let’s turn our attention to the period 1951-2012, Panel (c) in Figure 1.

The IPCC describes the situation depicted there as:

Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade…

This sounds like the models are doing pretty well—only off by 0.02°C/decade. And this is the basis for the IPCC SPM statement (also quoted above):

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Interestingly, the IPCC doesn’t explicitly tell you how many of the 114 climate models are greater than the observed trend for the period 1951-2012. And it is basically impossible to figure that out for yourself based on their Panel (c) since some of the bars of the histogram go off the top of the chart and the x-axis scale is so large as to bunch up the trends such that there are only six populated bins representing the 114 model runs. Consequently, you really can’t assess how well the models are doing and how large a difference of 0.02°C/decade over 62 years really is. You are left to take the IPCC’s word for it.


The website Climate Explorer archives and makes available the large majority of the climate model output used by the IPCC.  From there, you can assess 108 (of the 114) climate model runs incorporated into the IPCC graphic—a large enough majority to reproduce the results quite accurately.

We do this in our Figure 2.  However, we adjust both axes of the graph such that all the data are shown and that you can see the inconvenient details.


Figure 2. Distribution of the trend in the global average surface temperature from 108 model runs used by the IPCC (blue) and the observed temperatures as compiled by the U.K.’s Hadley Center (red) for the period 1951-2012 (the model trends are calculated from historical runs with the RCP4.5 emissions scenario results appended after 2006). This presents nearly identical data to Figure 1, Panel (c).

What we find is that there are 90 (of 108) model runs that simulate more global warming to have taken place from 1951-2012 than actually occurred and 18 model runs simulating less warming to have occurred. Which is another way of saying the observations fall at the 16th percentile of model runs (the 50th percentile being the median model trend value).
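The percentile placement described here is simply the rank of the observed trend within the distribution of model-simulated trends. A minimal sketch of the calculation (the trend values below are illustrative placeholders, not the actual CMIP5 trends, but they reproduce the 18-below/90-above split from the text):

```python
# Percentile rank of an observed trend within a set of model-simulated trends.
def percentile_rank(observed, model_trends):
    """Percent of model trends that fall below the observed value."""
    below = sum(1 for t in model_trends if t < observed)
    return 100.0 * below / len(model_trends)

# Illustrative stand-in distribution: 18 runs below the observation, 90 above.
model_trends = [0.08] * 18 + [0.15] * 90   # hypothetical values, °C/decade
observed = 0.107                           # observed 1951-2012 trend, °C/decade

print(percentile_rank(observed, model_trends))  # ~16.7, i.e., the 16th percentile
```

With 18 of 108 runs below the observation, the rank is 18/108 ≈ 16.7 percent, which is why the text places the observations at the 16th percentile.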

So let us ask you this question, on a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,”  “medium,”  “high,” or  “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

OK. You got your answer?

Our answer is, maybe, “medium”, and there is plenty of room for improvement.

The model range should be much tighter, which would indicate that the models were in better agreement with one another as to what the simulated trend should have been.  As it is now, the model range during the period 1951-2012 extends from 0.07°C/decade to 0.21°C/decade (with the observed trend at 0.107°C/decade). And this is from models that were run largely with observed changes in climate forcings (such as greenhouse gas emissions, aerosol emissions, volcanoes, etc.) and for a period of time (62 years) during which short-term weather variations should all average out. In other words, they are all over the place.

Another way the agreement between model simulations and real-world observations could be improved would be if the observed trend fell closer to the center of the distribution of model projections. For instance, the agreement would be better if, say, 58 model runs produced more warming and the other 50 produced less warming.

What would lower our confidence?

The opposite set of tendencies. The model distribution could be even wider than it is currently, indicating that the models agreed with each other even less than they do now as to how the earth’s surface temperature should evolve in the real world (or that natural variability was very large over the period of trend analysis).  Or the observed trend could move further from the center point of the model trend distribution.  This would indicate an increased mismatch between observations and models (more similar to what has taken place over 1998-2012).

Unfortunately, that’s what is happening.

Figure 3 shows at which percentile the observed trend falls for each period of time starting from 1951 and ending each year from 1980 through 2013.


Figure 3. The percentile rank of the observed trend in the global average surface temperature beginning in the year 1951 and ending in the year indicated on the x-axis within the distribution of 108 climate model simulated trends for the same period. The 50th percentile is the median trend simulated by the collection of climate models.

After peaking at the 42nd percentile (still below the median model simulation which is the 50th percentile) during the period 1951-1998, the observed trend has steadily fallen in the percent rank, and currently (for the period 1951-2013) is at its lowest point ever (14th percentile) and is continuing to drop.  Clearly, as anyone can see, this “tendency within a  trend” (which Casey Stengel or Yogi Berra would have doubtlessly  called the “trendency”) is looking bad for the models as the level of agreement with observations is steadily decreasing with time.

In statistical parlance, if the observed trend drops beneath the 2.5th percentile, the evidence would widely be considered strong enough to indicate that the observations were not drawn from the population of model results.  In other words, a statistician would describe that situation by saying that the models disagree with the observations with “very high confidence.” Some researchers use a more lax standard and would consider falling below the 5th percentile enough to conclude that the observations are not in agreement with the models. We could describe that case as “high confidence” that the models and observations disagree with one another.

So, just how far away from either of these situations are we?

It all depends on how the earth’s average surface temperature evolves in the near future.

We explore three different scenarios  between now and the year 2030.

Scenario 1: The earth’s average temperature during each year of the period 2014-2030 remains the same as is average temperature observed during the first 13 years of this century (2001-2013). This scenario represents a continuation of the ongoing “pause” in the rise of global temperatures.

Scenario 2: The earth’s temperature increases year-over-year at a rate equal to that observed during the period 1951-2012 (a rate of 0.107°C/decade). This represents a continuation of the observed trend.

Scenario 3: The earth’s temperature increases year-over-year during the period 2014-2030 at a rate equal to that observed during the period 1977-1998—the period often identified as the second temperature rise of the 20th century. The rate of temperature increase during this period was 0.17°C/decade. This represents a scenario in which the temperature rises at the most rapid rate observed during the period often associated with an anthropogenic influence on the climate.

Figure 4 shows how the percentile rank of the observations evolves under all three scenarios from 2013 through 2030. Under Scenario 1, the observed trend (beginning in 1951) would fall below the 5th percentile of the distribution of model simulations in the year 2018 and beneath the 2.5th percentile in 2023. Under Scenario 2, the years to reach the 5th and 2.5th percentiles are 2019 and 2026, respectively. And under Scenario 3, the observed trend would fall beneath the 5th percentile of model simulated trends in the year 2020 and beneath the 2.5th percentile in 2030.
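The mechanics behind each scenario are straightforward: append future annual temperatures at the assumed rate to the observed record, refit the 1951-onward least-squares trend for each ending year, and re-rank that trend against the model distribution. A minimal sketch with a synthetic noise-free record (the real analysis uses the HadCRUT4 series and the 108 model trends):

```python
import numpy as np

def trend_per_decade(series, start_year):
    """Least-squares linear trend of annual values, expressed per decade."""
    years = np.arange(start_year, start_year + len(series))
    slope_per_year = np.polyfit(years, series, 1)[0]
    return slope_per_year * 10.0

# Synthetic "observed" record, 1951-2013, warming at exactly 0.107 °C/decade.
obs = 0.0107 * np.arange(63)

# Scenario 2: continue the 1951-2012 rate through 2030 (17 more years).
rate_per_year = 0.0107
future = obs[-1] + rate_per_year * np.arange(1, 18)
extended = np.concatenate([obs, future])

print(round(trend_per_decade(extended, 1951), 3))  # ~0.107, by construction
```

Replacing `rate_per_year` with 0 (Scenario 1) or 0.017 (Scenario 3) and re-ranking the resulting trend against the model distribution for each ending year yields the percentile trajectories plotted in Figure 4.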


Figure 4. Percent rank of the observed trend within the distribution of model simulations beginning in 1951 and ending at the year indicated on the x-axis under the application of the three scenarios of how the observed global average temperature will evolve between 2014 and 2030. The climate models are run with historical forcing from 1951 through 2006 and the RCP4.5 greenhouse gas scenario thereafter.

It is clearly not a good situation for climate models when even a sustained temperature rise equal to the fastest yet observed (Scenario 3) still leads to complete model failure within two decades.

So let’s review.

1) Examining 108 climate model runs spanning the period from 1951-2012 shows that the model-simulated trends in the global average temperature vary by a factor of three—hardly a high level of agreement as to what should have taken place among models.

2) The observed trend during the period 1951-2012 falls at the 16th percentile of the model distribution, with 18 model runs producing a smaller trend and 90 climate model runs yielding a greater trend. Not particularly strong agreement.

3) The observed trend has been sliding farther and farther away from the model median and towards ever-lower percentiles for the past 15 years. The agreement between the observed trend and the modeled trends is steadily getting worse.

4) Within the next 5 to 15 years, the long-term observed trend (beginning in 1951) will more than likely fall so far below model simulations as to be statistically recognized as not belonging to the modeled population of outcomes. This disagreement between observed trends and model trends would be complete.

So with all this information in hand, we’ll give you a moment to revisit your initial response to this question:

On a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,”  “medium,”  “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Got your final answer?

OK, let’s compare that to the IPCC’s assessment of the situation.

The IPCC gave it “very high confidence”—the highest level of confidence that they assign.

Do we hear stunned silence?

This in a nutshell sums up the IPCC process.  The facts show that the agreement between models and observations is tenuous and steadily eroding and will be statistically unacceptable in about a decade, and yet the IPCC tells us with “very high confidence” that models agree with observations, and therefore are a reliable indicator of future climate changes.

Taking the IPCC at its word is not a good idea.

[This is major revision of a post that first appeared at Watts Up With That on April 16, 2014.]

Steve H. Hanke

Last week, President Obama hosted the U.S.-Africa Leaders Summit in Washington, D.C. He welcomed over 40 African heads of state and their outsized entourages to what was a festive affair. Indeed, even the Ebola virus in West Africa failed to dampen spirits in the nation’s capital. Perhaps it was the billions of dollars in African investment, announced by America’s great private companies, that was so uplifting.

Good cheer was also observed in the advertising departments of major newspapers. Yes, many of the guest countries paid for lengthy advertisements–page turners–in the newspapers of record. That said, the substantive coverage of this gathering was thin. Neither the good, the bad, nor the ugly received much ink.

What about the good? Private business creates prosperity, and prosperity is literally good for your health. My friend, the late Peter T. Bauer, documented the benefits of private trade in his classic 1954 book West African Trade. In many subsequent studies, Lord Bauer refuted conventional wisdom with detailed case studies and sharp economic reasoning. He concluded that the only precondition for private trade and prosperity to flourish was individual freedom reinforced by security for person and property.

More recently, Ann Bernstein, a South African, makes clear that the establishment and operation of private businesses does a lot of economic good (see: The Case for Business in Developing Countries, 2010). Yes, businesses create jobs, supply goods and services, spread knowledge, pay taxes, and so forth. Alas, in the Leaders Summit reportage that covered the multi-billion dollar investments by the likes of Coca-Cola, General Electric, and Ford Motor Co., the benefits of the humdrum activity of business and trade were nowhere to be found. But, as they say, “that’s not the president’s thing.”

Let’s move from the good to the bad and the ugly, and focus on the profound misery in Sub-Saharan Africa. I measure misery with a misery index. It is the simple sum of inflation, unemployment, and the bank lending interest rate, minus year-on-year GDP per capita growth. Using this metric, the countries of Sub-Saharan Africa are ranked in the accompanying table for 2012.
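The index defined above is a simple sum minus growth. A minimal sketch (the input values are hypothetical, not drawn from the accompanying table):

```python
def misery_index(inflation, unemployment, lending_rate, gdp_pc_growth):
    """Misery index as defined in the text: inflation + unemployment +
    bank lending rate, minus year-on-year GDP-per-capita growth (percent)."""
    return inflation + unemployment + lending_rate - gdp_pc_growth

# Hypothetical country: 9% inflation, 12% unemployment,
# 15% lending rate, 3% per-capita growth.
print(misery_index(9, 12, 15, 3))  # 33 -> well into "considerable dysfunction"
```

On the scale discussed below, a score near 10 suggests a relatively free economy, while scores around 20 or above indicate considerable dysfunction.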

As I discussed in my recent Globe Asia column, index scores of around 10 or below indicate a relatively free economy, and countries with scores around 20 indicate considerable dysfunction, requiring serious structural (read: free market) reforms. The Sub-Saharan rankings show that the region goes from bad to ugly. For most of these countries to be hospitable for private businesses and the prosperity they bring, huge structural reforms will have to be undertaken.

Can the governments govern? Are they up to the basic tasks which include the maintenance of law and order, the effective management of monetary and fiscal affairs, and the provision of suitable institutions to support private activities?

For governments in Sub-Saharan Africa, the ability to produce timely economic data sheds some light on these questions. For 2013, only 6 countries reported the data required to construct a misery index (see the accompanying table). For the other 42 countries in Sub-Saharan Africa, the basic economic data required to produce a misery index are 1.5 years out of date.

So, even if there is a will to tackle the enormous structural economic problems facing most of Africa’s countries, do they have the capacity to deliver, or is everyone just pretending?

Walter Olson

[cross-posted and slightly adapted from Overlawyered]

Why armored vehicles in a Midwestern inner suburb? Why would cops wear camouflage gear against a terrain patterned by convenience stores and beauty parlors? Why are the authorities in Ferguson, Mo. so given to quasi-martial crowd control methods (such as bans on walking on the street) and, per the reporting of Riverfront Times, the firing of tear gas at people in their own yards? (“ ‘This my property!’ he shouted, prompting police to fire a tear gas canister directly at his face.”) Why would someone identifying himself as an 82nd Airborne Army veteran, observing the Ferguson police scene, comment that “We rolled lighter than that in an actual warzone”?

As most readers have reason to know by now, the town of Ferguson, Mo. outside St. Louis, numbering around 21,000 residents, is the scene of an unfolding drama that will be cited for years to come as a what-not-to-do manual for police forces. After police shot and killed an unarmed black teenager on the street, then left his body on the pavement for four hours, rioters destroyed many local stores. Since then, police have refused to disclose either the name of the cop involved or the autopsy results on young Michael Brown; have not managed to interview a key eyewitness even as he has told his story repeatedly on camera to the national press; have revealed that dashcams for police cars were in the city’s possession but never installed; have obtained restrictions on journalists, including on news-gathering overflights of the area; and more.

The dominant visual aspect of the story, however, has been the sight of overpowering police forces confronting unarmed protesters who are seen waving signs or just their hands.

If you’re new to the issue of police militarization, which Overlawyered has covered occasionally over the past few years, the key book is Radley Balko’s, discussed at this Cato forum:

Federal grants drive police militarization. In 2012, as I was able to establish in moments through an online search, St. Louis County (of which Ferguson is a part) got a Bearcat armored vehicle and other goodies this way. The practice can serve to dispose of military surplus (though I’m told the Bearcat is not military surplus, but typically purchased new) and it sometimes wins the gratitude of local governments, even if they are too strapped for cash to afford more ordinary civic supplies (and even if they are soon destined to be surprised by the high cost of maintaining gear intended for armed combat).

As to the costs, some of those are visible in Ferguson, Mo. this week.


K. William Watson

If you were looking for an example to show just how awful the legislative process is in Washington, the ongoing saga over catfish inspection is just perfect.  On its face, the 2008 law requiring the U.S. Department of Agriculture to inspect catfish facilities seems relatively benign.  Who doesn’t want safer catfish?  In reality, though, the law has nothing to do with food safety and everything to do with supporting the Southern catfish industry at everyone else’s expense.

Switching catfish inspection from the FDA (where it is now) to the USDA won’t make catfish any safer.  This isn’t really a controversial point, either.  The USDA itself has said that catfish is a low-risk food and can’t explain how its inspections will reduce that risk in any meaningful way.  The Government Accountability Office has advised Congress to repeal the program.  

The new inspection regime is slated to cost taxpayers $14 million more per year than the current one.  But there’s actually a much greater harm being done here.

Aside from the cost, the main impact of the new inspection regime—and its actual purpose—is that foreign catfish producers will be banned from the U.S. market until they can show equivalence to U.S. production standards.  Regardless of how they produce the catfish, showing equivalence will take years.  In the meantime, U.S. consumers will be left with nothing but domestic catfish at hugely inflated prices.

The good news is that a growing, bipartisan group of legislators has been trying to kill the program since its stealthy insertion into the 2008 farm bill.  Most recently, Rep. Vicky Hartzler (R-MO) announced that she will propose an amendment to the 2015 Agriculture Appropriations Bill to defund the new inspection program.  This may be the last chance to kill the program before it finally goes into effect, exposing the United States to retaliatory action for violating our trade obligations.

The amendment will probably succeed, as similar amendments have in the past, but—just as before—that may not be enough.  The program exists not because half of our illustrious legislators have been fooled into supporting it but because Sen. Thad Cochran (R-MS) has seniority on key committees.  He and a handful of other legislators from Mississippi, Louisiana, and Arkansas are the only ones pushing for this program.  

I talk more with Caleb Brown about Thad Cochran’s Crony Catfish in this Cato podcast:

Doug Bandow

On April 2, 1917, President Woodrow Wilson called for a Declaration of War against Germany. His unreasonable policies regarding submarine warfare had made America’s entry well-nigh inevitable.

When President Barack Obama first spoke to the nation about Iraq, he sounded reluctant to be the fourth straight president to intervene militarily.  However, the conditions he set on Washington’s participation guarantee a much broader and longer campaign.

President Wilson implemented a policy which ensured that war would result if Germany used the only maritime weapon it possessed capable of contesting London’s overwhelming naval advantage. Great Britain’s passenger liners carried munitions and were ordered to ram submarines which surfaced to inspect their cargoes. Germans started sinking passenger ships without notice. 

Wilson’s position was that Americans had an absolute right to book passage on belligerent vessels carrying munitions through a war zone. The position was ludicrous. In January 1917 Berlin decided to unleash unlimited submarine warfare against London and Wilson got his casus belli.

President Obama appears to be heading down the same path. In his first televised speech on Iraq, the president indicated that the airstrikes would be limited to protecting U.S. personnel and vulnerable refugees. 

This reasonable-sounding rationale offered an obvious bootstrap strategy to war by putting Americans in the path of the Islamic State of Iraq and Syria. The first airstrike occurred on artillery that threatened not Americans, but Kurds. 

Explained Pentagon spokesman Rear Adm. John Kirby, ISIS “was using this artillery to shell Kurdish forces defending Erbil, where U.S. personnel are located.”  The Islamic radicals were not attacking Americans, American operations, or even Erbil.  Rather, ISIS was threatening those protecting the city in which Americans and American facilities were located. 

But Erbil is not the only de facto sanctuary protected by U.S. arms. The president explained: “We have an embassy in Baghdad, we have a consulate in Erbil, and we have to make sure that they are not threatened.” 

He later broadened this approach:  “We intend to stay vigilant, and take action if these terrorist forces threaten our personnel or facilities anywhere in Iraq, including our consulate in Erbil and our embassy in Baghdad.”  Anywhere in Iraq.

Of course, no law of nature requires the United States to keep its people in harm’s way. On Sunday, the State Department said it had shifted some employees from Erbil and Baghdad to Basra, Iraq, and Amman, Jordan. 

With Erbil under immediate threat, the administration could bring out the rest of American personnel stationed there. However, said the president, “we’re not moving our embassy anytime soon. We’re not moving our consulate anytime soon.”

Contrast this with administration policy in Libya.  At the end of July, factional violence escalated in Tripoli. State closed the embassy and removed the staff. 

No doubt, the administration is reluctant to suspend diplomatic and military operations.  However, as I point out on National Interest online:  “entering the Iraqi conflict obviously is not necessary to protect U.S. personnel.  In this case the administration appears to be choosing war, with safeguarding Americans the excuse.”

Imagine if in October 1941 the Roosevelt administration had announced that it planned to launch airstrikes against German forces if they advanced closer to the Soviet capital of Moscow, where the U.S. embassy and staff were located. Obviously American personnel could have been evacuated. The policy would have amounted to entering the war against Berlin.

President Obama should level with the American people. If he plans to initiate aggressive military action against ISIS (or “engage in some offense,” as he put it), he should be forthright.

Instead, he apparently hopes to make U.S. participation inevitable through a time-honored bootstrap: keep Americans at risk and then intervene to save them.  Woodrow Wilson would be proud.

Nicole Kaeding

Created in 2000 as part of the Community Renewal Tax Relief Act, the federal New Markets Tax Credit (NMTC) program provides tax credits to “spur new or increased investments into operating businesses and real estate projects in low-income areas.” Two new reports, one from the Government Accountability Office (GAO) and the second from Senator Tom Coburn’s office, question the effectiveness of NMTC in accomplishing that goal.

The program provides tax credits to investors in low-income neighborhood development projects equaling 39 percent of the investment value over seven years. For example, a $1 million investment provides a $390,000 tax credit to the investor—a healthy sum. Congress has provided $40 billion in tax credits since 2003 with banks and other financial institutions receiving “nearly 40 percent of all NMTC[s]” since 2007.

But the program’s structure is flawed. According to GAO, the Treasury Department, which administers the program, does not adequately oversee it. For instance, Treasury is unable to determine whether a project has failed even after it has received seven years of tax credits. Treasury’s reporting on numerous aspects of the program is incomplete or missing entirely.

Like many federal programs originally premised on helping low-income areas, the NMTC program now spreads the subsidies widely. In fact, the program’s structure results in “virtually all of the country’s census tracts” being eligible for the program according to the Congressional Research Service.  

NMTC projects are heavily subsidized. They frequently receive additional government funding from other programs. Sixty-six percent of projects from 2010 to 2012 received funding from other federal, state, or local sources, with 33 percent receiving additional federal funds. This program is one of 23 community development tax programs and 80 discretionary economic development programs.

Projects often receive NMTCs, historic tax credits, and benefits from tax-exempt bond issuances, which adds up to heavy subsidization. For instance, a streetcar project in St. Louis received $15 million from NMTC allocations, $25 million from a federal Urban Circulator grant, a Surface Transportation Program grant, and a grant from the Congestion Mitigation & Air Quality Improvement Program. “The trolley’s total cost of $43 million is almost completely paid for through federal funding,” according to Coburn’s office.

We are used to superb investigative reports from Senator Coburn’s office profiling waste in government. His staff has done it again with the new NMTC report, which provides numerous silly and wasteful examples of NMTC projects. Many tax credits have benefited wealthy investors for low-value projects or projects that should have been funded privately or by local governments.

The NMTC program funded the expansion of the world’s largest aquarium in Atlanta. The new dolphin exhibit, with ticket prices of $65, received $40 million. Money was spent producing an original music score to accompany the performance. Project supporters publicly acknowledge that private funding was available for the aquarium’s expansion.  

The program also funded a classic car museum in Washington state, a baseball stadium in Kentucky, and an NFL Youth Center in Indianapolis built as part of the city’s Super Bowl bid. These projects are surely not the low-income development Congress envisioned when it created the program.

Unfortunately, these reports are not the first to document the NMTC program’s failings. GAO issued reports in 2004, 2007, and 2010 highlighting the program’s numerous flaws. Yet Congress continues to reauthorize the program, wasting billions of dollars.

The GAO report suggests that the Treasury Department should increase monitoring of the program. That is a good first step toward reform. Ideally, Congress should follow Senator Coburn’s proposal and let this unneeded program expire.

David Boaz

Young journalists are told, “If your mother says she loves you, check it out.” Every day journalists follow that advice, protecting us all from reading rumors and unconfirmed stories in the morning papers (though of course the increasing pressure to be first with the news is threatening this rule).

But journalists are still too quick to take the word of special interests without seeking other viewpoints, especially in stories about things the taxpayers need to spend money on. Take this morning’s story about water infrastructure on Marketplace Radio:

Following the expensive water-main break that flooded UCLA’s campus, Los Angeles officials say they’re trying to aggressively fix the city’s aging infrastructure. 

The costs are daunting. It’s going to take the city of Los Angeles billions of dollars to fix.

“They estimate some 20 million gallons of water were lost, and of course it wound up on that new floor at the Pauley Pavilion basketball arena,” says Greg DiLoreto, former president of the American Society of Civil Engineers. “We have some 240,000 water main breaks a year in this country. And the age of our water infrastructure continues to get older and older and older.”

DiLoreto says the country needs something like $84 billion in water infrastructure investments between now and 2020.

Carolyn Berndt, program director at the National League of Cities, says local governments haven’t had the access to the kind of capital they need to make these upgrades.

“The traditional method has been through the state revolving loan funds,” Berndt says. “Those numbers have been declining in recent years.”

Berndt says if cities are going to fix their leaky pipes, they’ll need more financing than just a drop in the bucket.

That’s the whole story. And maybe them’s the facts, though Chris Edwards would beg to differ. But the information comes entirely from the National League of Cities, speaking for cities that want more money, and the American Society of Civil Engineers, the people who would be called on to design and build new or improved infrastructure. Journalists shouldn’t rely entirely on the oil industry for the facts on the Keystone pipeline, or the teachers union for the facts about education, and they shouldn’t rely entirely on civil engineers or asphalt manufacturers for the facts on infrastructure.

Ilya Shapiro

What’s worse than a public policy debate that turns bitter and impolite? Well, for one, having the courts step into the marketplace of ideas to judge which side of a debate has the best “facts.”

Yet that’s what Michael Mann has invited the D.C. court system to do. In response to some scathing criticism of his methodologies and an allegation of scientific misconduct, the author of the infamous “hockey stick” graph of global warming – so named because it resembles the shape of a hockey stick, with temperatures rising drastically beginning in the 1900s – has taken the global climate change debate to a record low by suing the Competitive Enterprise Institute, National Review, and two individual commentators. The good Dr. Mann claims that some blog posts calling his work “fraudulent” and “intellectually bogus” were libelous. (For more background on the matter, see this excellent summary by NR’s editor Rich Lowry; linking to that post is partly what led Mann to target CEI.)

The D.C. trial court rejected the defendants’ motion to dismiss the lawsuit, holding that their criticism could be taken as a provably false assertion of fact because the EPA, among other bodies, has approved of Mann’s methodologies. In essence, the court seems to cite a consensus as a means of censoring a minority view. The defendants appealed to the D.C. Court of Appeals (the highest court in the District of Columbia).

Cato has now filed a brief, joined by three other think tanks, in which we urge the court to stay out of the business of refereeing scientific debates. (And if you liked our “truthiness” brief, you’ll enjoy this one.)

We argue that the First Amendment demands room for the marketplace of ideas to operate, that stifling that marketplace stifles academic and scientific progress, and that judges are ill-suited to officiate policy disputes – as history has shown time and again. The lower court clearly got it wrong here – and there are numerous cases in which courts have more judiciously treated similarly harsh assertions for what they really are: expressions of disagreement on public policy that, even if hyperbolic, are among the forms of speech most deserving of constitutional protection. 

The point in this appeal is that courts should not be coming up with new terms like “scientific fraud” to squeeze debate over issues impacting government policy into ordinary tort law. Dr. Mann is not like a corner butcher falsely accused of putting his thumb on the scale or mixing horsemeat into the ground beef. He is a vocal leader in a school of scientific thought that has had major impact on government policies.

Public figures must not be allowed to use the courts to muzzle their critics. Instead, as the U.S. Supreme Court has repeatedly taught, open public debate resolves these sorts of disputes. The court here should let that debate continue outside the judicial system.