Would Linda Greenhouse Apply the Same Interpretive Method She Uses in Halbig to Habeas Corpus Cases?
Michael F. Cannon
Yale law professor Linda Greenhouse is a former New York Times Supreme Court correspondent and now writes a legal column for the Times. Today, she writes about Halbig v. Burwell. For my latest on Halbig and similar cases, see here. Here is Greenhouse, who argues these cases are just about gutting the Patient Protection and Affordable Care Act:
To be clear, I’m not suggesting that there is anything wrong with turning to the courts to achieve what politics won’t deliver; we all know that litigation is politics by other means. (Think school desegregation. Think reproductive rights. Think, perhaps, same-sex marriage.) Nor is the creativity and determination of the Affordable Care Act’s opponents any great revelation — not after they came within a hairsbreadth of getting the law’s individual mandate thrown out on a constitutional theory that would have been laughed out of court not too many years ago.
Boy, are they ever determined.
I accept the compliment, with one proviso. The stakes in the Halbig cases are much bigger than the PPACA. The IRS is subjecting those plaintiffs to taxes from which, as Greenhouse implicitly admits, the operative language of the statute would exempt them. The plaintiffs have a right not to be taxed unless Congress expressly grants the IRS that power. A federal judge whom Greenhouse respects (Thomas Griffith) surveyed the IRS’s rationales for subjecting tens of millions of Americans to those taxes, found those rationales to be meritless, and essentially ruled that the IRS is violating the law on a massive scale. If preventing the executive branch from exceeding its lawful powers is just “politics by other means,” then so are the habeas corpus cases Greenhouse approvingly cites.
Unfortunately, when Greenhouse takes the government’s side in Halbig, it seems to be on the basis that, “Of course there are ambiguities and inconsistencies in a 900-page bill that never went to a conference committee for a final stitching together of its many provisions.” That probably is true, but it does not follow that the statute is ambiguous or inconsistent with regard to the question presented in Halbig. The government certainly has asserted such ambiguities and inconsistencies exist. Yet a closer look at the government’s arguments shows that the specific provisions it cites are all quite consistent with the language authorizing subsidies only to those who buy coverage “through an Exchange established by the State.”
Greenhouse also commits an error, as well as an inconsistency of her own. She claims the phrase “through an Exchange established by the State” appears only once in the subsidy-eligibility rules. In fact, it appears explicitly twice: one mention appeared in the first draft of those rules; Senate Democrats added the second just before final Senate passage (which all by itself suggests they knew exactly what they were doing). Moreover, that phrase appears seven more times by cross-reference. And the subsidy-eligibility rules do not use any other language – at all – to describe the Exchanges through which the law authorizes subsidies. All of this evinces a clear meaning and purpose: to offer subsidies only in states that comply with Congress’ desire that they establish Exchanges.
Greenhouse’s inconsistency occurs when she (incorrectly) claims, “the two [Halbig] judges trained a laser focus on a single section, indeed on a single word, in the massive statute…ignor[ing] the broader context, in which Congress clearly intended to make insurance affordable[.]” The habeas corpus cases with which Greenhouse apparently agrees also focused on a single phrase – one could argue, a single word – in the Constitution. Would she criticize those cases for failing to uphold the overarching purpose of the Constitution – which appears right there in the preamble – to “insure domestic Tranquility” and “provide for the common defence”?
I wrote Greenhouse to thank her for her column, which was far more respectful and gracious than many Halbig critics have been. I thought it might be fruitful to offer to debate these cases with her. She respectfully declined, but noted there is a movement afoot to bring my coauthor Jonathan Adler to New Haven for that purpose. Watch this space for developments.
Reading through this Newsweek article on the troubled relations between police and residents in Ferguson, Mo. before this month’s blowup, this passage jumped out at me:
“Despite Ferguson’s relative poverty, fines and court fees comprise the second largest source of revenue for the city, a total of $2,635,400,” according to the ArchCity Defenders report. And in 2013, the Ferguson Municipal Court issued 24,532 arrest warrants and 12,018 cases, “or about 3 warrants and 1.5 cases per household.”
My first reaction – maybe yours too – was “is that a misprint?” Three arrest warrants per household in Ferguson last year?
Now let’s stipulate that some of those warrants were written against out-of-towners, especially in matters arising from traffic offenses, tickets being a key revenue source for many municipalities in St. Louis’s North County. Yet here’s a second statistic some will find surprising: while reported property-crime rates in Ferguson have run well above the national average for years, violent-crime rates have not. After a high period that lasted through 2008, they have declined steadily to a point where last year Ferguson had about the same rate of violent crime as the nation generally.
What seems clear at this point is that Ferguson – while in some ways a nicer and safer town than some have imagined – does suffer from an unusual degree of antagonism between police and residents, an antagonism that crucially involves race (the town is an extreme outlier in its now-famous extent of black underrepresentation in elected office) and yet has other vital dimensions as well. The town gets nearly a quarter of its municipal revenue from court fees – the figure in some neighboring towns is even higher – and according to the ArchCity Defenders report quoted in Newsweek, Ferguson’s municipal court is among the very worst in the way it adds its own hassle factor to the collection of petty fines:
ArchCity Defenders, which has tracked ticketing of St. Louis area residents for five years and focused primarily on vehicle violations, started a court-watching program because so many of its clients complained of traffic prosecution wreaking havoc on their lives. Defendants routinely alleged that a racially-motivated traffic stop led to their being jailed due to inability to pay traffic fines, which in turn prompted people to “los[e] jobs and housing as a result of the incarceration.” … One resident quoted in the study said, “It’s ridiculous how these small municipalities make their lifeline off the blood of the people who drive through the area.”
Racial antagonism between residents and law enforcement is bad no matter what, but it’s worse when residents wind up interacting constantly with law enforcement because of a culture of petty fines. (If you doubt that law enforcement in Ferguson has been touched by a culture of petty fines, read this Daily Beast account of how the town sought to charge a jail inmate for property damage for bleeding on its officers’ uniforms – even though the altercation with jailers arose after the town had picked up the wrong guy on a warrant issued on a common name.)
In recent years scholars and journalists have been developing a literature on how petty fines and low-level law enforcement can snowball into life-changing consequences for persons not by nature inclined toward criminality – recent entries include On the Run: Fugitive Life in an American City by Alice Goffman (“web of warrants”) and The New Jim Crow by Michelle Alexander (“a devastating account of a legal system doing its job perfectly well”). Libertarians have participated actively in this literature, especially through the work of Radley Balko, and in June I brought together some links from Cato and Overlawyered in connection with a Cato podcast.
It seems so random and meaningless that a legal offense as minor as walking on the roadway would set in motion what was to prove the fatal confrontation between officer Darren Wilson and Michael Brown. But in the wider scheme of how Ferguson came to have its problem with policing, it may be neither random nor meaningless.
What If We Applied the IRS's Reasoning in Halbig & King to the Patriot Act or RFRA, Instead of the ACA?
Michael F. Cannon
Over at Darwin’s Fool, I posted a critique of the Fourth Circuit’s opinion in King v. Burwell. Unlike the D.C. Circuit’s ruling in Halbig v. Burwell, the Fourth Circuit held that the IRS has the authority to issue subsidies in states with federal exchanges, despite the fact that the Patient Protection and Affordable Care Act repeatedly says subsidy recipients must enroll in coverage “through an Exchange established by the State.” I reproduce here my response to a commenter to that post, as his argument parallels those of many others who have been critical of the Obamacare challenges.
My commenter objected that a plain-text reading “must include the entire text of the bill,” which “makes clear that the goal of the bill was to provide health care to all Americans who needed it and could not, at that time get it.” Moreover, “It would be illogical for Congress to establish a national health care system that is based on subsidies and then not include those subsidies in all aspects,” thus “it is entirely reasonable to interpret that one sentence to mean that Congress intended the subsidies for all participants.” My reply:
Sir, I’m afraid you have things exactly backward.
The overall context of the PPACA presents no difficulty for the plaintiffs in King v. Burwell, Halbig v. Burwell, or the other cases challenging subsidies in federal exchanges. The text of the eligibility rules for those subsidies clearly and repeatedly limits eligibility to those who enroll in coverage “through an Exchange established by the State.” There is nothing in the broader context of the statute to suggest that Congress understood the words “established by the State” to have any meaning other than their usual meaning. There isn’t even any statutory language that conflicts with that plain meaning. Jonathan Adler and I addressed (almost) all of these supposed anomalies here.
On the contrary, it is the Obama administration and its supporters for whom both the text and context present difficulties. (We can no longer call them supporters of the PPACA, given how adamantly opposed they are to implementing the law as Congress intended.) The subsidy-eligibility rules are the only place where Congress spoke directly to the question at issue. Those rules flatly contradict the administration’s position. Congress did not throw the phrase “established by the State” around loosely. They referred to exchanges “established by the State” when they meant exchanges established by the states. They referred generically to “an Exchange” when they meant either a state-established or a federal exchange. And they referred to state-established and federally established exchanges separately within a single provision, which shows they saw a difference between the two. Congress also did the exact same thing – withholding subsidies from residents of uncooperative states – in the PPACA’s other massive new entitlement program, the Medicaid expansion.
I somehow doubt you or anyone who supports the PPACA’s overarching goal would be comfortable with federal courts adopting a rule that a statute’s purpose should trump the precise means Congress chose to advance that purpose. The PATRIOT Act’s ostensible purpose was to protect Americans from terrorism. Should the president be allowed to do whatever advances that goal, even if his actions exceed the limits Congress placed on the powers created by that statute? The purpose of the Religious Freedom Restoration Act is to protect the freedoms of conscience and exercise of religion. Does that mean courts should interpret the RFRA to allow anyone with a religious objection to opt out of not just the PPACA’s individual mandate, but the statute in its entirety? Should courts allow those with religious objections to opt out of paying taxes?
The problem with your method of statutory interpretation is that Congress never legislates with only one purpose in mind. If it did, then the Occupational Safety and Health Act of 1970 would have devoted 100 percent of U.S. GDP, and conscripted every U.S. resident, to the cause of occupational safety and health – for exactly two days, at which point the Clean Air Act of 1970 would have devoted the nation’s entire stock of human, financial, and physical capital to the cause of clean air. When Congress enacted the PPACA, its purposes included subsidizing health insurance, having states establish and operate exchanges, and using the former as an inducement to the latter. Your recommendation that the executive and the judiciary should vitiate the clear, repeated, and uncontradicted terms of the statute in the name of just one of the legislative branch’s purposes would ironically frustrate Congress’ purpose, not advance it.
Lots more on King, Halbig, and other cases challenging those illegal subsidies, etc., here.
Michael F. Cannon
The Patient Protection and Affordable Care Act’s Independent Payment Advisory Board has been called a “death panel,” though I’ve argued one could just as legitimately call it a “life panel.” Either way, it is the most absurdly unconstitutional part of the PPACA.
Adler’s otherwise excellent summary neglects to mention IPAB’s most unconstitutional feature. Diane Cohen and I describe it here:
The Act requires the Secretary of Health and Human Services to implement [IPAB’s] legislative proposals without regard for congressional or presidential approval. Congress may only stop IPAB from issuing self-executing legislative proposals if three-fifths of all sworn members of Congress pass a joint resolution to dissolve IPAB during a short window in 2017. Even then, IPAB’s enabling statute dictates the terms of its own repeal, and it continues to grant IPAB the power to legislate for six months after Congress repeals it. If Congress fails to repeal IPAB through this process, then Congress can never again alter or reject IPAB’s proposals…
Congress may amend or reject IPAB proposals, subject to stringent limitations, but only from 2015 through 2019. If Congress fails to repeal IPAB in 2017, then after 2019, IPAB may legislate without any congressional interference.
Like I said, absurdly unconstitutional. But that’s ObamaCare for you.
Medicare spends more than $600 billion annually, but not all of that money is spent wisely. Yesterday, I wrote about the Washington Post’s exposé on motorized wheelchair fraud. Records suggest that 80 percent of motorized wheelchair claims are “improper,” amounting to billions in waste. Unfortunately for taxpayers, this is just the tip of the iceberg on Medicare fraud.
The Government Accountability Office estimated that Medicare’s “improper payments” amounted to $44 billion, or 8 percent of total expenditures, in 2012. GAO considers Medicare a “high risk” program for its “vulnerabilities to fraud, waste, abuse, and mismanagement.” GAO criticized Medicare for its inability to control the problem, saying that Medicare “has yet to demonstrate sustained progress in lowering the rates [of improper payments].”
Other experts believe that GAO undercounts fraud in Medicare. Malcolm Sparrow of Harvard University estimates that closer to 20 percent of claims – or $120 billion annually – are improper.
Medicare’s lax oversight of its payment system perpetuates the issue. Millions of claims come in daily and are paid without review or analysis. Scammers know that Medicare payments will not be scrutinized; the chance of getting caught is quite low. Scammers simply adapt and continue finding ways to game the system.
Just yesterday, the Department of Justice announced that an individual in Louisiana was sentenced to prison for submitting “unnecessary or never provided” claims to Medicare. The federal government’s Medicare Fraud Strike Force “has charged nearly 1,900 defendants who have collectively billed the Medicare program for more than $6 billion” since 2007, illustrating just how widespread the issue is.
Even with the threat of prosecutions, scammers know that Medicare is slow to act. According to John Warren, a former employee in Medicare’s anti-fraud office, Medicare is hesitant to deny claims: doing so risks rejecting a legitimate claim, creating a backlog and potential outrage. Even though the scam cost taxpayers billions, Warren told the Washington Post, “looking back, I think we did pretty good.”
These various forces illustrate that Medicare will not be able to control the problem of fraud without serious reform. As my colleague Chris Edwards wrote in 2010,
Efforts to combat Medicare fraud frequently fail, and they can involve a vicious cycle. Cracking down on fraud may open new opportunities for fraud. And fighting fraud often involves new layers of complex regulations that may “discourage organizational innovation and market entry, and [ensnare] innocent providers.” To get out of the vicious cycle of government health care fraud, we should move toward a consumer-driven system where patients and providers would have strong incentives to be frugal with health care dollars and crack down on waste.
Medicare might have slowed the motorized wheelchair scam, but as long as the vulnerabilities in Medicare exist, scammers will surely try to benefit.
[cross-posted from Overlawyered]
One consequence of the events in Ferguson, Mo. is that people are talking with each other across ideological lines who usually don’t, a symbol being the attention paid on both left and right to Sen. Rand Paul’s op-ed last week in Time. And one point worth discussing is how the problem of police militarization manifests itself similarly these days in local policing and in the enforcement of federal regulation.
At BuzzFeed, Evan McMorris-Santoro generously quotes me on the prospects for finding common ground on these issues. The feds’ Gibson Guitar raid — our coverage of that here — did much to raise the profile of regulatory SWAT tactics, and John Fund cited others in an April report:
Many of the raids [federal paramilitary enforcers] conduct are against harmless, often innocent, Americans who typically are accused of non-violent civil or administrative violations.
Take the case of Kenneth Wright of Stockton, Calif., who was “visited” by a SWAT team from the U.S. Department of Education in June 2011. Agents battered down the door of his home at 6 a.m., dragged him outside in his boxer shorts, and handcuffed him as they put his three children (ages 3, 7, and 11) in a police car for two hours while they searched his home. The raid was allegedly intended to uncover information on Wright’s estranged wife, Michelle, who hadn’t been living with him and was suspected of college financial-aid fraud.
The year before the raid on Wright, a SWAT team from the Food and Drug Administration raided the farm of Dan Allgyer of Lancaster, Pa. His crime was shipping unpasteurized milk across state lines to a cooperative of young women with children in Washington, D.C., called Grass Fed on the Hill. Raw milk can be sold in Pennsylvania, but it is illegal to transport it across state lines. The raid forced Allgyer to close down his business.
Fund goes on to discuss the rise of homeland-security and military-surplus programs that have contributed to the rapid proliferation of SWAT and paramilitary methods in local policing. He cites Radley Balko’s Rise of the Warrior Cop, which similarly treats both manifestations of paramilitary policing as part of the same trend.
As McMorris-Santoro notes in the BuzzFeed piece, Rep. Chris Stewart (R-Utah) has introduced a bill called the Regulatory Agency Demilitarization Act, citing such unsettling developments as a U.S. Department of Agriculture solicitation for submachine guns. Twenty-eight House Republicans have joined as sponsors, according to Ryan Lovelace at National Review.
There has already been left-right cooperation on the issue, as witness the unsuccessful Grayson-Amash amendment in June seeking to cut off the military-surplus 1033 program. As both sides come to appreciate some of the common interests at stake in keeping law enforcement as peaceful and proportionate as situations allow, there will be room for more such cooperation.
Benjamin H. Friedman
“Pleikus are like streetcars.” That’s how McGeorge Bundy, President Johnson’s national security advisor, explained what the escalation of U.S. bombing of North Vietnam in February 1965 had to do with the administration’s justification for it, which was a Vietcong attack on U.S. bases near Pleiku. Johnson had already decided to increase bombing, but he wanted a pretext that would make it seem defensive. Bundy meant that, absent the Pleiku attack, another incident would have come along shortly to justify additional bombing. A similar bait-and-switch is occurring today in U.S. Iraq policy.
On August 7, President Obama explained that we were bombing Iraq again to defend U.S. personnel in Erbil and rescue tens of thousands of Yazidi civilians stranded on Mount Sinjar (really mountains) and surrounded by murderous militiamen of the Islamic State of Iraq and the Levant (ISIL). Now, it turns out there were far fewer Yazidis on the mountain than the administration claimed; they are mostly out of harm’s way, and the threat to Erbil has ebbed. With the two goals he set for the bombing achieved, the President quickly offered a third. In the letter sent to Congress Sunday (pursuant to the War Powers Resolution, which he flouts when it’s inconvenient), he argued that U.S. bombing would help “Iraqi forces” retake the Mosul dam. Kurdish Peshmerga and Iraqi Special Forces have now done that.
Monday, the President again broadened the bombing’s objectives. The airstrikes against ISIL still protect U.S. personnel and serve humanitarian purposes, he said, but now, it seems, those are general goals that ongoing bombing serves. The President also suggested that ISIL is a security threat to the United States. Not for the first time, he said that once the new Iraqi government forms, we will “build up” Iraqi military power against ISIL. Only the speed of this slide down a slippery slope is surprising. As I recently noted, the humanitarian case for protecting the Yazidis easily becomes a case for continual bombing of ISIL and resumed counterinsurgency war in Iraq. Their danger to civilians was never limited to Sinjar. And as in Syria, the major humanitarian threat in Iraq is civil war.
Americans, the president included, need to admit that being out of Iraq potentially means letting it burn. The collapse of the fiction that U.S. forces stabilized Iraq before exiting forces us to confront the unpleasant contradictions in U.S. goals there. We want to avoid the tragic costs of U.S. forces trying to suppress Iraq’s violence. We want a stable Iraqi federal government and we want Iraqis to live peacefully. Each of those goals conflicts with the others. Even if the new Prime Minister is amenable to Sunni demands, U.S. bombing is unlikely to allow Iraqis to destroy ISIL and its allies. Large-scale violence will likely continue. Suppressing insurgency will likely require resumption of U.S. ground operations. And even that, we know, may not help much. Centrifugal forces in Iraq will remain strong, especially now that we are arming the Kurds. Vesting federal power in Prime Ministers who are inevitably Shi’ite makes continual sectarian fights likely. We should know by now that we lack the ability to stabilize Iraq at acceptable cost. We should also know that the primary threat to U.S. security in Iraq is the temptation to try to forcefully run it. Knowing these things means accepting some tragedy in Iraq.
Trade policy people spend most of their time talking about free trade between countries. But there is still some work to do on free trade within countries. Some Canadians are making a push right now, as Canadian business groups are calling for Canada’s leaders “to dismantle internal trade barriers and ensure the free movement of goods, services, capital and labour between all parts of the country.”
If that sounds odd, don’t get the wrong idea. It’s not as though Canadian provinces are imposing tariffs on each other. Rather, this is part of a more advanced notion of free trade, where you have a “single market” for goods and services. So for example, these groups complain that:
Different regulations and standards mean that manufacturers may need to adapt their machinery in order to produce containers such as dairy creamers, butter and drinkable yogurts for sale nationally across all provincial jurisdictions.
Massage therapy is regulated in some provinces but not all, meaning that a professional would have to become certified in order to be allowed to practice.
These issues are difficult to address between sovereign nations, although people are trying, most notably in the Transatlantic Trade and Investment Partnership negotiations. But within countries, it seems like this is something that could and should be dealt with. Here in the U.S., we have the famous problem of not being able to buy health insurance across state lines. The current effort in Canada seems like a valuable one; it might be useful to have a similar review of internal trade barriers here in the U.S.
Today, Education Next released its latest survey results on education policy. As with the Friedman Foundation’s survey earlier this year and previous Education Next surveys, scholarship tax credits (STCs) remain the most popular form of private educational choice. STCs garnered support from 60% of respondents compared to 50% support for universal school vouchers and only 37% support for low-income vouchers.
The Friedman Foundation’s survey found the strongest support for educational choice among younger Americans. While Americans aged 55 and up favored STCs by a 53%-33% margin, Americans aged 18-34 supported STCs by a whopping 74%-14% margin. While it’s possible that younger Americans are more likely to support educational choice because they’re more likely to have school-aged children, it could also be evidence of growing support for educational choice generally. The series of Education Next surveys provides strong support for the latter interpretation, as shown in the chart below. (Note: the 2013 Education Next survey did not ask about STCs.)
While support for STCs was only 46% in 2009, it has grown to 60% this year. Over the same time, opposition has fallen from 27% to 24%, with a low of 16% in 2012. If support among millennials merely remains constant, overall support for educational choice will continue to grow in the coming years, making the adoption and expansion of such programs increasingly likely.
[See here for Neal McCluskey’s dissection of the Education Next survey questions concerning Common Core.]
The annual Education Next survey is out, and its headliner is the Common Core. Unfortunately, it features basically the same incomplete, answer-skewing question it employed last year, and reports the same dubious finding of majority support. But even with that, the direction in which opinion has moved speaks volumes about the serious trouble the Core is in.
Just like last year, the question gives a misleading description of either the Core or national standards generically—pollsters asked a version that did not mention the Core by name—and got high rates of support. Here’s the question, with the parts that were omitted, for half the respondents, in brackets:
As you may know, in the last few years states have been deciding whether or not to use [the Common Core, which are] standards for reading and math that are the same across the states. In the states that have these standards, they will be used to hold public schools accountable for their performance. Do you support or oppose the use of these [the Common Core] standards in your state?
Like last year, the question completely ignores major federal coercion behind states’ adopting the Core, as well as the fact that the Core itself is only part of what’s necessary to “hold public schools accountable.” Tests, and consequences for performance on them, are needed for accountability, and those are driven by federally demanded testing and sanctions. Oh, and Washington selected and paid for specific Core-aligned tests. Meanwhile, generic common standards would in no way have to be used to hold schools accountable; they could just be toothless measuring devices. And how many people would come out against something as seemingly positive as holding schools “accountable”? The devil is in how, exactly, that would be done.
So what results did these formulations get? With the Core specifically mentioned, 53 percent supported it, 26 percent opposed, and 21 percent had no opinion. Among teachers, 46 percent supported the Core, 40 percent opposed, and 14 percent had no opinion. Pull the specific reference to the Core, and public support leapt to 68 percent.
Given the question, those results almost certainly overstate Core support, perhaps markedly. What they likely don’t do—though there are slight variations in the question between last year and this year—is mask the trend in Core support: it is tumbling. Favorable responses from the general public dropped from 65 percent in 2013 to 53 percent this year, while teacher support free-fell from 76 percent to 46 percent.
Unfortunately, it seems the pollsters may have been looking to nail a specific culprit for this. In addition to the main question, this year they asked respondents whether three statements about the Core were true or false, and found that most of the general public got them wrong. But, as we shall see, that may well be because some people are aware of important details and realities missed by the simple true/false options with which they were presented.
Statement #1: “The federal government requires all states to use the Common Core standards.” Only 36 percent identified this as false—the “right” answer—but only in the strictest sense is it so. Washington can’t, say, send troops to make your state use the Core, but it can and did make it nearly impossible to win part of $4 billion in Race to the Top funds—money taxpayers, who live in states, had to fork over—if a state didn’t adopt. It also made it harder to get a waiver from the hidebound, reviled rules of No Child Left Behind. Indeed, Washington can’t require states to follow NCLB either, but it can make people pay taxes and then return billions of those dollars to states only if they accept federal rules. And you hear almost no one seriously say that following NCLB isn’t required.
Statement #2: “In states using the Common Core standards, the federal government will receive detailed data on individual students’ test performance.” Only 15 percent of people thought this false, but states using the Core do not, in fact, have to send detailed student-level data to D.C. That said, Race to the Top demands heightened state-level data collection; the feds have been pushing collection of specific data for years; and maybe some people have heard enough about the NSA to reasonably think, “Sure seems likely the feds will eventually grab student data, especially since they are the ones driving much more in-depth and centralized collection.”
Statement #3: “Under the Common Core standards, states and local school districts can decide which textbooks and educational materials to use in their schools.” Some 48 percent of respondents said that this is true, and certainly states and districts can decide on their materials. But to succeed on federally pushed Common Core they’d better be Core-aligned materials. And after, say, King Lear and area models appear on Core tests for a few years, they’d better start teaching King Lear and area models. More directly, the Core recommends many specific texts. So sure, states and districts can decide. But they’d better decide on things that go with the Core.
Unfortunately, the pollsters seem to use the responses to these statements to explain away Core opposition, in much the same way Core fans have been dismissing opponents for years: portraying them as ignorant or confused. The pollsters write that the responses “may indicate that opposition to the Common Core is driven, in part, by misconceptions.”
Okay. Or perhaps the answers indicate that Common Core reality is far more complicated than simplistic survey questions can handle.
Thankfully, there is a lot more tackled in the latest Education Next survey than the Core, and most of it is not nearly as vexing. Stay tuned for coverage of that!
Yesterday’s Washington Post has an in-depth—and very depressing—piece about Medicare fraud. The piece focuses on scammers taking advantage of Medicare’s payment systems to buy unnecessary motorized wheelchairs and scooters for Medicare enrollees and stick American taxpayers with the bill.
Medicare’s payment system is designed to pay bills within 30 days of receipt, and the system receives 5 million claims daily. Because of that huge volume, Medicare reviews only a very small percentage of claims—3 percent—before payment is made. The rest are reviewed after they are processed, and even then not all are subject to oversight.
That system design invites fraud, and scammers take advantage. The Washington Post describes it as an “honor system.” The lack of upfront investigation costs taxpayers billions annually in fraudulent and wasteful payments.
But even worse than Medicare’s lax oversight is that officials knew about the fraud regarding wheelchairs and still didn’t act. According to the Washington Post,
Now, the golden age of the wheelchair scam is probably over.
But, while it lasted, the scam illuminated a critical failure point in the federal bureaucracy: Medicare’s weak defenses against fraud. The government knew how the wheelchair scheme worked in 1998. But it wasn’t until 15 years later that officials finally did enough to significantly curb the practice.
This problem was widespread. Medicare has spent $8.2 billion on power wheelchairs since 1999 for an ever-increasing proportion of enrollees. Records suggest “that at least 80 percent of claims were ‘improper.’”
Before the fraud had taken off, the chairs were rare: One study estimated that in 1994, only 1 in 9,000 beneficiaries got a new wheelchair.
By 2000, it was 1 in 479.
By 2001, it was 1 in 362.
By 2002, it was 1 in 242.
In 2012, up to 219,000 Medicare recipients received motorized wheelchairs—1 in 235 patients, an even higher rate than in 2002. In 2013, only 124,000 individuals, 1 in 400 patients, received power wheelchairs from Medicare.
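As a rough consistency check on the rates quoted above, one can back out the implied total Medicare enrollment from each year’s figures. The enrollee totals below are inferred for illustration; the post does not state them.

```python
# Back out implied Medicare enrollment from the post's figures:
# recipients x (1-in-N rate) ~= total enrollees.
for year, recipients, one_in_n in [(2012, 219_000, 235), (2013, 124_000, 400)]:
    implied_enrollees = recipients * one_in_n
    print(f"{year}: ~{implied_enrollees / 1e6:.1f} million enrollees implied")
```

Both implied totals land near 50 million, in line with actual Medicare enrollment, so the quoted rates are internally consistent.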
Medicare is slowly getting the issue under control; it is just 15 years too late.
U.S. foreign policy is a bipartisan fiasco. George W. Bush gave the American people Iraq, the gift that keeps on giving. Barack Obama is a slightly more reluctant warrior, but he is taking the country back into Iraq.
Hillary Clinton, the unannounced Democratic front-runner for 2016, supported her husband’s misbegotten attempt at nation-building in Kosovo and led the drive for war in Libya, which is violently unraveling. Most of Clinton’s potential GOP opponents share Washington’s bomb, invade, and occupy consensus.
The only exception is Kentucky Sen. Rand Paul. He stands alone advocating a foreign policy which reflects the bitter, bloody lessons of recent years.
The Islamic State of Iraq and the Levant is the latest result of Washington’s incessant and counterproductive meddling in the Middle East. But the usual suspects are calling for more intervention, more war. This time, they promise, everything will go well.
This is the Obama administration’s position in Iraq and Syria. However, Hillary Clinton has begun maneuvering for 2016 by running to Obama’s right. While she mocked the president’s mantra of “Don’t do stupid stuff,” she spent her career doing just that.
Instead of offering an alternative, leading Republicans are all in for war, more war, forever war. Senators John McCain and Lindsey Graham, naturally, have been advocating that America intervene more in both Syria and Iraq.
Most plausible Republican candidates are running toward the interventionist sideline. They blame Obama for Iraq even though it was George W. Bush who invaded that nation and failed to win Iraqi approval for a permanent U.S. garrison.
New Jersey’s Gov. Chris Christie has ostentatiously joined the most hawkish GOP elements. Former Arkansas Gov. Mike Huckabee accused President Obama of guessing wrong in Egypt, Iran, Libya, and Syria, even though the president acted on the traditional Republican script in all four cases.
Florida’s Marco Rubio advocated military action against ISIL, after supporting the usual plethora of interventionist disasters: war in Libya, more involvement in Syria, and now combat in Iraq. Texas Sen. Ted Cruz also pushes a strongly hawkish agenda, though he at least opposed bombing Syrian government forces.
Last month Texas Gov. Rick Perry attacked Paul as an isolationist while advocating that the U.S. go back to war in Iraq. Michael Goldfarb approvingly said of Perry, “you have to assume he’d shoot first and ask questions later.”
Even more misguided was Perry’s contention that “isolationism”—in contrast to the promiscuous interventionism of the last three decades, which has spawned so many vicious attacks—threatened to increase terrorism.
Underlying the torrent of Republican criticism of Paul is fear. The American people are tired of incessant war-mongering by the Washington elite. Paul rightly noted that “The country is moving in my direction.” That’s scary if your political future is tied to policies that have failed so flagrantly and frequently.
Paul is more cautious than his father, former Rep. Ron Paul. Nevertheless, Paul fils recently noted that “The let’s-intervene-and-consider-the-consequences-later crowd left us with more than 4,000 Americans dead, over two million refugees and trillions of dollars in debt.”
In citing President Ronald Reagan’s maxim of “peace through strength,” Paul noted some Republicans “have forgotten the first part of the sentence: That peace should be our goal even as we build our strength.” As I note in my latest Forbes online column, people are tired of young Americans “being treated as gambit pawns in an endless series of global chess games, to be sacrificed whenever folks in Washington dream up a grand new crusade.”
Hillary Clinton represents today’s foreign policy consensus—of constant intervention and war. Nominating someone who advocates the same failed policy would seem to be the best way for Republicans to lose in 2016. Will anyone join Rand Paul in charting a different course?
The New York Times reported Thursday:
Mr. Obama is fast becoming the past, not the future, for donors, activists and Democratic strategists. Party leaders are increasingly turning toward Mrs. Clinton and her husband, former President Bill Clinton, as Democrats face difficult races this fall in states where the president is especially unpopular, and her aides are making plain that she has no intention of running for “Obama’s third term.”
Which put me in mind of this statement famously attributed to another woman who had “the heart and stomach of a king” and the will to rule, Queen Elizabeth I:
I know the inconstancy of the English people, how they ever mislike the present government and have their eyes fixed upon that person who is next to succeed. More people adore the rising sun than the setting sun.
Which is why Elizabeth never designated a successor. Every incumbent president probably wishes he had that power.
Christopher A. Preble
It is good news that Nuri Kamal al-Maliki has decided to step down as Iraq’s prime minister. This means that, for the first time in Iraq’s modern history, there is the prospect of a peaceful transition of power, based on democratic principles and without the heavy hand of the U.S. military seeming to tip the scales to one party or group.
But don’t pop the champagne just yet. As the New York Times notes today, the new prime minister, Haider al-Abadi—like Maliki, a Shiite and member of the Dawa Party—will likely face many of the same challenges that Maliki did. Abadi will need to find a way to form an inclusive coalition government, one that protects the rights of Sunnis and appeases the Kurds’ desire for autonomy, while maintaining support from Iraqi Shiites.
This is a tall order. Many in the Shiite community that was terrorized for so long by the Sunni minority harbor deep resentment toward their former oppressors. Meanwhile, the Sunnis who held power want desperately to get it back, or at least to be able to protect themselves from reprisals. Some Sunnis are so distrustful of the central government that they’ve thrown in their lot with the Islamic State in Iraq and Syria (ISIS), whose barbarism seems almost limitless. It is not clear how Abadi will bridge this trust gap.
Americans should wish Iraq’s new leader well, but policymakers should resist the urge to try to micromanage political events in Iraq. Even the appearance of U.S. influence over Abadi will undermine his legitimacy and thus could be counterproductive. Besides, it isn’t obvious that U.S. action—and only U.S. action—is essential to turning things around in Iraq. One suspects that the most vocal critics of President Obama’s Iraq policy have broader concerns. As I explain in today’s Orange County Register:
[W]hen the hawks screech that Obama isn’t doing enough, what they really worry about is that others might actually be able to do without us, or with only minimal assistance. A newly energized Kurdish militia already appears to have reversed some of ISIS’s recent gains. Syria’s Bashar al-Assad might begin rolling back ISIS fighters there. And a new government in Baghdad might finally be able to fashion a credible military force. At a minimum, even modest political reforms—or the prospect of them—could convince more Sunni Iraqis to fight against ISIS instead of for them.
Any time Iraq is in the news, Americans are reminded about those who pushed for war there in the first place. It should provide an opportunity to revisit our broader foreign policy goals. Instead, U.S. policymakers and elites still call for action without any obvious sense that they’ve learned anything.
It’s time to reconsider U.S. military intervention broadly, as well as the specific advice of those who confidently, yet incorrectly, predicted that there would be no civil war in Iraq or Libya, or who called for assisting the Syrian opposition (some members of whom are now waging war in Iraq—with U.S. weaponry no less).
And while President Obama’s approval rating goes down, including for his handling of foreign policy, it isn’t obvious that the GOP can turn this to its advantage, as Pew’s Andrew Kohut noted earlier this year. A bipartisan consensus inside Washington pushed the Iraq war, and Democrats and Republicans continue to push foolish military interventions that the public wants no part of.
People outside the Beltway bubble are seeking a real change of course, not just more of the same. If they don’t get it from those who have held power for so long, they’ll look elsewhere.
On Monday, Cato is hosting a briefing on Capitol Hill about congressional Wikipedia editing. Over a recent 90-day period, there were over 400,000 hits on Wikipedia articles about bills pending in Congress. If congressional staff were to contribute more to those articles, the amount of information available to interested members of the public would soar. Data that we produce at Cato go into the “infoboxes” on dozens and dozens of Wikipedia articles about bills in Congress.
A popular Twitter ‘bot called @congressedits recently created a spike in interest about congressional Wikipedia editing. It puts a slight negative spin on the practice because it tracks anonymous edits coming from Hill IP addresses, which are more likely to be inappropriate. But Congress can do a lot of good in this area, so Cato intern Zach Williams built a Twitter ‘bot that shows all edits to articles about pending federal legislation. This should draw attention to the beneficial practice of informing the public before bills become law. Meet @Wikibills!
Also, as of this week, Cato data are helping to inform some 26 million visitors per year to Cornell Law’s Legal Information Institute about what Congress is doing. Thanks to Tom Bruce and Sarah Frug for adding some great content to the LII site.
Let’s say you’re interested in 18 U.S. Code § 2516, the part of the U.S. code that authorizes interception of wire, oral, or electronic communications. Searching for it online, you’ll probably reach the Cornell page for that section of the code. In the right column, a box displays “Related bills now in Congress,” linking to relevant bills in Congress.
Those hyperlinks are democratic links, letting people know what Congress is doing, so people can look into it and have their say. Does liberty automatically break out thanks to those developments? No. But public demands of all types—including for liberty and limited government—are frustrated now by the utter obscurity in which Congress acts. We’re lifting the curtain, providing the data that translate into a better-informed public, a public better equipped to get what it wants.
The path to liberty goes through transparency, and transparency is breaking out all over!
Daniel J. Ikenson
It was good of the Washington Post Editorial Board to raise questions yesterday about the veracity of the “jobs-created-by-Export-Import-Bank-policies” claims proffered by the Bank’s supporters. I just wonder whether the editorial pulled its punches where a reporter on assignment or a more inquisitive journalist would have delivered an unabashed blow to the credibility of the Bank’s primary reauthorization argument: that its termination will lead to a reduction in U.S. exports and jobs.
Kudos to the Post for raising an eyebrow at the Bank’s claims of “jobs created” or “jobs supported” by Ex-Im financing:
[W]hen it comes to jobs, well, just how rigorous are [Ex-Im’s] estimates, really? Congress ordered a study of that very question when it last reauthorized Ex-Im in 2012. In May 2013, the Government Accountability Office (GAO) produced its verdict: “Meh.”
GAO noted that Ex-Im must speak vaguely of “jobs supported,” rather than concretely of jobs created, since its methodology cannot really distinguish between new employment and retained employment. To get a number for “jobs supported,” which includes both a given firm and that firm’s suppliers, Ex-Im multiplies the dollar amount of exports it finances in each industry by a “jobs ratio” (calculated by the Bureau of Labor Statistics).
Using that approach, Ex-Im estimates an average of 6,390 jobs are “supported” by every billion dollars of exports financed. The Post is right to note the GAO’s conclusion:
These figures do not differentiate between full-time and part-time work and, crucially, provide no information about what might have happened to employment at the firms in question, or others, if the resources marshaled by Ex-Im had flowed elsewhere in the economy.
But, of course, what happens to the subsidized and nonsubsidized companies in the absence of Ex-Im is exactly what the Bank wants to conceal because it is the hyped-up specter of job loss that it relies upon to gain support for reauthorization. Realistically, in the short term, Ex-Im benefits some U.S. companies (those whose exports are subsidized) at the expense of other U.S. companies (those whose exports are not subsidized). Termination of Ex-Im will roughly reverse those fortunes by re-leveling the playing field between U.S. companies (to borrow and alter a metaphor) while freeing up the resources Ex-Im controls for more efficient uses.
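The “jobs supported” methodology described above amounts to a single multiplication. Here is a minimal sketch using Ex-Im’s reported average ratio; the $2.4 billion input is only an example figure.

```python
# Ex-Im's "jobs supported" method, per GAO's description: multiply the
# dollar amount of financed exports by a jobs ratio. 6,390 is the average
# ratio Ex-Im reports per $1 billion of financed exports.
JOBS_PER_BILLION = 6_390

def jobs_supported(exports_financed_usd: float) -> int:
    """Estimate 'jobs supported' from financed exports, Ex-Im style."""
    return round(exports_financed_usd / 1e9 * JOBS_PER_BILLION)

print(jobs_supported(2.4e9))  # prints 15336
```

Note that this method, as GAO observed, counts job functions rather than distinct workers and says nothing about what employment would have been absent the financing.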
Alas, the editorial fails to ponder this shuffling-of-resources-from-outsiders-to-insiders function that Ex-Im dutifully serves. Instead it gives an excerpt from the GAO report (in a manner slightly altered from the original) that has the effect of presenting Ex-Im in a more favorable light:
GAO found nothing fraudulent about any of this, nor do we.
Fraud? We weren’t considering fraud. We were evaluating whether the claim that Ex-Im creates or supports jobs is a credible one. Intentionally or not, exonerating Ex-Im from fraud seems to serve the purpose of rendering all other concerns about Ex-Im claims secondary by giving the false impression that the important thrust of the inquiry has been completed. Here’s the full paragraph:
GAO found nothing fraudulent about any of this, nor do we. The watchdog agency simply noted the rather crucial assumptions and limitations embedded in Ex-Im’s methodology and urged the bank to be more transparent about them—because “Congressional and public stakeholders may not fully understand what the jobs number that Ex-Im reports represents and the extent to which Ex-Im’s financing may have affected U.S. employment.” (Emphasis added)
Again, the context in which the editorial reports this GAO finding seems to change the tone and thrust of GAO’s message. Here is the text from the GAO report on this point:
Because of a lack of reporting on the assumptions and limitations of its methodology and data, Congressional and public stakeholders may not fully understand what the jobs number that Ex-Im reports represents and the extent to which Ex-Im’s financing may have affected U.S. employment.
The emphasis in the editorial’s portrayal implies that the Ex-Im data may be chock full of compelling evidence of the Bank’s importance, but without adequate explanation that evidence is too complex for Congress and public stakeholders to fully comprehend. However, the clear meaning of the GAO report is that the Ex-Im data are limited in their utility as evidence of the Bank’s importance to jobs, exports, and the economy, and that using the data for that purpose is misrepresentative and misleading.
Had the Editorial Board dug a little deeper into the GAO report, it might have found, among other limitations to the “Employment Requirements Tables” (ERTs, used by Ex-Im to project its jobs-supported figures), that “the employment data are a count of jobs, not of persons employed… Persons who hold multiple jobs show up multiple times in the employment data.” Basically, it is job “functions” that are counted, not jobs, and—despite the best efforts of organized labor—it is quite common for one worker to perform multiple job functions.
Moreover, Ex-Im’s “jobs-supported” numbers derive from ERTs that themselves derive from input-output analysis conducted by the Bureau of Economic Analysis, which roughly follows—with some customized adjustments—the approach of other Commerce Department studies on the relationship between exports and jobs. In the “Methodology and Caveats” section of the 2010 Commerce Department study “Exports Support American Jobs,” there is this disclaimer:
Averages derived from IO analysis should not be used as proxies for change. They should not be used to estimate the net change in employment that might be supported by increases or decreases in total exports, in the exports of selected products or in the exports to selected countries or regions.
Of course, that is precisely what Ex-Im proponents are doing. Those important caveats have not deterred pro-reauthorization lobbyists from warning members of Congress of how many jobs are imperiled in their states and districts in the absence of reauthorization. In most cases, the actual effect of Ex-Im authorizations on particular states and districts is so small relative to the economy and relative to overall exports that creative arithmetic features prominently.
For instance, yesterday the Chamber of Commerce tweeted:
“In Ohio alone, Ex-Im has supported 15,300 jobs and financed $2.4 billion worth of exports since 2007.” (Emphasis added)
Let’s parse: We are talking about a seven-year period here, so on an annual basis Ex-Im has financed an average of $343 million in Ohio exports, supporting 2,186 jobs. (The jobs figure is based on Ex-Im’s estimate of 6,390 jobs supported per $1 billion of exports financed, and as mentioned above, these are really job “functions.”) But Ohio has exported an average of $47 billion per year since 2007 and employs 5.4 million workers. I suppose tweeting, “In Ohio alone, Ex-Im has supported 0.04% of all jobs and financed 0.7% of all exports” wouldn’t convey the same sense of urgency to Ohio’s congressional caucus.
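The parsing above can be checked in a few lines; all inputs are figures quoted in the post.

```python
# Verify the per-year and share-of-economy arithmetic for Ohio.
exim_exports_usd = 2.4e9   # Ex-Im-financed Ohio exports since 2007
exim_jobs = 15_300         # Chamber's "jobs supported" claim
years = 7
ohio_exports_per_year = 47e9   # average annual Ohio exports
ohio_workers = 5.4e6           # Ohio employment

annual_financed = exim_exports_usd / years   # ~$343 million per year
annual_jobs = exim_jobs / years              # ~2,186 jobs per year
print(f"${annual_financed / 1e6:.0f}M financed/yr, {annual_jobs:.0f} jobs/yr")
print(f"{annual_jobs / ohio_workers:.2%} of jobs, "
      f"{annual_financed / ohio_exports_per_year:.1%} of exports")
```

The output reproduces the post’s figures: roughly $343 million and 2,186 jobs per year, or 0.04 percent of Ohio jobs and 0.7 percent of Ohio exports.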
In the end, the editorial seems to diminish the importance of its own inquiry by giving a nod to the way things are done inside the Beltway, offering a faintly exasperated, but more tongue-in-cheek “lobbyists will be lobbyists” excuse instead of speaking out against the continued use of propaganda in the policy debate.
The New York Times wonders if the libertarian moment has arrived. Unfortunately, there’ve been false starts before.
Ronald Reagan’s election seemed the harbinger of a new freedom wave. His rhetoric was great, but actual accomplishments lagged far behind.
So, too, with the 1994 Republican takeover of Congress. Alas, the GOP in office behaved little different than many Democrats.
Since then there’s been even less to celebrate—in America, at least. George W. Bush was an avid proponent of “compassionate,” big-government conservatism. Federa outlays rose faster than under his Democratic predecessor. Barack Obama has continued Uncle Sam’s bailout tradition, promoting corporate welfare, pushing through a massive “stimulus” bill for the bank accounts of federal contractors, and seizing control of what remained private in the health care system.
Over the last half century, members of both parties took a welfare state that was of modest size despite the excesses of Franklin Delano Roosevelt’s New Deal and put it on a fiscally unsustainable basis as part of the misnamed “Great Society.” Economist Lawrence Kotlikoff figures government’s total unfunded liability at around $220 trillion.
The national government has done no better with international issues. Trillions went for misnamed “foreign aid” that subsidized collectivism and autocracy. Trade liberalization faces determined resistance and often is blocked by countries that would gain great benefits from global commerce.
Even worse has been foreign policy. The joy people felt from the collapse of the Berlin Wall a quarter century ago has been forgotten.
The defense budget has turned into a new form of foreign aid for America’s populous and prosperous allies. The United States has been constantly at war, repeatedly proving that the Pentagon is no better at social engineering than is any other government agency.
Americans across the political spectrum agree that something is wrong, that the status quo is no good. But they disagree on the remedy.
However, the answer shouldn’t be that hard to discern. The definition of insanity, runs the old adage, is to keep doing the same thing while expecting different results.
By that definition, Washington policymakers are insane. The economy is slowing, people are falling behind economically, freedoms are being lost, and security fears are rising? No problem. Roll out the usual failed nostrums.
We know what the effect of these policies will be. All we have to do is look around the world and see what has happened.
It is this reality, not new personalities or generations, that is creating a libertarian moment. The 20th century killed off communism and fascism as serious alternatives. The chief competitor to these systems was not laissez-faire capitalism, as some suggested, but highly regulated and monumentally expensive welfare states. They were freer and more prosperous than their geopolitical antagonists—even a little capitalism goes a long way—but the erosion of liberty and prosperity has been constant.
Perhaps more debilitating was the corrosive effect on the foundational principles of a free society, such as independence, self-reliance, responsibility, accountability, and more. This assault in America continues with, for instance, the federal government turning health care into another massive entitlement, highlighted by pervasive regulation and income redistribution.
The obvious—and only—alternative to more government, which has failed so badly, is less government. Lower tax rates and rationalize complex tax systems. Cut the wasteful looting and pillaging that is a hallmark of today’s transfer society. Repeal unnecessary regulations and relax unnecessarily stringent ones, while making legitimate rules more market-friendly. Model liberty, prosperity, tolerance, and peace for others, allowing individual Americans going abroad to be America’s best ambassadors.
Has the libertarian moment arrived? The tyranny of the status quo, as Milton Friedman termed it, remains omnipresent and powerful. As a result, I point out in the Freeman, “the libertarian moment will not ‘arrive.’ It will have to be brought forward by those committed to a better and freer America.”
SHENYANG, CHINA—China-Korean relations are in a state of flux. The People’s Republic of China and South Korea have exchanged presidential visits. Trade statistics suggest that the PRC did not ship any oil to the North during the first quarter of the year. Chinese academics openly speak of Beijing’s irritation with its long-time ally.
The cold feelings are reciprocated. Last year North Korea’s Kim Jong-un sent an envoy to the PRC to unsuccessfully request an invitation to visit. In December Kim had his uncle, Jang Song-taek, the North’s most intimate interlocutor with China, executed.
These circumstances suggest the possibility of a significant foreign policy shift in Beijing away from the North and toward the Republic of Korea. Washington hopes for greater Chinese willingness to apply economic pressure on Pyongyang. However, the PRC remains unwilling to risk instability by undermining the Kim dynasty.
I recently visited China and held scholarly meetings amid excursions to long-missed tourist sites (such as Mao’s Mausoleum!). I also made it to Shenyang, where relations with the North are of great interest because the city is about a two-hour drive from the Yalu River.
I met one senior scholar who indicated that there was no doubt that Beijing-Pyongyang relations had changed since Kim came to power. The two nations “have a different relationship now and it is becoming colder than ever before.”
However, Jang’s execution had been “weighed too heavily by Western researchers,” he indicated. In fact, economic relations had continued. Jang’s fate was a matter of internal North Korea politics, “the result of the natural struggle for power.”
This doesn’t mean Beijing was happy about Jang’s fate. However, Jang’s ouster “is not the reason for the DPRK’s and China’s bad relations.”
Rather, the principal barrier is the North’s continued development of nuclear weapons. Kim Jong-un wants to visit China. But it is “unimaginable for Chinese officials to invite him when he’s doing nuclear tests. Impossible.”
In return, the North is unhappy over Beijing’s refusal to accommodate Kim as well as the end of oil shipments. “Also, the DPRK is quite angry over the quick development of Chinese relations with South Korea.”
This has made Pyongyang “eager to make contact with the U.S.,” an effort which so far has gone nowhere. This is why the Kim regime “took American citizens as hostages” and invited Dennis Rodman to visit, but these tactics “are not working.”
The North eventually “shifted the focal point of its foreign relations to Japan.” For the same reason, though “less importantly,” the DPRK made contact with Russia.
The PRC is quite interested in U.S.-DPRK relations and Washington’s view of Japan’s move toward Pyongyang. “One of the uniform convictions for both the U.S. and China is no nuclear weapons in the DPRK,” he emphasized.
However, in Beijing’s view the solution is not more sanctions which “everyone has been putting on the DPRK,” but revival of the Six-Party Talks. This is where agreement between the U.S. and China breaks down.
The PRC wants more negotiations, preceded by an American willingness to reduce tensions and Pyongyang’s perceived need for a nuclear arsenal. The U.S. wants the North to make concessions beforehand lest the latest round fail like the many previous efforts.
This clash reflects an even deeper disagreement over competing end states. Both Washington and Beijing oppose a nuclear North Korea. However, the U.S., in contrast to China, would welcome a DPRK collapse, even if messy, and favor reunification with the South.
As I write in China-U.S. Focus, it isn’t impossible for American and Chinese policymakers to work through their differences. However, it will require understanding the other party’s perspective and offering meaningful concessions to make the deal a positive for both parties.
Patrick J. Michaels and Paul C. "Chip" Knappenberger
The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.
When it comes to global warming, facts often take a back seat to fiction. This is especially true of proclamations coming from the White House. But who can blame them, as they are just following the lead of Big Green groups (aka “The Green Blob”), the U.S. Global Change Research Program (responsible for the U.S. National Climate Assessment Report), and, of course, the U.N.’s Intergovernmental Panel on Climate Change (IPCC).
We have documented this low regard for the facts (some might say, deception) on many occasions, but recently we have uncovered a particularly clear example where the IPCC’s ideology trumps the plain facts, giving the impression that climate models perform a lot better than they actually do. This is an important façade for the IPCC to keep up, for without the overheated climate model projections of future climate change, the issue would be a lot less politically interesting (and government money could be used for other things…or simply not extorted from us in the first place).
The IPCC is given deference when it comes to climate change opinion at all Northwest Washington, DC cocktail parties (which means also by the U.S. federal government) and by other governments around the world. We tirelessly point out why this is not a good idea. By the time you get to the end of this post, you will see that the IPCC does not seek to tell the truth—the inconvenient one being that it dramatically overstated the case for climate worry in its previous reports. Instead, it continues to obfuscate.
This exacts a cost. The IPCC is harming the public health and welfare of all mankind as it pressures governments to limit energy choice instead of seeking ways to help expand energy availability (or, one would hope, just stay out of the market).
Everyone knows that global warming (as represented by the rise in the earth’s average surface temperature) has stopped for nearly two decades. As historians of science have noted, scientists can be very creative when defending the paradigm that pays. In fact, there are already several dozen explanations.
Climate modelers are scrambling to save their precious children’s reputations—because the one thing they do not want to admit is that they exaggerate the amount that the earth’s average temperature will increase as a result of human greenhouse gas emissions. If the models are overheated, then so too are all the projected impacts that derive from the model projections—and that would be a disaster for all those pushing for regulations limiting our use of fossil fuels for energy. It’s safe to say the number of people employed in creating, legislating, lobbying for, and enforcing these regulations is huge, as in “The Green Blob.”
In the Summary for Policymakers (SPM) section of its Fifth Assessment Report, the IPCC pays brief attention to the recent divergence between model simulations and real-world observations:
“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”
But, lest you foolishly think that there may be some problem with the climate models, the IPCC clarifies:
“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”
Whew! For a minute there it seemed like the models were struggling to capture reality, but we can rest assured that over the long haul, say, since the middle of the 20th century, model simulations and observations, according to the IPCC, “agree” as to what is going on.
The IPCC references its “Box 9.2” in support of the statements quoted above.
In “Box 9.2” the IPCC helpfully places the observed trends in the context of the distribution of simulated trends from the collection of climate models it uses in its report. The highlights from Box 9.2 are reproduced below (as our Figure 1). In this Figure, the observed trend for different periods is in red and the distribution of model trends is in grey.
Figure 1. Distribution of the trend in the global average surface temperature from 114 model runs used by the IPCC (grey) and the observed temperatures as compiled by the U.K.’s Hadley Center (red). (Figure from the IPCC Fifth Assessment Report)
As can be readily seen in Panel (a), during the period 1998-2012, the observed trend lies below almost all the model trends. The IPCC describes this as:
…111 out of 114 realizations show a GMST [global mean surface temperature] trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble
This gives rise to the IPCC SPM statement (quoted above) that
“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”
Now let’s turn our attention to the period 1951-2012, Panel (c) in Figure 1.
The IPCC describes the situation depicted there as:
Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade…
This sounds like the models are doing pretty well—off by only 0.02°C/decade. And this is the basis for the IPCC SPM statement (also quoted above):
The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.
Interestingly, the IPCC doesn’t explicitly tell you how many of the 114 climate model runs show a trend greater than the observed trend for the period 1951-2012. And it is basically impossible to figure that out for yourself from Panel (c): some of the histogram bars run off the top of the chart, and the x-axis scale is so coarse that the trends are bunched into only six populated bins representing the 114 model runs. Consequently, you really can’t assess how well the models are doing or how large a difference of 0.02°C/decade over 62 years really is. You are left to take the IPCC’s word for it.
The website Climate Explorer archives and makes available the large majority of the climate model output used by the IPCC. From there, you can assess 108 (of the 114) climate model runs incorporated into the IPCC graphic—a large enough majority to quite accurately reproduce the results.
We do this in our Figure 2. However, we adjust both axes of the graph such that all the data are shown and that you can see the inconvenient details.
Figure 2. Distribution of the trend in the global average surface temperature from 108 model runs used by the IPCC (blue) and the observed temperatures as compiled by the U.K.’s Hadley Center (red) for the period 1951-2012 (the model trends are calculated from historical runs with the RCP4.5 emissions scenario results appended after 2006). This presents nearly the same data as Figure 1, Panel (c).
What we find is that 90 (of 108) model runs simulate more global warming to have taken place from 1951-2012 than actually occurred, while 18 model runs simulate less warming. Put another way, the observations fall at the 16th percentile of model runs (the 50th percentile being the median model trend value).
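The percentile-rank arithmetic here is easy to reproduce. A minimal Python sketch follows; the trend values in it are placeholders for illustration, not the actual 108 CMIP5 trends archived at Climate Explorer:

```python
# Percentile rank of an observed trend within a distribution of
# model-simulated trends. Placeholder values only; the post's numbers
# come from 108 CMIP5 runs downloaded from Climate Explorer.

def percentile_rank(observed, model_trends):
    """Percent of model trends falling strictly below the observed trend."""
    below = sum(1 for t in model_trends if t < observed)
    return 100.0 * below / len(model_trends)

# Matching the post's counts: 18 of 108 runs warm less than observed.
model_trends = [0.08] * 18 + [0.15] * 90   # degrees C per decade (illustrative)
observed = 0.107
print(percentile_rank(observed, model_trends))  # -> 16.67 (rounded)
```

With real data, each model run's 1951-2012 least-squares trend would replace the placeholder list.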
So let us ask you this question: on a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:
The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.
OK. You got your answer?
Our answer is, maybe, “medium”, and there is plenty of room for improvement.
The model range should be much tighter, indicating that the models were in better agreement with one another as to what the simulated trend should have been. As it is now, the model range during the period 1951-2012 extends from 0.07°C/decade to 0.21°C/decade (with the observed trend at 0.107°C/decade). And this is from models which were run largely with observed changes in climate forcings (such as greenhouse gas emissions, aerosol emissions, volcanoes, etc.) and over a period of time (62 years) during which short-term weather variations should average out. In other words, the models are all over the place.
Another way the agreement between model simulations and real-world observations could be improved would be if the observed trend fell closer to the center of the distribution of model projections. For instance, the agreement would be better if, say, 58 model runs produced more warming and the other 50 produced less warming.
What would lower our confidence?
The opposite set of tendencies. The model distribution could become even wider than it is currently, indicating that the models agree with each other even less than they do now as to how the earth’s surface temperature should evolve in the real world (or that natural variability was very large over the period of trend analysis). Or, the observed trend could move further from the center of the model trend distribution. This would indicate an increased mismatch between observations and models (more similar to that which has taken place over the 1998-2012 period).
Unfortunately, that’s what is happening.
Figure 3 shows at which percentile the observed trend falls for each period of time starting from 1951 and ending each year from 1980 through 2013.
Figure 3. The percentile rank of the observed trend in the global average surface temperature beginning in the year 1951 and ending in the year indicated on the x-axis within the distribution of 108 climate model simulated trends for the same period. The 50th percentile is the median trend simulated by the collection of climate models.
After peaking at the 42nd percentile (still below the median model simulation, which is the 50th percentile) during the period 1951-1998, the observed trend has steadily fallen in percentile rank, and currently (for the period 1951-2013) is at its lowest point ever (14th percentile) and is continuing to drop. Clearly, as anyone can see, this “tendency within a trend” (which Casey Stengel or Yogi Berra would doubtless have called the “trendency”) is looking bad for the models, as the level of agreement with observations is steadily decreasing with time.
In statistical parlance, if the observed trend drops beneath the 2.5th percentile, the evidence would widely be considered strong enough to indicate that the observations were not drawn from the population of model results. In other words, statisticians would describe that situation by saying that the models disagree with the observations with “very high confidence.” Some researchers use a laxer standard and would take a fall below the 5th percentile as enough to conclude that the observations are not in agreement with the models. That case could be described as “high confidence” that the models and observations disagree with one another.
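This decision rule fits in a few lines of Python. The cutoffs are the conventional one-sided 5% and 2.5% significance thresholds; the return strings are our shorthand labels, not official IPCC terminology:

```python
# Classify model-observation disagreement by where the observed trend's
# percentile rank falls in the distribution of model-simulated trends.
# Thresholds are the standard one-sided 5% and 2.5% conventions.

def disagreement_confidence(observed_percentile):
    if observed_percentile < 2.5:
        return "very high confidence of disagreement"
    if observed_percentile < 5.0:
        return "high confidence of disagreement"
    return "not yet statistically distinguishable"

# The 1951-2013 observed trend currently sits at the 14th percentile.
print(disagreement_confidence(14))  # -> not yet statistically distinguishable
```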
So, just how far away from either of these situations are we?
It all depends on how the earth’s average surface temperature evolves in the near future.
We explore three different scenarios between now and the year 2030.
Scenario 1: The earth’s average temperature during each year of the period 2014-2030 remains the same as the average temperature observed during the first 13 years of this century (2001-2013). This scenario represents a continuation of the ongoing “pause” in the rise of global temperatures.
Scenario 2: The earth’s temperature increases year-over-year at a rate equal to the rise in temperature observed during the period 1951-2012 (a rate of 0.107°C/decade). This represents a continuation of the observed trend.
Scenario 3: The earth’s temperature increases year-over-year during the period 2014-2030 at a rate equal to that observed during the period 1977-1998—the period often identified as the 2nd temperature rise of the 20th century. The rate of temperature increase during this period was 0.17°C/decade. This represents a scenario in which the temperature rises at the most rapid rate observed during the period often associated with an anthropogenic influence on the climate.
Figure 4 shows how the percentile rank of the observations evolves under all three scenarios from 2013 through 2030. Under Scenario 1, the observed trend (beginning in 1951) would fall below the 5th percentile of the distribution of model simulations in the year 2018 and beneath the 2.5th percentile in 2023. Under Scenario 2, the years to reach the 5th and 2.5th percentiles are 2019 and 2026, respectively. And under Scenario 3, the observed trend would fall beneath the 5th percentile of model simulated trends in the year 2020 and beneath the 2.5th percentile in 2030.
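The mechanics of the scenario exercise can be sketched in Python. Everything below is illustrative: the temperature series is synthetic, the real analysis uses HadCRUT4 observations and 108 CMIP5 model trends, and `extend_flat` implements only Scenario 1 (the “pause” held flat at the 2001-2013 average):

```python
# Sketch of the scenario analysis: extend an observed annual-temperature
# series under an assumed future path, then recompute the trend from 1951
# through each end year. Synthetic placeholder data, not HadCRUT4.

def ols_slope(years, temps):
    """Ordinary least-squares trend, in degrees per year."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

def extend_flat(temps, n_years):
    """Scenario 1: hold temperature at the 2001-2013 average (last 13 values)."""
    recent = sum(temps[-13:]) / 13
    return temps + [recent] * n_years

# Illustrative use: a made-up linear series for 1951-2013 (0.107 C/decade).
years = list(range(1951, 2014))
temps = [0.0107 * (y - 1951) for y in years]
future = extend_flat(temps, 17)                        # extend through 2030
trend = ols_slope(list(range(1951, 2031)), future)     # flattened long-term trend
```

For each end year, the resulting trend would then be ranked against the model-trend distribution, exactly as in the percentile counting earlier.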
Figure 4. Percent rank of the observed trend within the distribution of model simulations beginning in 1951 and ending at the year indicated on the x-axis under the application of the three scenarios of how the observed global average temperature will evolve between 2014 and 2030. The climate models are run with historical forcing from 1951 through 2006 and the RCP4.5 greenhouse gas scenario thereafter.
It is clearly not a good situation for climate models when even a sustained temperature rise equal to the fastest yet observed (Scenario 3) still leads to complete model failure within two decades.
So let’s review.
1) Examining 108 climate model runs spanning the period 1951-2012 shows that the model-simulated trends in the global average temperature vary by a factor of three—hardly a high level of agreement among models as to what should have taken place.
2) The observed trend during the period 1951-2012 falls at the 16th percentile of the model distribution, with 18 model runs producing a smaller trend and 90 climate model runs yielding a greater trend. Not particularly strong agreement.
3) The observed trend has been sliding farther and farther away from the model median and towards ever-lower percentiles for the past 15 years. The agreement between the observed trend and the modeled trends is steadily getting worse.
4) Within the next 5 to 15 years, the long-term observed trend (beginning in 1951) will more than likely fall so far below model simulations as to be statistically recognized as not belonging to the modeled population of outcomes. This disagreement between observed trends and model trends would be complete.
So with all this information in hand, we’ll give you a moment to revisit your initial response to this question:
On a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:
The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.
Got your final answer?
OK, let’s compare that to the IPCC’s assessment of the situation.
The IPCC gave it “very high confidence”—the highest level of confidence that they assign.
Do we hear stunned silence?
This in a nutshell sums up the IPCC process. The facts show that the agreement between models and observations is tenuous and steadily eroding and will be statistically unacceptable in about a decade, and yet the IPCC tells us with “very high confidence” that models agree with observations, and therefore are a reliable indicator of future climate changes.
[This is major revision of a post that first appeared at Watts Up With That on April 16, 2014.]
Steve H. Hanke
Last week, President Obama hosted the U.S.-Africa Leaders Summit in Washington, D.C. He welcomed over 40 African heads of state and their outsized entourages to what was a festive affair. Indeed, even the Ebola virus in West Africa failed to dampen spirits in the nation’s capital. Perhaps it was the billions of dollars in African investment, announced by America’s great private companies, that was so uplifting.
Good cheer was also observed in the advertising departments of major newspapers. Yes, many of the guest countries paid for lengthy advertisements–page turners–in the newspapers of record. That said, the substantive coverage of this gathering was thin. Neither the good, the bad, nor the ugly received much ink.
What about the good? Private business creates prosperity, and prosperity is literally good for your health. My friend, the late Peter T. Bauer, documented the benefits of private trade in his classic 1954 book West African Trade. In many subsequent studies, Lord Bauer refuted conventional wisdom with detailed case studies and sharp economic reasoning. He concluded that the only precondition for private trade and prosperity to flourish was individual freedom reinforced by security for person and property.
More recently, Ann Bernstein, a South African, makes clear that the establishment and operation of private businesses does a lot of economic good (see: The Case for Business in Developing Countries, 2010). Yes, businesses create jobs, supply goods and services, spread knowledge, pay taxes, and so forth. Alas, in the Leaders Summit reportage that covered the multi-billion dollar investments by the likes of Coca-Cola, General Electric, and Ford Motor Co., the benefits of the humdrum activity of business and trade were nowhere to be found. But, as they say, “that’s not the president’s thing.”
Let’s move from the good to the bad and the ugly, and focus on the profound misery in Sub-Saharan Africa. I measure misery with a misery index. It is the simple sum of inflation, unemployment, and the bank lending interest rate, minus year-on-year GDP per capita growth. Using this metric, the countries of Sub-Saharan Africa are ranked in the accompanying table for 2012.
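The misery index as defined above is a one-line calculation. A minimal sketch, with made-up figures (not actual 2012 data for any country):

```python
# Hanke's misery index: inflation + unemployment + bank lending rate,
# minus year-on-year GDP per capita growth (all in percentage points).
# The sample inputs are illustrative only.

def misery_index(inflation, unemployment, lending_rate, gdp_pc_growth):
    return inflation + unemployment + lending_rate - gdp_pc_growth

# Example: 8% inflation, 12% unemployment, 15% lending rate, 3% growth.
print(misery_index(8.0, 12.0, 15.0, 3.0))  # -> 32.0
```

A score around 10 or below would indicate a relatively free economy; around 20 or above, considerable dysfunction.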
As I discussed in my recent Globe Asia column, index scores of around 10 or below indicate a relatively free economy, and countries with scores around 20 indicate considerable dysfunction, requiring serious structural (read: free market) reforms. The Sub-Saharan rankings show that the region goes from bad to ugly. For most of these countries to be hospitable for private businesses and the prosperity they bring, huge structural reforms will have to be undertaken.
Can the governments govern? Are they up to the basic tasks which include the maintenance of law and order, the effective management of monetary and fiscal affairs, and the provision of suitable institutions to support private activities?
For governments in Sub-Saharan Africa, the ability to produce timely economic data sheds some light on these questions. For 2013, only 6 countries reported the data required to construct a misery index (see the accompanying table). For the other 42 countries in Sub-Saharan Africa, the basic economic data required to produce a misery index are 1.5 years out of date.
So, even if there is a will to tackle the enormous structural economic problems facing most of Africa’s countries, do they have the capacity to deliver, or is everyone just pretending?