Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Daniel J. Mitchell

I’ve argued that we’ll get better government if we make it smaller.

And Mark Steyn humorously observed, “our government is more expensive than any government in history – and we have nothing to show for it.”

But can these assertions be quantified?

I had an email exchange last week with a gentleman from Texas who wanted to know if I had any research on the efficiency of government. He specifically wanted to know the “ratio of federal tax dollars collected to the actual delivery of the service.”

That was a challenge. If he simply wanted examples of government waste, I could have overloaded his inbox.

But he wanted an efficiency measure, which requires apples-to-apples comparisons to see which jurisdictions are delivering the most output (government services) compared to input (how much is spent on those services).

My one example was in the field of education, where I was ashamed to report that the United States spends more per student than any other nation, yet we get depressingly mediocre results (though that shouldn’t be a surprise for anyone who has looked at this jaw-dropping chart comparing spending and educational performance).

But his query motivated me to do some research and I found an excellent 2003 study from the European Central Bank. Authored by Antonio Afonso, Ludger Schuknecht, and Vito Tanzi, the study specifically examines the degree to which governments are providing value, and at what cost.

The objective of this paper is to provide a proxy for measuring public sector performance and efficiency. To do this we will put together a number of performance indicators in the government’s core functions. …We will set these indicators in relation to the costs of achieving them. We will, hence, derive simple performance and efficiency indicators for 1990 and 2000 for the public sectors of 23 industrialised OECD countries. …As a first step, we define 7 sub-indicators of public performance. The first four look at administrative, education, health, and public infrastructure outcomes. …The three other sub-indicators reflect the “Musgravian” tasks for government. These try to measure the outcomes of the interaction with and reactions to the market process by government. Income distribution is measured by the first of these indicators. An economic stability indicator illustrates the achievement of the stabilisation objective. The third indicator tries to assess allocative efficiency by economic performance.

Here’s a flowchart showing how they measured public sector performance.

I should explain, at this point, that I’m not a total fan of the PSP measure. Most of the indicators are fine, but some rub me the wrong way.

I think an even distribution of income is a nice theoretical concept, for instance, but I don’t think it can be mandated by government (unless the goal is to make everybody poor). Economic stability also isn’t necessarily a proper goal. I’d much rather live in a society that oscillates between 7 percent growth and -2 percent growth than in one with very stable 1 percent growth.

But enough nit-picking on my part. What did the study find when looking at public sector performance?

Indicators suggest notable but not extremely large differences in public sector performance across countries… Looking at country groups, small governments (industrialised countries with public spending below 40 percent of GDP in 2000) on balance report better economic performance than big governments (public spending above 50 percent of GDP) or medium sized governments (spending between 40 and 50 percent of GDP).

These are remarkable findings. Nations with small governments achieve better outcomes. And that’s including some indicators that I don’t even think are properly defined!

But what’s most amazing is that the above findings are simply based on an examination of outputs.

So what happens if we also look at inputs to gauge the degree to which governments are delivering a lot of bang for the buck?

Public expenditure, expressed as a share of GDP, can be assumed to reflect the opportunity costs of achieving the public sector performance estimated in the previous section. …Public expenditures differ considerably across countries. Average total spending in the 1990s ranged from around 35 percent of GDP in the US to 64 percent of GDP in Sweden. The difference is mainly due to more or less extensive welfare programs. …we now compute indicators of Public Sector Efficiency (PSE). We weigh performance (as measured by the PSP indicators) by the amount of relevant public expenditure, PEX, that is used to achieve a given performance level.
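To make that weighting concrete, here is a minimal sketch in Python of the two-step arithmetic the paper describes, using made-up placeholder numbers rather than the study’s data: the seven sub-indicators are averaged into a Public Sector Performance (PSP) score, and that score is then divided by relative public spending (simplified here to total spending as a share of GDP) to get a Public Sector Efficiency (PSE) score.

```python
# Illustrative sketch of the PSP/PSE calculation described in the ECB study.
# All country names and numbers are hypothetical placeholders, not the paper's
# data; the point is only to show the arithmetic, in simplified form.

# Hypothetical sub-indicator scores, normalized so the sample average is ~1
# (the paper uses seven sub-indicators: administration, education, health,
# infrastructure, income distribution, stability, and economic performance).
sub_indicators = {
    "CountryA": [1.10, 1.05, 1.00, 1.15, 0.95, 1.00, 1.10],
    "CountryB": [0.95, 1.00, 1.05, 0.90, 1.05, 1.00, 0.95],
}

# Hypothetical total public spending as a share of GDP (standing in for the
# category-by-category expenditure weights, PEX, that the paper actually uses).
spending_share = {"CountryA": 0.35, "CountryB": 0.55}
avg_share = sum(spending_share.values()) / len(spending_share)

for country, scores in sub_indicators.items():
    psp = sum(scores) / len(scores)                # composite performance score
    pex = spending_share[country] / avg_share      # spending relative to the sample average
    pse = psp / pex                                # performance per unit of spending
    print(f"{country}: PSP = {psp:.2f}, PSE = {pse:.2f}")
```

On these made-up numbers the two performance scores come out close, but the lower-spending country’s efficiency score is markedly higher, which is the general pattern the study reports for “small” versus “big” governments.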

And what did the experts discover? Just below is a chart showing the results. There’s a lot of data, particularly if you’re looking at individual countries. To see the bottom-line results, look at the numbers circled in red.

As you can see, countries with small governments are far more productive and efficient.

We find significant differences in public sector efficiency across countries. Japan, Switzerland, Australia, the United States and Luxembourg show the best values for overall efficiency. Looking at country groups, “small” governments post the highest efficiency amongst industrialised countries. Differences are considerable as “small” governments on average post 40 percent higher scores than “big” governments. …This illustrates that the size of government may be too large in many industrialised countries, with declining marginal products being rather prevalent.

The conclusion of the study makes some very important observations.

Unsurprisingly, countries with small public sectors report the “best” economic performance… Countries with small public sectors report significantly higher PSE indicators than countries with medium-sized or big public sectors. All these findings suggest diminishing marginal products of higher public spending. …Spending in big governments could be, on average, about 35 per cent lower to attain the same public sector performance.

Though I can’t help but wonder what the results would have been if Hong Kong and Singapore also were added to the mix.

After all, I don’t consider the United States to have a “small” government. Same for Japan, Switzerland, and Australia. Those are simply nations where government isn’t as big and bloated as it is in France, Italy, Sweden, and Greece.

Imagine the results if you could measure public sector performance and public sector efficiency for the United States and other developed nations in the pre-World War I era, back when the burden of government spending averaged less than 10 percent of economic output.

I strongly suspect we got far more “bang for the buck” when government was genuinely small.

But I don’t want to make the perfect the enemy of the good, so let’s focus on the results of the study. The clear message is that big governments spend a lot more and deliver considerably less.

And that’s a very worrisome message since the burden of government is projected to increase substantially in the United States thanks to demographic changes and poorly designed entitlement programs.

So at the very least, we should do everything possible to reform those programs to keep America from becoming Greece.

And once we achieve that goal, then we can try to reduce the size and scope of government so we’re more like Hong Kong and Singapore, with only about 20 percent of GDP diverted to government.

Then, in my libertarian fantasy world, we can cut, prune, privatize, and eliminate until government once again only consumes about 10 percent of economic output.

Walter Olson

Even by his standards, Paul Krugman uses remarkably ugly and truculent language in challenging the good faith of those who take a view opposed to his on the case of King v. Burwell, just granted certiorari by the Supreme Court following a split among lower courts. Krugman claims that federal judges who rule against his own position on the case are “corrupt, willing to pervert the law to serve political masters.” Yes, that’s really what he writes – you can read it here.

A round of commentary on legal blogs this morning sheds light on whether Krugman knows what he’s talking about. 

“Once upon a time,” Krugman claims, “this lawsuit would have been literally laughed out of court.” [Citation needed, as one commenter put it] The closest Krugman comes to acknowledging that a plain-language reading of the statute runs against him is in the following:

But if you look at the specific language authorizing those subsidies, it could be taken — by an incredibly hostile reader — to say that they’re available only to Americans using state-run exchanges, not to those using the federal exchanges.

New York City lawyer and legal blogger Scott Greenfield responds:

If by “incredibly hostile reader,” Krugman means someone with a basic familiarity with the English language, then he’s right.  That’s what the law says. … There is such a thing as a “scrivener’s error,” that the guy who wrote it down made a mistake, left out a word or put in the wrong punctuation, and that the error was not substantive even though it has a disproportionate impact on meaning.  A typo is such an error.  I know typos. This was not a typo. This was not a word misspelled because the scribe erred.  This was a structural error in the law enacted. Should it be corrected? Of course, but that’s a matter for Congress.

While some ObamaCare proponents may now portray the provision as a mere slip in need of correction, as I noted at Overlawyered in July, “ObamaCare architect Jonathan Gruber had delivered remarks on multiple 2012 occasions suggesting that the lack of subsidies for federally sponsored exchanges served the function (as critics had contended it did) of politically punishing states that refuse to set up exchanges.”

Josh Blackman, meanwhile, points out something incidental yet revealing about Krugman’s column: its homespun introductory anecdote about how his parents discovered that they had been stuck with a mistaken deed to their property, fixed (“of course”) by the town clerk presumably with a few pen strokes and a smile, couldn’t possibly have happened the way Krugman said it did. Property law, much more so than statutory construction, is super-strict about these matters.

If your deed is incorrect, you cannot simply get the “town clerk” to “fix the language”. … Mistakes are enforced by courts. That’s why [everyone] should purchase title insurance. … 

So this is the exact opposite example of what Krugman would want to use to illustrate why King is “frivolous.” If courts applied property doctrine to the construction of statutes, this case would be over in 5 seconds. The government loses. 

To be sure, there may be better arguments with which to defend the Obama administration’s side of the King case. But do not look for them in Paul Krugman’s commentary, which instead seems almost designed to serve the function of pre-gaming a possible defeat in King by casting the federal judiciary itself as “corrupt” and illegitimate.  


Patrick J. Michaels and Paul C. "Chip" Knappenberger

You Ought to Have a Look is a feature from the Center for the Study of Science, posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

Leaving the election results aside (noting that they were bad for the Obama administration’s ill-founded and executive-ordered climate policies), we highlight a couple of the many interesting climate change–related tidbits scattered among the intertubes.

The first is an analysis of what was left out of the latest (final?) report from the United Nations’ Intergovernmental Panel on Climate Change (IPCC), conducted by Marcel Crok, a Dutch journalist who covers climate change with a somewhat skeptical eye.

Crok recently partnered up with climate researcher Nic Lewis to produce a major analysis of climate sensitivity—one of the key parameters in helping to understand how much influence human activities will have on the future climate—for the United Kingdom’s Global Warming Policy Foundation  (another site that you’ll surely be hearing from in these pages from time to time). Lewis and Crok found that the IPCC greatly overestimated the climate sensitivity based on a critical review of the extant scientific literature on the topic.

In a post this week on his blog (which is sometimes written in Dutch), Crok compares how the IPCC’s treatment of climate sensitivity changed from being front and center in its 2007 Fourth Assessment Report to being nearly buried in its 2014 Fifth Assessment Report.

Why the change? Because the more people look at climate sensitivity, the less it looks like the IPCC produced a very good “assessment” of it. Virtually all of the IPCC’s reports are premised on a climate sensitivity of around 3.5°C. A much more realistic value is around 2.0°C—a difference so large as to consign most of the IPCC reports to the dustbin of climate history.

In his article “IPCC Bias In Action,” Crok writes:

The IPCC was saddled with a dilemma. A lot of conclusions in the report are based on the output of models and admitting that the models’ climate sensitivity is about 40% too high was apparently too … inconvenient. So IPCC decided not to mention climate sensitivity anymore in the SPM of the Synthesis Report. It decided to give the world a prognosis which it knows is overly pessimistic. One may wonder why. Did it want to hide the good news?

We could hardly have said it better ourselves!

Another site worth clicking on from time to time is a blog called The Blackboard, run by Lucia Liljegren, an independent climate researcher and all-around busybody. We previously teamed up with Lucia to examine how the observed evolution of the global average temperature compares with the behavior expected from climate models. Our results indicated that climate models were not faring too well. While everyone knows this now, 4–5 years ago it was cutting edge, and the establishment wanted nothing to do with it. Thus, our paper was never published (it was rejected by several journals). Nevertheless, it was a scientifically robust work that was a harbinger of things to come. It is available here.

Lucia continues to keep tabs on climate model performance. Recently, she updated her analysis to check whether reports of the death of the global warming “hiatus” were accurate. Like Mark Twain before her, she found them to be greatly exaggerated. Lucia reports:

Anyway: I’m rather unconvinced ‘the hiatus’ is over. That said: it’s a bit difficult to say for sure because the definition of ‘hiatus’ is rather vague. It does seem to me we are going to need to see quite a bit of warming to overcome the doubts of those who think models are not well suited to predicting warming over periods as long as 20 or 30 years.

The reason for this is simple. It will take several years of warming to establish a significant trend since 1997. For example, if warming began in 2014, at the same rate that was established between the late 1970s and the late 1990s, the “hiatus” would extend to 24 years (using annual data) before the post-1997 trend became significant.
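As a rough illustration of that “significant trend” logic, here is a minimal sketch in Python; the resumed warming rate, noise level, and cutoff years are assumptions chosen for illustration, not Lucia’s actual numbers. It builds an annual series that stays flat from 1997 through 2013, resumes warming in 2014 at roughly the late-20th-century rate, and reports the first year in which an ordinary least-squares trend fitted from 1997 onward is statistically significant.

```python
# Rough illustration only: assumed numbers, not Lucia's analysis.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

rate = 0.017      # assumed warming rate after 2013, deg C per year (~0.17 deg C/decade)
noise_sd = 0.10   # assumed interannual noise, deg C (autocorrelation ignored)

years = np.arange(1997, 2031)
# Flat "hiatus" anomalies through 2013, then linear warming starting in 2014.
signal = np.where(years < 2014, 0.0, (years - 2013) * rate)
temps = signal + rng.normal(0.0, noise_sd, size=years.size)

# Fit an OLS trend from 1997 through each candidate end year; report the first
# end year for which the trend's p-value drops below 0.05.  This is a single
# noisy realization, so the answer shifts with the seed and the assumed noise.
for end in range(2005, 2031):
    mask = years <= end
    fit = linregress(years[mask], temps[mask])
    if fit.pvalue < 0.05:
        print(f"Trend since 1997 first significant with data through {end}: "
              f"{fit.slope * 10:.2f} deg C/decade (p = {fit.pvalue:.3f})")
        break
else:
    print("No significant trend through 2030 in this realization.")
```

Whether such an exercise lands near the 24-year figure depends entirely on the assumed warming rate and noise (and on accounting for autocorrelation, which this sketch ignores); the broader point stands that even renewed warming would take years to show up as a significant post-1997 trend.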

And finally, we’d be remiss if we didn’t point out that level-headed science/science policy researcher Roger Pielke Jr. of the University of Colorado has a new book out that should be of interest to anyone who seeks the truth behind the (lack of) identifiable linkages between extreme weather and human greenhouse gas emissions. His book is called The Rightful Place of Science: Disasters and Climate Change, and it is available from Amazon at the insanely cheap price of only $5. For more info and to see what folks have to say about it, you ought to have a look here.

Ilya Shapiro

While the Supreme Court’s decision last month not to take up the same-sex marriage cases that had accumulated over the summer surprised some (but not all), that “decision not to decide” was easily explained by the absence of a conflict in the lower courts. All of the federal courts of appeals to have ruled had held traditional state definitions of marriage to be unconstitutional. As of this past Thursday, however, that’s no longer the case.

In case you’ve been overly focused on the last few days’ other big legal news, the Cincinnati-based U.S. Court of Appeals for the Sixth Circuit ruled 2-1 in favor of the state marriage laws of Michigan, Ohio, Kentucky, and Tennessee (cases in which Cato filed several briefs). Judge Jeffrey Sutton – whose previous turn in the national spotlight came when he voted to uphold Obamacare’s individual mandate before the Supreme Court got that case – wrote a magisterial opinion rejecting the challengers’ arguments regarding the Fourteenth Amendment. While I disagree with it for reasons spelled out in Cato’s various briefs, it’s seriously the best possible legal articulation of why states should remain free to restrict marriage licenses to opposite-sex couples. Sutton’s elegant and well-crafted opinion, though ultimately wrong, puts to shame many of the opinions that nevertheless correctly struck down state marriage laws – most notably Seventh Circuit Judge Richard Posner’s, which reads like a stream-of-consciousness college-sophomore sociology paper.

And this development wasn’t surprising. The conventional wisdom was that Sutton would be the swing vote on the panel and that he would invoke Baker v. Nelson – the Supreme Court’s 1972 dismissal of a gay-marriage lawsuit “for want of a substantial federal question” – as binding lower courts’ hands notwithstanding Windsor v. United States and other legal developments. Ilya Somin makes an astute observation comparing Sutton’s approach to what he did in the Obamacare case:

Some of the flaws in Sutton’s analysis in the same-sex marriage case bear a surprising resemblance to those of his most famous previous opinion: his concurrence upholding the Obamacare individual health insurance mandate. In that case, he relied on an idiosyncratic interpretation of the distinction between facial and as-applied challenges that went against Supreme Court precedent, and was not adopted by any of the other judges who considered the issue on either the Supreme Court or the lower courts (including the many who voted to uphold the mandate on other grounds). Both opinions combine strong rhetorical statements about the humility required of lower court judges – especially when it comes to deferring to the Supreme Court – with neglect or significant misunderstanding of relevant Supreme Court precedent.

The practical question now is whether the cert-petition process will be completed quickly enough for the Court to consider these cases this term or whether it’s pushed to next fall (meaning a ruling as late as June 2016). Dale Carpenter and Josh Blackman sketch out the twists and turns we can expect, ultimately concluding that it’ll be very close, given that generally only cases the Court takes by early January make it onto the argument calendar for the same term. The challengers will be filing their cert petition(s) this very week, which makes an argument in late April still theoretically possible. 

My bet is that Chief Justice Roberts maneuvers behind the scenes in such a way that argument won’t be until next term begins in October but the ruling will come by Christmas 2015. Of course, if Justice Ginsburg retires or is otherwise unable to perform her duties at any point in this process, the case/ruling will be held up, thus setting up a presidential election in which same-sex marriage figures much more prominently than any we’ve had.
