Feed aggregator

Nicole Kaeding

ObamaCare gives states the option to expand Medicaid to cover all individuals below 138 percent of the federal poverty level, which is approximately $33,500 a year for a family of four. To encourage states to expand, the federal government agreed to fund 100 percent of expenditures for newly eligible participants through 2016, and then to slowly decrease the match, reaching 90 percent in 2020 and beyond.

Democratic and Republican governors alike are showing their penchant for “free” federal dollars by supporting expanded Medicaid rolls in their states. Republican governors—who often say they dislike ObamaCare—are in many cases pushing their legislatures to expand Medicaid to take advantage of this windfall.

GOP Governor Bill Haslam in Tennessee announced that he would support Medicaid expansion. His administration promoted the plan by saying, “Insure Tennessee will leverage the enhanced federal funding which will pay for between 90 and 100 percent of the cost and in doing so will bring federal tax dollars Tennesseans are already paying back to the state.”

To help minimize the state’s contribution and maximize federal funding, Haslam decided to expand the state’s health provider tax. Under a provider tax, a state agrees to increase Medicaid reimbursements to the providers paying the tax, such as hospitals. The higher reimbursement level draws a higher federal contribution. So state politicians and hospitals win, but federal taxpayers lose.  
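
To see how the leverage works, consider a stylized sketch of the arithmetic, here in Python. The numbers are hypothetical illustrations (not Tennessee’s actual figures), and they show the maximum-leverage case in which the tax fully funds the state share:

    # Stylized provider-tax arithmetic. Hypothetical numbers, not
    # Tennessee's actual figures; shows the maximum-leverage case.
    provider_tax = 100.0   # $100m the state collects from hospitals
    federal_match = 0.90   # federal share under expansion

    # The tax is recycled into higher Medicaid reimbursements, which
    # count as Medicaid spending eligible for the federal match.
    total_spending = provider_tax / (1 - federal_match)   # $1,000m
    federal_share = total_spending * federal_match        # $900m
    state_share = total_spending - federal_share          # $100m, covered by the tax

    print(f"Hospitals: pay ${provider_tax:.0f}m, receive ${total_spending:.0f}m")
    print(f"State's net cost: ${state_share - provider_tax:.0f}m")
    print(f"Federal taxpayers' cost: ${federal_share:.0f}m")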

In this case, luckily, Tennessee’s legislature rejected Haslam’s expansion attempt.

Governor Mike Pence in Indiana is pushing for Medicaid expansion, dubbing the program “Healthy IN Plan 2.0.” Governor Pence received an “A” in our Fiscal Policy Report Card on America’s Governors last year for his tax-and-spending restraint. But his decision to expand Medicaid to include working-age, able-bodied, childless adults sends a very different signal.

Governors Pence and Haslam aren’t the only two Republicans wanting to expand Medicaid. Wyoming Governor Matt Mead said that by rejecting Medicaid expansion the legislature is “rejecting $120 million dollars meant for Wyoming.” Governor Gary Herbert of Utah has said that Medicaid expansion allows “Utah [to bring] taxpayer dollars back to our state.” More than 10 Republican governors support Medicaid expansion, many using this same sort of rhetoric.

These governors justify their actions by claiming that expansion will return tax dollars to their states. But Medicaid spending is not a fixed pie. The more that each state expands its program, the harder the nation’s taxpayers are hit, because federal expenditures are determined by the matching percentage. It is not true that if Tennessee doesn’t expand, the money goes to California. Instead, if Tennessee doesn’t expand, the money simply isn’t spent, and taxpayers keep more of their earnings.

As I’ve discussed before, expanding Medicaid is also a risky proposition for state budgets, which some Republican governors do not seem to understand. They boast of their fiscal conservatism, but their recent actions on Medicaid expansion impose a larger burden on the nation’s taxpayers.

Michael F. Cannon

The plaintiffs in King v. Burwell claim the Patient Protection and Affordable Care Act only offers premium subsidies, as the statute says, “through an Exchange established by the State.” Members of Congress who voted for the PPACA – most recently Sen. Bob Casey (D-PA) and former Sen. Ben Nelson (D-NE) – now swear it was never their intent to condition Exchange subsidies on state cooperation.

Ironically, Casey’s and Nelson’s decision to wade into the King debate demonstrates why, when a statute is clear, courts traditionally assign no weight to what members of Congress claim they intended a law to say – especially if, as here, those claims come after a clear provision has proven problematic. While he claims he never intended to condition subsidies on states establishing Exchanges, Casey repeatedly voted to condition Exchange subsidies on state cooperation, has misrepresented what Congress intended the PPACA to do, and continues to misrepresent the PPACA on his Senate web site. Nelson’s claims about what Congress intended should likewise be taken with a grain of salt. In an unguarded moment in 2013, Nelson admitted that in 2009 he paid no attention to “details” such as whether the PPACA authorized subsidies in federal Exchanges.

All Sides Agree: Casey Supported Conditional Exchange Subsidies

Casey and Nelson exchanged correspondence exactly one day before amicus briefs supporting the government were due to be filed with the Supreme Court. Casey asked for Nelson’s recollection of whether, in 2009, Nelson or anyone else suggested the PPACA’s subsidies would only be available in states that established Exchanges. Perhaps more than anyone, Nelson was a pivotal figure in the debate over the PPACA. Not only did he insist on state-based Exchanges rather than a national Exchange run by the federal government, his was the deciding vote that enabled the bill to pass the Senate and become law – and he withheld his vote until his demands were met.

In his letter to Nelson, Casey discussed conditioning Exchange subsidies on state cooperation as if it were a foreign concept:

The plaintiffs in King argue that the law was intentionally designed to deny tax credits to people in states with federally facilitated exchanges in order to “induce” states into operating their own Exchanges…

[A]ccording to the King plaintiffs…residents of a state which did not operate its own Exchange would lose access to premium tax credits intended to ensure that those residents could afford health insurance.

I do not recall you – or any other member of the House or Senate – insisting upon such a structure. I would appreciate any clarification you can offer regarding your role in shaping this important law, as I believe it will be beneficial to the American public and the justices themselves.

Yet conditioning Exchange subsidies on state cooperation is hardly a foreign concept to Casey. In 2009, he supported and voted for another health care bill that even the Obama administration and congressional Democrats acknowledge conditioned Exchange subsidies on state cooperation. That bill was S. 1697, reported by the Senate’s Health, Education, Labor, and Pensions Committee.

As Jonathan Adler and I explained in a brief we filed before the district court in King, every Democrat on the Senate’s HELP Committee voted in favor of S. 1697, and therefore in favor of conditioning Exchange subsidies on state cooperation:

  1. Sen. Jeff Bingaman (D-NM)
  2. Sen. Sherrod Brown (D-OH)
  3. Sen. Bob Casey (D-PA)
  4. Sen. Chris Dodd (D-CT)
  5. Sen. Kay Hagan (D-NC)
  6. Sen. Tom Harkin (D-IA)
  7. Sen. Jeff Merkley (D-OR)
  8. Sen. Barbara Mikulski (D-MD)
  9. Sen. Patty Murray (D-WA)
  10. Sen. Jack Reed (D-RI)
  11. Sen. Bernie Sanders (I-VT)
  12. Sen. Sheldon Whitehouse (D-RI)

In Casey’s words, then, he himself voted for a bill that “included the threat” that residents of uncooperative states “would lose access to premium…credits intended to ensure that those residents could afford health insurance.”

If you were a judge, what would you consider a better indicator of what Casey actually intended: what he repeatedly voted to enact, or what he now says to influence the courts after the clear language he voted to enact has proved problematic?

Casey Continues To Claim “If You Like The Coverage You Have, You Can Keep It”

Before you answer, keep in mind that Casey, like dozens of other Democratic senators and representatives, claimed the PPACA lets everybody keep the health plans they had before the bill became law.

To this day, Casey still claims on his official Senate web site, “If you like the coverage you have, you can keep it; the government will not force you to change it.” This tells us either (A) Casey does not understand the legislation he voted to enact into law, or (B) he is willing to dissemble to advance his policy preferences. Personally, I think it’s (A).

Either way, if you were a judge, which would you think more accurately represents what Casey intended: what he repeatedly voted to enact, or what he now says to influence the courts after what he voted to enact has proved problematic?

Nelson’s Letter: The Irrelevant “Bombshell”

Nelson’s response to Casey received most of the attention, however. Here’s the key excerpt:

In either scenario—a state or federal exchange—our purpose was clear: to provide states the tools necessary to deliver affordable healthcare to their citizens, and clearly the subsidies are a critical component of that effort regardless of which exchange type a state chooses. I always believed that tax credits should be available in all 50 states regardless of who built the exchange. The final law also reflects that belief as well.

Doug Kendall, who filed the amicus brief on behalf of members of Congress who enacted the PPACA—the brief in which the Casey-Nelson letters first appeared—calls Nelson’s comments “a bit of a bombshell.” Not so much. Kendall and others don’t seem to understand, and therefore misrepresent, the plaintiffs’ argument about how Nelson fits into the story.

Kendall, the congressional amici, and the Huffington Post’s Jonathan Cohn accuse the petitioners of claiming that the language conditioning subsidies on state cooperation was inserted into the PPACA at Nelson’s request. That is simply not true. Neither the plaintiffs, nor Adler, nor I have ever claimed that Nelson even suggested, much less insisted, that the PPACA condition Exchange subsidies on state cooperation. (Nor did he need to: this feature appeared in the HELP bill, the Finance Committee’s bill, and the PPACA with or without his suggestion.)

What the plaintiffs, Adler, and I actually argue is that Nelson matters because, and only because, (1) he insisted on state-run Exchanges rather than a single, nationwide Exchange, and (2) his vote was crucial to get a bill through the Senate, and, since Congress cannot force states to implement federal programs, (3) the PPACA’s drafters therefore needed some way to induce states to establish Exchanges – a part of the Act that has turned out to be very costly, difficult, and fraught with political peril. So what did the PPACA’s drafters do? They adopted a wacky, harebrained, far-out idea that has been proposed only on numerous occasions by multiple Congresses as well as Presidents Johnson, Nixon, Clinton (more than twice), and Bush. They created an incentive for states to implement federal priorities by conditioning federal benefits on state cooperation.

Kendall, Cohn, and the congressional amici either (A) don’t understand the plaintiffs’ arguments, or (B) are deliberately misrepresenting them. Personally, I think it’s (A). Kendall writes, “The petitioners’ assertion that Sen. Nelson insisted on conditional tax subsidies is itself pure speculation without a shred of support in the record.” That criticism misses the mark, because Kendall’s straw man is pure invention, without a shred of support in the briefs.

The real significance of Nelson’s response to Casey is not how much Nelson says, but how little. He says he wanted subsidies in both state-established and federally established Exchanges. Okay, that’s great. But it doesn’t tell us what Nelson intended, because it offers no insight into what he voted to enact into law. In his last sentence, he opines that the PPACA reflects his preference for subsidies in federal Exchanges. But that is the very question in dispute in King, and Nelson offers no evidence to help us resolve what the law says.

In 2013, Nelson Admitted He Didn’t Know What The Bill Said

Nor does Nelson deserve to be considered an authority on what the PPACA says about subsidies in federal Exchanges, because in 2013 he admitted he didn’t pay attention.

Thanks to a handful of intrepid researchers and the North Dakota Department of Insurance, I happened to find audio of a press conference Nelson gave in January 2013, upon being appointed CEO of the insurance-regulators lobby in Washington, D.C. As luck would have it, a reporter asked him about subsidies in federal Exchanges. Here’s part one of the press conference, but the relevant part is part two (at 8:20). When discussing negotiations over the crafting of the PPACA, Nelson described federal Exchanges as an afterthought, and admitted he voted for the bill without paying any attention to whether it actually authorized subsidies in federal Exchanges:

NELSON (8:20): This is Ben Nelson again. I might add that I don’t know what everyone who voted for the health care act was thinking. But I can tell you that the discussions for having state-based Exchanges as an option for the states was to assure that the states would have that role. There was never really any intent for the federal government to assume any role, except by default or at the request of the states. So there was no way that the federal government was to have an initiative in this direction. It was more of a backup, fallback situation, should the states decide that they didn’t want, or a state decided it didn’t want [to] establish a state-based Exchange, but preferred to do it with a federal FFE, as it’s called, or join together on a multi-state basis for an Exchange. As many options as possible, but the goal was to be as far away from any kind of federal preemption as possible.

REPORTER (9:32): Was there, was the discussion along the lines of, we don’t want the subsidies to go through the federal Exchange? I’m sure you’re aware of that issue. Was that part of the thinking? And why did they go the way, they write the law the way [inaudible].

NELSON (9:43): I don’t think it ever got quite that specific, at least not during any time that I was involved in discussions. But when the discussion about an Exchange occurred, it was always, once [we] got over the hurdle of saying yes, states first, federal second, that it was clear that there was no real pre-emption, we didn’t get into, unfortunately, the details, because now they have to be fleshed out. So there are some levels of uncertainty.

I know of no evidence that calls into question Nelson’s claim that he always wanted subsidies in federal Exchanges. But these comments tell us (1) he never insisted on subsidies in federal Exchanges, (2) he never inquired about subsidies in federal Exchanges, (3) he never paid attention to whether the bill authorized subsidies in federal Exchanges, and (4) he voted for the PPACA anyway. In an unguarded moment, Nelson admitted that whether the PPACA authorized subsidies in federal Exchanges just wasn’t that important to him. He admitted the issue now “ha[s] to be fleshed out” because there is “uncertainty” about whether he had indeed voted to authorize subsidies in federal Exchanges. In other words, if we want to know what Nelson actually intended to become law, asking Ben Nelson is not an option. Our only option is to read the bill.

Again, if you were a judge, which would you think more accurately captures Nelson’s intent: the clear language he voted to enact, or what he now says to influence the courts after the clear language he voted to enact – which he admitted was not a high priority for him – has proved problematic?

Conclusion

The King plaintiffs’ case does not depend on Casey or Nelson or any PPACA supporters consciously knowing that they were voting to condition Exchange subsidies on state cooperation. The fact that PPACA supporters voted to enact clear statutory language conditioning subsidies on states establishing Exchanges is enough. It means that statutory language is both the law and Congress’ intent – even if no members of Congress actually harbored such thoughts. The facts that some of them repeatedly voted to condition Exchange subsidies on state cooperation, and that others were indifferent, merely strengthen the plaintiffs’ case.

Jason Bedrick

School choice is safe in the Granite State.

This morning, the New Hampshire Senate Education Committee voted 3-2 along party lines against SB 204, a bill to repeal New Hampshire’s trailblazing scholarship tax credit law, which was the first in the nation to include homeschoolers. The repeal bill is likely to be rejected in a vote of the entire state senate later this week. Thus far, no state has legislatively repealed a school choice law.

Last month, the Cato Institute released a short documentary on the fight for school choice in the “Live Free or Die” state, titled “Live Free and Learn: Scholarship Tax Credits in New Hampshire.” You can watch the film here.


David Boaz

In The Libertarian Mind, which is officially published today, I have a chapter titled “What Big Government Is All About” that aspires to be applied Public Choice analysis. Much of it relates to what I think Jonathan Rauch first called “the parasite economy,” the part of the economy that involves getting through government what you can’t get through voluntary market processes. Reason.com has just published an excerpt from that chapter, with a few recent examples added, such as these all-too-typical stories:

Lobbying never stops. One week in December, Kaiser Health News reported that “growth opportunities from the federal government have increasingly come not from war but from healing.” That is, “business purchases by the Department of Health and Human Services have doubled to $21 billion annually in the past decade.” And who showed up to collect some of the largesse? Well, General Dynamics was having trouble making ends meet with defense contracting, so suddenly it managed to become the largest contractor to Medicare and Medicaid. “For traditional defense contractors,” wrote Kaiser Health News, “health care isn’t the new oil. It’s the new F-35 fighter.”

Of course, the old F-35, despite a decade or more of running behind schedule and over budget, is still doing pretty well. That same week Congress passed the $1.1 trillion “Cromnibus” spending bill, including $479 million for four F-35 fighters from Lockheed that even the Pentagon didn’t want. The Wall Street Journal reported that the bill “sparked a lobbying frenzy from individual companies, industries and other special interests”—pretty much the same language you could have read in earlier stories about Porkulus and Obamacare. Every provision in the bill—from the $94 billion in Pentagon contracting to $120 million for the Chicago subway to an Obamacare exemption for Blue Cross and Blue Shield—has a lobbyist or several shepherding it through the secretive process.

And I also talked about the parasite economy on John Stossel’s television show last Friday night.

For more on the parasite economy, and everything else you wanted to know about libertarianism, read The Libertarian Mind.

Chris Edwards

Are federal government employees “public servants,” who faithfully execute the laws and aim at the broad public good? Do they match the Progressive-era ideal of neutral and selfless experts free of political bias?

Perhaps many federal workers do. But a story in GovExec suggests that other motivations are also in play:

Lawmakers from both parties addressing unionized federal employees at a conference Monday pledged more support and respect for the civil service, but the union itself promised to “whoop [the] ass” of Congress if it stood in the group’s way.

At its annual legislative gathering, the American Federation of Government Employees vowed to combat any congressional efforts to shrink the federal workforce, cut pay and benefits or weaken unions. While Congress has succeeded in slashing agency rolls and freezing pay, union leaders said, those actions have better positioned the union to prevent similar efforts in the future.

Every time the “fools” in Congress try to hurt the federal workforce, said AFGE National President J. David Cox in a passionate address to his members, “We get bigger. We get stronger and we fight harder.”

He added: “We are a force to be reckoned with and we are a force that will open up the biggest can of whoop ass on anyone” who votes against the union’s interests…

The union chief called on each of those [AFGE] members to help push its agenda. “I’m begging you,” he said, “I’m pleading with you: Get in the fight.”

Maybe it is no surprise that federal workers and their unions fight for themselves. But can we count on federal legislators to stand up for taxpayers and citizens and check union power? Maybe not:

Lawmakers who addressed the attendees emphasized they would not be alone in that struggle; the lawmakers promised to bring the message of the positive and essential work feds do back to their colleagues and into the public sphere.

Freshman Congressman Don Beyer, D-Va., promised to be a “champion” for federal employees, adding the “critical question” for the workforce is how to change the perception of civil servants. He pledged to mention the positive work feds do in every speech he gives, suggested creating public service announcements highlighting federal employees and even proposed someone write a movie in which an “anonymous civil servant” is the hero.

“We have a great, great story to tell,” Beyer said of the federal workforce. “We just have to find every possible way to tell it.”

Beyer and his fellow Virginian, Republican Rep. Rob Wittman, agreed one crucial step to demonstrating that support is to repeal the across-the-board budget cuts known as sequestration.

I think we can see who is the real boss in Washington today. Beyer and Wittman have figured it out, and they are standing firmly in line. AFGE chief David Cox barked the orders: “If I meet one more politician who tells me we need to tighten our belts, I’m going to take my belt off and I’m going to whoop his ass.”

Jim Harper

You’ve probably heard some version of the joke about the chemist, the physicist, and the economist stranded on a desert island. With a can of food but nothing to open it, the first two set to work on ingenious technical methods of accessing nutrition. The economist declares his solution: “Assume the existence of a can opener!”…

There are parallels to this in some U.S. state regulators’ approaches to Bitcoin. Beginning with the New York Department of Financial Services six months ago, regulators have put proposals forward without articulating how their ideas would protect Bitcoin users. “Assume the existence of public interest benefits!” they seem to be saying.

When it issued its “BitLicense” proposal last August, the New York DFS claimed that “[e]xtensive research and analysis” had “made clear the need for a new and comprehensive set of regulations that address the novel aspects and risks of virtual currency.” Yet, six months later, despite promises to do so under New York’s Freedom of Information Law, the NYDFS has not released that analysis, even while it has published a new “BitLicense” draft.

Yesterday, I filed comments with the Conference of State Bank Supervisors (CSBS) regarding its draft regulatory framework for digital currencies such as Bitcoin. CSBS is to be congratulated for taking a more methodical approach than New York: it has issued an outline and called for discussion before coming up with regulatory language. But the CSBS proposal lacks an articulation of how it addresses unique challenges in the digital currency space. It simply contains a large batch of regulations similar to what is already found in the financial services world.

The European Banking Authority took a welcome tack in its report on Bitcoin last July, submitting itself to the rigor of risk management. The EBA sought to identify the risks that digital currency poses to consumers, merchants, and a variety of other interests. The EBA report did not apply risk management as well as it could have, and it came to unduly conservative results in terms of integrating Bitcoin into the European financial services system, but the small number of genuine risks it identified can form the basis of discussion about solutions.

It is very hard to assess a batch of solutions that are put forward without an articulation of the problems they are intended to solve, which is unfortunately what the draft model regulatory framework does. Hopefully, future iterations of CSBS’s work will include the needed articulation.

My comment spends some time on the assumption that state-by-state licensing for financial services providers has benefits that justify its large costs. “The public interest benefits of licensing obviously do not increase arithmetically with each additional license,” I wrote, assuming correctly, I hope, how the world works. CSBS is in a unique position to streamline the licensing regime.

I also caution CSBS about the assumption that making our finances “transparent to law enforcement” is an appropriate regulator’s role. The Supreme Court has been moving away from the Fourth Amendment doctrine under which some 1970s cases appeared to take constitutional protection away from our financial activities. Financial services regulators should take the side of law-abiding consumers on the question of financial privacy.

In all sincerity, the CSBS effort is a fair one, and I think the organization is in a good position to steer its members away from technology-specific regulation of the sort we saw from New York. I look forward to continued, deeper discussion with CSBS and to more work that integrates Bitcoin into the U.S. financial services system.

Nicole Kaeding

The Department of Energy (DOE) is admitting that it failed. Last week, it announced that it will stop development of FutureGen 2.0, a federally financed, coal-fired power plant in Illinois. FutureGen and its successor, FutureGen 2.0, wasted millions of tax dollars and were beset by multiple delays and cost overruns.

FutureGen was one of many federal energy projects experimenting with so-called “clean coal” technology. FutureGen sought to demonstrate the technical capabilities of carbon capture and sequestration (CCS). CCS attempts to capture the carbon dioxide emitted by coal-fired power plants and store it underground, preventing an increase in atmospheric carbon dioxide.

FutureGen was launched in 2003 by the George W. Bush administration as a public-private partnership to demonstrate CCS at a site chosen in Illinois. Costs would be shared among the federal government and 12 private energy companies. The project’s estimated cost grew from $1 billion to $1.8 billion by 2008, when it was cancelled due to cost overruns.

In 2010 the Obama administration revived the project using stimulus funding. The new project, FutureGen 2.0, was allotted $1 billion from the federal government, with private investors expected to provide additional funding.

The project was plagued with problems. Estimated costs grew quickly, rising from $1.3 billion to $1.65 billion. The Congressional Research Service cited “ongoing issues with project development, [and] lack of incentives for investment from the private sector.” Private investors were unwilling to invest in the project. As of August 2014, the FutureGen Alliance had yet to raise the $650 million in private debt and equity needed. There were additional concerns about the legality of a $1-per-month surcharge to subsidize the project that would have been added to the electricity bills of all Illinois residents. Late last year, the Illinois Supreme Court agreed to hear the case.

Now, DOE announced that it will suspend funding for the project. Energy Secretary Ernest Moniz told reporters, “frankly, the project has got a bunch of challenges remaining,” which is a startling admission from the administration. DOE said that the project failed to make enough progress to keep it alive and would not meet a September 30, 2015 deadline for spending the remaining stimulus funds that it had been allotted.

The project spent $202.5 million of the $1 billion before being cancelled. Together, the two iterations of FutureGen ended up costing taxpayers $378 million.

A related issue is that proposed regulations from the Obama administration would effectively require CCS for all new coal-fired power plants in the United States. But with the failure of FutureGen, the federal government has not demonstrated that CCS works. DOE’s other CCS demonstration project, in Mississippi, is experiencing delays as well. Some experts question whether CCS is even technologically possible at a cost-effective price.

FutureGen and FutureGen 2.0 are part of a long list of DOE failures. Repeating mistakes made during the Bush administration, DOE reopened FutureGen, which put millions more tax dollars at risk. DOE should stop trying to centrally plan technological advances, and instead let entrepreneurs experiment and the market guide the nation’s energy progress.

Patrick J. Michaels

Matt Drudge has been riveting eyeballs by highlighting a London Telegraph piece calling the “fiddling” of raw temperature histories “the biggest science scandal ever.” The fact of the matter is some of the adjustments that have been tacked onto some temperature records are pretty alarming—but what do they really mean?

One of the more egregious ones has been the adjustment of the long-running record from Central Park (NYC). Basically it’s been flat for over a hundred years, but the National Climatic Data Center, which generates its own global temperature history, has stuck a warming trend of several degrees into the last quarter-century of it, simply because it doesn’t agree with some other stations (which, of course, don’t happen to be in the stable urban core of Manhattan).

Internationally, Cato scholar Ross McKitrick and yours truly documented a propensity for many African and South American stations to report warming that really isn’t happening. Some of those records, notably in Paraguay and central South America, have been massively altered.

At any rate, Chris Booker, author of the Telegraph article, isn’t the first person to be alarmed at what has been done to some of the temperature records. Others, such as Richard Muller of UC-Berkeley, along with Steven Mosher, were so concerned that they re-invented the surface temperature history from scratch. In doing so, both found that the “adjustments” really don’t make all that much difference when compared with the larger universe of data. While this result has been documented by the scientific organization Berkeley Earth, it has yet to appear in one of the big climate journals, a sign that it might be having a rough time in the review process.

That’s quite different from what was found in 2012 by two Greek hydrologists, E. Steirou and D. Koutsoyiannis, who analyzed a sample of weather stations used to calculate global temperature and found the adjustments were responsible for about half of the observed warming, when compared to the raw data. Their work was presented at the annual meeting of the European Geosciences Union, but has not been published subsequently in the scientific literature. That’s not necessarily a knock on it, given the acrimonious nature of climate science, but it seems that if it were an extremely robust, definitive paper, it would have seen the light of day somewhere.

But, before you cry “science scandal” based upon the Greek results, note that one of the adjustments that has been commonly applied—accounting for the biases introduced by the time of day at which the high and low temperatures for the previous 24 hours are recorded—does indeed induce warming into most records, and that change is scientifically justified.

In sum, I’d hold fire about “the biggest science scandal ever.” The facts are:

  • when the global temperature records were reworked by people as skeptical as yours truly, nothing much emerged;
  • some of the data have been mangled, like the Central Park record—and there are serious problems over some land areas in the Southern Hemisphere; and
  • some of the adjustments for measurement biases introduce scientifically defensible warming trends.

David Boaz

I’m delighted to announce that my new book, The Libertarian Mind: A Manifesto for Freedom, goes on sale today. Published by Simon & Schuster, it should be available at all fine bookstores and online book services.

I’ve tried to write a book for several audiences: for libertarians who want to deepen their understanding of libertarian ideas; for people who want to give friends and family a comprehensive but readable introduction; and for the millions of Americans who hold fiscally responsible, socially tolerant views and are looking for a political perspective that makes sense. 

The Libertarian Mind covers the intellectual history of classical liberal and libertarian ideas, along with such key themes as individualism, individual rights, pluralism, spontaneous order, law, civil society, and the market process. There’s a chapter of applied public choice (“What Big Government Is All About”), and a chapter on contemporary policy issues. I write about restoring economic growth, inequality, poverty, health care, entitlements, education, the environment, foreign policy, and civil liberties, along with such current hot topics as libertarian views of Bush and Obama; America’s libertarian heritage as described by leading political scientists; American distrust of government; overcriminalization; and cronyism, lobbying, the parasite economy, and the wealth of Washington.

The publisher is delighted to have this blurb from Senator Rand Paul: 

“They say the libertarian moment has arrived. If you want to understand and be part of that moment, read David Boaz’s The Libertarian Mind where you’ll be drawn into the ‘eternal struggle of liberty vs. power,’ where you’ll learn that libertarianism presumes that you were born free and not a subject of the state. The Libertarian Mind belongs on every freedom-lover’s bookshelf.”

I am just as happy to have high praise from legal scholar Richard Epstein:

“In an age in which the end of big government is used by politicians as a pretext for bigger, and worse, government, it is refreshing to find a readable and informative account of the basic principles of libertarian thought written by someone steeped in all aspects of the tradition. David Boaz’s Libertarian Mind unites history, philosophy, economics and law—spiced with just the right anecdotes—to bring alive a vital tradition of American political thought that deserves to be honored today in deed as well as in word.” 

Find more endorsements here from such distinguished folks as Nobel laureate Vernon Smith, John Stossel, Peter Thiel, P. J. O’Rourke, Whole Foods founder John Mackey, and author Jonathan Rauch. And please: buy the book. Then like it on Facebook, retweet it from https://twitter.com/David_Boaz, blog it, buy more copies for your friends.

 

Chris Edwards

In recent decades, the Democratic Party has moved far to the left on economic policy. I have discussed the leftward shift on tax policy, which was illustrated once again by President Obama’s generally awful proposals in his new budget (see here, here, and here).

What about regulations? Consider the following statement by President Jimmy Carter on his signing a landmark railroad deregulation bill in 1980. Have you ever heard President Obama express such views or push for similar sorts of legislation?

Today I take great pleasure in signing the Staggers Rail Act of 1980. This legislation builds on the railroad deregulation proposal I sent to Congress in March 1979. It is vital to the railroad industry and to all Americans who depend upon rail services.

By stripping away needless and costly regulation in favor of marketplace forces wherever possible, this act will help assure a strong and healthy future for our Nation’s railroads and the men and women who work for them. It will benefit shippers throughout the country by encouraging railroads to improve their equipment and better tailor their service to shipper needs. America’s consumers will benefit, for rather than face the prospect of continuing deterioration of rail freight service, consumers can be assured of improved railroads delivering their goods with dispatch …

This act is the capstone of my efforts over the past 4 years to get the Federal Government off the backs of private industry by removing needless, burdensome regulation which benefits no one and harms us all. We have deregulated the airlines, a step that restored competitive forces to the airline industry and allowed new, innovative services. We have freed the trucking industry from archaic and inflationary regulations, an action that will allow the startup of new companies, encourage price competition, and improve service. We have deregulated financial institutions, permitting banks to pay interest on checking accounts and higher interest to small savers and eliminating many restrictions on savings institutions loans.

Where regulations cannot be eliminated, we have established a program to reform the way they are produced and reviewed. By Executive order, we have mandated regulators to carefully and publicly analyze the costs of major proposals. We have required that interested members of the public be given more opportunity to participate in the regulatory process. We have established a sunset review program for major new regulations and cut Federal paperwork by 15 percent. We created a Regulatory Council, which is eliminating inconsistent regulations and encouraging innovative regulatory techniques saving hundreds of millions of dollars while still meeting important statutory goals. And Congress recently passed the Regulatory Flexibility Act, which converts into law my administration’s program requiring Federal agencies to work to eliminate unnecessary regulatory burdens on small business. I am hopeful for congressional action on my broad regulatory reform proposal now pending.

Today these efforts continue with deregulation of the railroad industry and mark the past 4 years as a time in which the Congress and the executive branch stepped forward together in the most significant and successful deregulation program in our Nation’s history. We have secured the most fundamental restructuring of the relationship between industry and government since the time of the New Deal.

In recent decades the problems of the railroad industry have become severe. Its 1979 rate of return on net investment was 2.7 percent, as compared to over 10 percent for comparable industries. We have seen a number of major railroad bankruptcies and the continuing expenditure of billions of Federal dollars to keep railroads running. Service and equipment have deteriorated. A key reason for this state of affairs has been overregulation by the Federal Government. At the heart of this legislation is freeing the railroad industry and its customers from such excessive control.

Steve H. Hanke

In my misery index, I calculate a ranking for all countries where suitable data exist. The index — a simple sum of inflation, lending rates, and unemployment rates, minus year-on-year per capita GDP growth — yields a ranking of 108 countries. The table below is a sub-index of the Latin American countries included in the world misery index.

A higher score in the misery index means that the country, and its constituents, are more miserable. Indeed, this is a table where you do not want to be first.
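
For concreteness, here is a minimal sketch in Python of the arithmetic and the ranking; the input figures are illustrative placeholders, not the data behind the actual table:

    # Misery index: inflation + lending rate + unemployment,
    # minus year-on-year per capita GDP growth (all in percent).
    # Placeholder figures for three hypothetical countries.
    data = {
        "Country A": (60.0, 17.0, 7.5, -2.0),
        "Country B": (10.0, 12.0, 6.0, 1.0),
        "Country C": (4.0, 8.0, 4.5, 3.0),
    }

    def misery(inflation, lending_rate, unemployment, growth):
        return inflation + lending_rate + unemployment - growth

    # Rank from most to least miserable; first place is the one to avoid.
    ranked = sorted(data.items(), key=lambda kv: misery(*kv[1]), reverse=True)
    for rank, (country, vals) in enumerate(ranked, start=1):
        print(f"{rank}. {country}: {misery(*vals):.1f}")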

Venezuela and Argentina, armed with aggressive socialist policies, end up the most miserable in the region. On the other hand, Panama, El Salvador, and Ecuador score the best on the misery index for Latin America. Panama, with roughly one tenth the misery index score of Venezuela, has used the USD as legal tender since 1904. Ecuador (since 2000) and El Salvador (since 2001) are also dollarized—they use the greenback—and it is clear that the embrace of the USD trumps all other economic policies.

The lesson to be learned is clear: the tactics which socialist governments like Venezuela and Argentina employ yield miserable results, whereas dollarization is associated with less misery.

Chris Edwards

President Obama proposed an expansive spending plan for highways, transit, and other infrastructure in his 2016 budget.

Here are some of the problems with the president’s approach:

  • Misguided Funding Source. The president proposes hitting U.S. corporations with a special 14 percent tax on their accumulated foreign earnings to raise $238 billion. This proposal is likely going nowhere in Congress, and it is bad economic policy. The Obama administration seems to view the foreign operations of U.S. companies as an enemy to be punished, but in fact foreign business operations generally complement U.S. production and help boost U.S. exports.
  • Increases Spending. The Obama six-year transportation spending plan of $478 billion is an increase of $126 billion above current spending levels. Instead of increasing federal spending on highways and transit, we should be cutting it, as it is generally less efficient than state-funded spending. To close the Highway Trust Fund (HTF) gap, we should cut highway and transit spending to balance it with current HTF revenues, which mainly come from gas and diesel taxes.
  • Increases Central Control. The Obama plan would increase federal subsidies for freight rail and “would require development of state and regional freight transportation plans,” according to this description. But freight rail has been a great American success story since it was deregulated by President Jimmy Carter in 1980. So let’s not reverse course and start increasing federal intervention again. Let’s let Union Pacific and the other railroads make their own “plans”; we don’t need government-mandated plans.
  • Undermines User Pays. For reasons of both fairness and efficiency, it is a good idea to fund infrastructure with charges on infrastructure users. In recent decades, the HTF has moved away from the original user-pays model of gas taxes funding highways, as funds have been diverted to mass transit, bicycle paths, and other activities. Obama would move further away from user pays, both with his corporate tax plan and with his proposed replacement of the HTF with a broader Transportation Trust Fund.
  • Expands Mass Transit Subsidies. The Obama plan would greatly increase spending on urban bus and rail systems. But there is no proper federal role in providing subsidies for such local activities. Indeed, federal transit subsidies distort efficient local decision making—the lure of “free” federal dollars induces local politicians to make unwise and wasteful choices. Arlington, Virginia’s million-dollar bus stop is a good example.

For background on the transportation battle heating up in Congress, see my articles here and here. And see the writings of Randal O’Toole, Robert Poole, Emily Goff, and Ken Orski.

And you can check out the writings of Robert Puentes of Brookings, who joined me on C-Span today to discuss these issues.

David Boaz

Both Jeb Bush and Rand Paul are talking about broadening the appeal of the Republican Party as they move toward presidential candidacies. Both say Republicans must be able to compete with younger voters and people of all racial backgrounds. Both have talked about the failure of welfare-state programs to eliminate urban poverty. But they don’t always agree. Bush sticks with the aggressive foreign policy that came to be associated with his brother’s presidency, while Paul wants a less interventionist approach. Bush calls for “smarter, effective government” rather than smaller government, while Paul believes that smaller government would be smarter. Perhaps most notoriously, Bush strongly endorses the Common Core educational standards, building on George W. Bush’s policy of greater federal control of schooling.

Meanwhile, Paul promises to bring in new audiences by talking about foreign policy and civil liberties. As Robert Costa reported from an Iowa rally this weekend:

Turning to civil liberties, where he has quarreled with hawkish Republicans, Paul chastised the National Security Agency for its surveillance tactics. “It’s none of their damn business what you do on your phone,” he said. 

“Got to love it,” said Joey Gallagher, 22, a community organizer with stud earrings, as he nursed a honey-pilsner beer. “It’s a breath of fresh air.”

But the rest of Paul’s nascent stump speech signaled that as much as he wants to target his father’s lingering network, he is eager to be more than a long-shot ideologue.

Paul cited two liberals, Sen. Bernard Sanders (I-Vt.) and Rep. Alan Grayson (D-Fla.), during his Friday remarks and said he agrees with outgoing Attorney General Eric H. Holder Jr. on curbing federal property seizures and softening sentencing laws for nonviolent drug offenders — all a nod to his efforts to cast himself as a viable national candidate who can build bipartisan relationships and expand his party’s political reach.

“Putting a kid in jail for 55 years for selling marijuana is obscene,” Paul said.

Alan Grayson and Eric Holder? That’s pushing the Republican comfort zone. And what was the reception?

“Just look at who’s here,” said David Fischer, a former Iowa GOP official, as he surveyed the crowd at Paul’s gathering Friday at a Des Moines winery. “He is actually bringing women, college students and people who are not white into the Republican Party.”

That’s his plan. It’s a real departure from the unsuccessful candidacies of old, hawkish John McCain and old, stuffy Mitt Romney. It just might create the kind of excitement that Kennedy, Reagan, and Obama once brought to presidential politics. The question is whether those new audiences will show up for Republican caucuses and primaries to join the small-government Republicans likely to be Paul’s base.

Patrick J. Michaels and Paul C. "Chip" Knappenberger

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

Some folks are just slow to get it.

There is no way on God’s greening earth that international negotiators are going to achieve the emissions reductions that climate models tell them are necessary to keep the rise in the planet’s average surface temperature to less than 2°C above the pre-industrial value.

At the United Nations climate meeting held in Cancun back in 2010, after kicking around the idea for several years, negotiators foolishly adopted 2°C as the level associated with a “dangerous interference” with the climate system—what everyone agreed to try to avoid way back in 1992 under the Rio Treaty.

Bad idea—it won’t happen. Even the folks at the U.N. are starting to realize it.

According to an article in The Guardian this week titled “Paris Climate Summit: Missing Global Warming Target ‘Would Not Be Failure’”:

EU climate chief and UN’s top climate official both play down expectations that international climate talk pledges will help hit 2C target… “2C is an objective,” Miguel Arias Canete, the EU climate chief, said. “If we have an ongoing process you can not say it is a failure if the mitigation commitments do not reach 2C.”

…In Brussels, meanwhile, the UN top climate official, Christiana Figueres, was similarly downplaying expectations, telling reporters the pledges made in the run-up to the Paris meeting later this year will “not get us onto the 2°C pathway”.

There’s so much backpedaling and spinning going on that you get motion sick reading the article. While we certainly did see this coming, we didn’t expect the admissions to start this early.

There is actually one way in which the global temperature rise may stay beneath 2°C, at least for the next century or so, even under the U.N.’s mid-range greenhouse gas emissions scenarios—that is, if the earth’s climate sensitivity is a lot lower than that currently adopted by the U.N. and characteristic of their ensemble of climate models.

Most climate negotiators and climate activists are loath to admit this might be the case, as that would be the end of first-class travel to various hot spots to (yet again) euchre our monies.

But scientific evidence continues to mount, maybe even enough to send them to the back of the plane. Instead of the earth’s equilibrium climate sensitivity—how much the earth’s average surface temperature will ultimately rise given a doubling of the atmospheric concentration of carbon dioxide—being somewhere around 3°C (as the U.N. has determined), the latest scientific research is starting to center around a value of 2°C, with strong arguments for an even lower value (closer to 1.6°C).

If the earth’s response to greenhouse gas emissions is to warm at only one-half to two-thirds the rate negotiators currently assume, it means it is going to take longer (and require more emissions) to ultimately reach a temperature rise of 2.0°C. This buys the negotiators more time—something about which negotiators, conference organizers, and associated service industries should be ecstatic!
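
To see that arithmetic, here is a minimal sketch assuming the standard logarithmic approximation for CO2-induced warming (ΔT = ECS × log2(C/C0)) and a 280 ppm pre-industrial baseline. It ignores ocean lag, non-CO2 forcings, and carbon-cycle feedbacks, so it is an illustration of the sensitivity effect, not a projection:

    import math

    C0 = 280.0  # assumed pre-industrial CO2 concentration, ppm

    def co2_level_at(target_warming_c, ecs_c):
        """CO2 concentration at which equilibrium warming hits the target,
        under the logarithmic approximation dT = ECS * log2(C / C0)."""
        return C0 * 2 ** (target_warming_c / ecs_c)

    for ecs in (3.0, 2.0, 1.6):  # U.N. value vs. lower recent estimates
        print(f"ECS = {ecs}°C: 2°C of warming reached at ~{co2_level_at(2.0, ecs):.0f} ppm")
    # Lower sensitivity -> more concentration (and emissions) headroom
    # before the 2°C threshold is crossed.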

As an example of how much time would be bought by a lower climate sensitivity, researchers Joeri Rogelj, Malte Meinshausen, Jan Sedlacek, and Reto Knutti—whose work has been telling us all along that 2°C was basically impossible—ran their models incorporating some recent estimates of low sensitivity in place of the IPCC’s preferred sensitivity assessment.

What they found is available via the open access literature and worth taking a look at. Figure 1 sums it up. Using a lower climate sensitivity (pink) reduces the end-of-the-century temperature rise (left) and increases the quantity of carbon dioxide emissions before reaching various temperature thresholds (right).

 

Figure 1. Temperature evolution over time (left) and in association with cumulative carbon dioxide emissions (right) from models run under different assumptions for the equilibrium climate sensitivity. The black and dark blue colors use the U.N. values; green represents a higher climate sensitivity; and pink a lower one (from Rogelj et al., 2014). Of note is a massive gaffe: the implicit attribution of all warming since 1861 to greenhouse gases. The fact is that the sharp warming of the early 20th century occurred before significant emissions.

This information will surely be useful in Paris this December for those countries that seek less stringent emissions reduction timetables.

And finally, it was announced this week that the homework the U.N. handed each country at the end of last year’s climate meeting in Lima—for every nation to publish a target and timetable for reducing carbon dioxide emissions, along with a plan for achieving it—is due in draft form on February 13th.

It ought to be interesting to see what grades everyone receives on their assignments.

No doubt the U.N. officials have already seen some that were handed in early, which is why they announced that they are going to be grading on a curve. What should have been “F”s (for failure to meet the 2° target) will now surely be given “A”s (for effort). As everyone knows, grade inflation is a worldwide phenomenon.

Paul C. "Chip" Knappenberger and Patrick J. Michaels

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature or of a more technical nature. These items may not have received the media attention that they deserved or have been misinterpreted in the popular press.

Posted Wednesday in the Washington Post’s new online “Energy and Environment” section is a piece titled “No, Climate Models Aren’t Exaggerating Global Warming.” That’s a pretty “out there” headline considering all the evidence to the contrary.

We summed up much of the contrary evidence in a presentation at the annual meeting of the American Geophysical Union last December.  The take-home message—that climate models were on the verge of failure (basically the opposite of the Post headline)—is self-evident in Figure 1, adapted from our presentation.

Figure 1. Comparison of observed trends (colored circles according to legend) with the climate model trends (black circles) for periods from 10 to 64 years in length. All trends end with data from the year 2014 (adapted from Michaels and Knappenberger, 2014).

The figure shows (with colored circles) the value of the trend in observed global average surface temperatures in lengths ranging from 10 to 64 years and in all cases ending in 2014 (the so-called “warmest year on record”). Also included in the figure (black circles) is the average trend in surface temperatures produced by a collection of climate models for the same intervals. For example, for the period 1951–2014 (the leftmost points in the chart, representing a trend length of 64 years) the trend in the observations is 0.11°C per decade and the average model projected trend is 0.15°C per decade. During the most recent 10-year period (2005–2014, rightmost points in the chart), the observed trend is 0.01°C per decade while the model trend is 0.21°C per decade.
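
For readers who want to see the mechanics behind such a chart, the computation is just an ordinary least-squares trend (in °C per decade) over windows of varying length, all ending in the same final year. A minimal sketch in Python, using synthetic placeholder data rather than the actual observed or modeled series:

    import numpy as np

    # Synthetic stand-in for an annual temperature series, 1951-2014.
    rng = np.random.default_rng(0)
    years = np.arange(1951, 2015)
    temps = 0.011 * (years - 1951) + rng.normal(0.0, 0.1, years.size)

    def trend_per_decade(yrs, vals):
        slope_per_year = np.polyfit(yrs, vals, 1)[0]  # least-squares slope
        return 10.0 * slope_per_year

    for length in (10, 25, 40, 64):  # window lengths, all ending in 2014
        window = years >= 2015 - length
        print(f"{length}-yr trend: {trend_per_decade(years[window], temps[window]):+.2f} °C/decade")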

Clearly, over the period during which human-caused greenhouse gases have risen the fastest (basically any period ending in 2014), climate models consistently predict that the earth’s surface temperature should have warmed much faster than it did.

Given our results (and plenty like them), we were left scratching our heads over the headline of the Post article. The article was reporting on the results of a paper that was published last week in the British journal Nature by researchers Jochem Marotzke and Piers Forster, and pretty much accepted uncritically what Marotzke and Forster concluded.

The “accepted uncritically” part is what undermines the article’s credibility.

Figure 2 shows the results that Marotzke and Forster got when comparing observed trends to model-predicted trends of lengths of 15 years for all periods beginning from 1900 (i.e., 1900–1914) to 1998 (1998–2012). Marotzke and Forster report that overall, the model trends only depart “randomly” from the observed trends—in other words, the model results aren’t biased.

But this claim doesn’t appear to hold water.

During the first half of the record, when greenhouse gas emissions were relatively small and had little effect on the climate, the differences between the modeled and observed temperatures seem pretty well distributed between positive and negative—a sign that natural variability was the driving force behind the differences. However, starting in about 1960, the model trends show few negative departures from the observations (i.e., they rarely predict less warming than was observed). This was partially due to the models’ mishandling of two large volcanic eruptions (Mt. Agung in 1963 and Mt. Pinatubo in 1991), but it is also quite possibly a result of the models producing too much warming in response to increasing greenhouse gas emissions. It seems that the models work better, over the short term (say, 15 years), when they are not being forced by a changing composition of the atmosphere.

Figure 2. Comparison of observed trends (black) with the climate model average trend (red) for periods of 15 years in length during the period 1900–2012 (adapted from Marotzke and Forster, 2015).

But the models appear to do worse over long periods.

Figure 3 is also from the Marotzke and Forster paper. It shows the same thing as Figure 2, but this time for 62-year-long trends. In this case, the models show a clear and persistent inability to capture the observed warming that took place during the first half of the 20th century (the models predict less warming than was observed over all 62-year periods beginning from 1900 through 1930). Then, after closely matching the observed trend for a while, the models began to overpredict the warming beginning in about 1940 and progressively do worse up through the present. In fact, the worst model performance, in terms of predicting too much warming, occurs during the period 1951–2012 (the last period examined).

Figure 3. Comparison of observed trends (black) with the climate model average trend (red) for periods of 62 years in length during the period 1900–2012 (adapted from Marotzke and Forster, 2015).

This behavior indicates that over longer periods (say, 62 years), the models exhibit systematic errors and do not adequately explain the observed evolution of the earth’s surface temperature since the beginning of the 20th century.

At least that is how we see it.

But perhaps we are seeing it wrong.

Over at the website ClimateAudit.org, Nic Lewis (of low climate sensitivity fame) has taken a very detailed (and complicated) look at the statistical methodology used by Marotzke and Forster to arrive at their results. He does not speak in glowing terms of what he found:

“I was slightly taken aback by the paper, as I would have expected either one of the authors or a peer reviewer to have spotted the major flaws in its methodology.”

“Some statistical flaws are self evident. Marotzke’s analysis treats the 75 model runs as being independent, but they are not.”

“However, there is an even more fundamental problem with Marotzke’s methodology: its logic is circular.”

Lewis ultimately concluded:

“The paper is methodologically unsound and provides spurious results. No useful, valid inferences can be drawn from it. I believe that the authors should withdraw the paper.”

Not good.
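To see why Lewis’s independence point matters statistically, here is a minimal simulation of our own devising (not his analysis); the ensemble size matches the paper’s 75 runs, but the correlation value is an invented assumption:

    # Illustration (not Lewis's analysis) of why treating correlated
    # ensemble members as independent understates uncertainty. The
    # correlation value rho is an invented assumption.
    import numpy as np

    rng = np.random.default_rng(0)
    n_runs, rho, n_trials = 75, 0.5, 10000

    # Correlated runs: a shared component plus run-specific noise.
    shared = rng.standard_normal(n_trials)
    noise = rng.standard_normal((n_trials, n_runs))
    runs = np.sqrt(rho) * shared[:, None] + np.sqrt(1 - rho) * noise

    print("actual std of ensemble mean:", runs.mean(axis=1).std())
    print("naive std assuming independence:", 1 / np.sqrt(n_runs))
    # With rho = 0.5 the true spread is several times the naive estimate,
    # so error bars built on the independence assumption are far too narrow.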

So basically, no matter how* you look at the Marotzke and Forster results—taking them at face value or throwing them out altogether—their conclusions are not well supported. And certainly, they are no savior for poorly performing climate models.


*No matter how, that is, unless you are looking to make it appear that the growing difference between climate model projections and real-world temperature change poses no threat to aggressive measures aimed at mitigating climate change.

References:

Marotzke, J., and P. Forster, 2015. “Forcing, Feedback and Internal Variability in Global Temperature Trends.” Nature, 517, 565–570, doi:10.1038/nature14117.

Michaels, P.J., and P.C. Knappenberger, 2014. “Quantifying the Lack of Consistency Between Climate Model Projections and Observations of the Evolution of the Earth’s Average Surface Temperature since the Mid-20th Century.” American Geophysical Union Fall Meeting, San Francisco, CA, Dec. 15–19, Paper A41A-3008.

Alex Nowrasteh

The latest issue of The Economist has a good article about allowing American states to set their own migration policies.

Last spring, Cato published a policy analysis on this very topic by Brandon Fuller and Sean Rust, entitled “State-Based Visas: A Federalist Approach to Reforming U.S. Immigration Policy.” Cato’s policy analysis explores the legalities, economics, and practical hurdles of implementing a state-based visa system in addition to the existing federal system. Cato even had an event in March 2014 (video available) where critic Reihan Salam and supporter Shikha Dalmia explored the idea.

The Economist article lays out the case well. Canada and Australia have state- and provincial-based visa systems that complement their federal immigration policies. The results have been positive because those local jurisdictions have more information, and stronger incentives, to produce good visa policy than a distant federal government does. American states could similarly experiment with less restrictive migration policies, attracting workers of any or all skill types.

The economic impact of immigration is positive, so the downsides of decentralized immigration policy would be small. Most importantly, The Economist echoes a point that Fuller and Rust made in their policy analysis: these migrant workers should eventually be able to move around the country for work. An unrestricted internal labor market is positive for the American economy; a freer international labor market would be too.

Please read The Economist piece and Cato’s policy analysis, and watch Cato’s event on this topic.

David Boaz

At TIME I write about the rise of libertarianism, Rand Paul, and my forthcoming book (Tuesday!) The Libertarian Mind:

Tens of millions of Americans are fiscally conservative, socially tolerant, and skeptical of American military intervention….

Whether or not Rand Paul wins the presidency, one result of his campaign will be to help those tens of millions of libertarian-leaning Americans to discover that their political attitudes have a name, which will make for a stronger and more influential political faction.

In my book The Libertarian Mind I argue that the simple, timeless principles of the American Revolution—individual liberty, limited government, and free markets—are even more important in this world of instant communication, global markets, and unprecedented access to information than Jefferson or Madison could have imagined. Libertarianism is the framework for a future of freedom, growth, and progress, and it may be on the verge of a political breakout.

Read the whole thing. Buy the book.

Julian Sanchez

Proponents of network neutrality regulation are cheering the announcement this week that the Federal Communications Commission will seek to reclassify Internet Service Providers as “common carriers” under Title II of the Communications Act. The move would trigger broad regulatory powers over Internet providers—some of which, such as the authority to impose price controls, the FCC has said it will “forbear” from asserting—in the name of “preserving the open internet.”

Two initial thoughts:

First, the scope of the move reminds us that “net neutrality” has always been somewhat nebulously defined and therefore open to mission creep. To the extent there was any consensus definition, net neutrality was originally understood as being fundamentally about how ISPs like Comcast or Verizon treat data packets being sent to users, and whether the companies deliberately configured their routers to speed up or slow down certain traffic. Other factors that might affect the speed or quality of service—such as peering and interconnection agreements between ISPs and large content providers or backbone intermediaries—were understood to be a separate issue. In other words, net neutrality was satisfied so long as Comcast was treating packets equally once they’d reached Comcast’s network. Disputes over who should bear the cost of upgrading the connections between networks—though obviously relevant to the broader question of how quickly end-users could reach different services—were another matter.

Now the FCC will also concern itself with these contracts between corporations, giving content providers a fairly large cudgel to brandish against ISPs if they’re not happy with the peering terms on offer. In practice, even a “treat all packets equally” rule was going to be more complicated than it sounds on its face, because the FCC would still have to distinguish between permitted “reasonable network management practices” and impermissible “packet discrimination.” But that’s simplicity itself next to the problem of determining, on a case-by-case basis, when the terms of a complex interconnection contract between two large corporations are “unfair” or “unreasonable.”

Second, it remains pretty incredible to me that we’re moving toward a broad preemptive regulatory intervention before we’ve even seen what deviations from neutrality look like in practice. Nobody, myself included, wants to see the “nightmare scenario” where ISPs attempt to turn the Internet into a “walled garden” whose users can only access the sites of their ISP’s corporate partners at usable speeds, or where ISPs act to throttle businesses that might interfere with their revenue streams from (say) cable television or voice services. There are certainly hypothetical scenarios that could play out where I’d agree intervention was justified—though I’d also expect targeted interventions by agencies like the Federal Trade Commission to be the most sensible first resort in those cases.

Instead, the FCC is preparing to impose a blanket regulatory structure—including open-ended authority to police unspecified “future conduct” of which it disapproves—in the absence of any sense of what deviations from neutrality might look like in practice. Are there models that might allow broadband to be cheaper or more fairly priced for users—where, let’s say, you buy a medium-speed package for most traffic, but Netflix pays to have high-definition movies streamed to their subscribers at a higher speed? I don’t know, but it would be interesting to find out. Instead, users who want any of their traffic delivered at the highest speed will have to continue paying for all their traffic to be delivered at that speed, whether they need it or not. The extreme version of this is the controversy over “zero-rating” in the developing world, where the Orthodox Neutralite position is that it’s better for those who can’t afford mobile Internet access to go without rather than let companies like Facebook and Wikipedia provide poor people with subsidized free access to their sites. 

The deep irony here is that “permissionless innovation” has been one of the clarion calls of proponents of neutrality regulation. The idea is that companies at the “edge” of the network introducing new services should be able to launch them without having to negotiate with every ISP in order to get their traffic carried at an acceptable speed. Users like that principle too; it’s why services like CompuServe and AOL ultimately had to abandon a “walled garden” model that gave customers access only to a select set of curated services.

But there’s another kind of permissionless innovation that the FCC’s decision is designed to preclude: innovation in business models and routing policies. As Neutralites love to point out, the neutral or “end-to-end” model has served the Internet pretty well over the past two decades. But is the model that worked for moving static, text-heavy webpages over phone lines also the optimal model for streaming video wirelessly to mobile devices? Are we sure it’s the best possible model, not just now but for all time? Are there different ways of routing traffic, or of dividing up the cost of moving packets from content providers, that might lower costs or improve quality of service? Again, I’m not certain—but I am certain we’re unlikely to find out if providers don’t get to run the experiment. It seems to me that the only reason not to want to find out is the fear that some consumers will like the results of at least some of these experiments, making it politically more difficult to entrench the sacred principle of neutrality in law. After all, you’d think that if provider deviations from neutrality in the future prove uniformly and manifestly bad for consumers or for innovation, it will only be easier to make the case for regulation.

As I argued a few years back, common carrier regimes might make sense when you’re fairly certain there’s more inertia in your infrastructure than in your regulatory structure. Networks of highways and water pipes change slowly, and it’s a good bet that a sound rule today will be a sound rule in a few years. The costs imposed by lag in the regulatory regime aren’t outrageously high, because even if someone came up with a better or cheaper way to get water to people’s homes, reengineering physical networks of pipes is going to be a pretty slow process. But wireless broadband is not a network of pipes, or even a series of tubes. Unless we’re absolutely certain we already know the best way to price and route data packets—both through fiber and over the air—there is something perverse about a regulatory approach that precludes experimentation in the name of “innovation.”

Alan Reynolds

The U.S. job market has tightened by many measures – more advertised job openings, fewer initial claims for unemployment insurance, and substantial reductions in long-term unemployment and in the number of discouraged workers. Yet the percentage of the working-age population that is either working or looking for work (the labor force participation rate) remains extremely low. This is a big problem, since projections of future economic growth are constructed by adding expected productivity growth to expected growth of the labor force.
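In back-of-the-envelope terms, the arithmetic looks like this (both rates below are invented for illustration):

    # Stylized growth-accounting arithmetic; both rates are hypothetical.
    productivity_growth = 0.015  # 1.5% per year (assumed)
    labor_force_growth = 0.005   # 0.5% per year (assumed)
    print(f"implied GDP growth: {productivity_growth + labor_force_growth:.1%}")
    # A shrinking participation rate drags labor force growth, and hence
    # projected GDP growth, down.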

Why have so many people dropped out of the labor force?  Since they’re not working (at least in the formal economy), how do they pay for things like food, rent and health care?

One explanation answers both questions: More people are relying on a variety of means-tested cash and in-kind benefits that are made available only on the condition that recipients report little or no earned income. Since qualification for one benefit often results in qualification for others, the effect can be equivalent to a high marginal tax rate on extra work (such as switching from a 20-hour to a 40-hour workweek, or a spouse taking a job). Added labor income can often result in the loss of multiple benefits, such as disability benefits, Supplemental Security Income, the earned income tax credit, food stamps, and Medicaid.
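To make that arithmetic concrete, here is a hypothetical sketch (every number is invented for illustration):

    # Hypothetical effective marginal tax rate from benefit phase-outs.
    # All figures are invented for illustration.
    extra_earnings = 10000   # from moving 20 -> 40 hours, say
    taxes_owed = 2000        # payroll and income tax on the extra pay (assumed)
    benefits_lost = 5500     # food stamps, EITC, Medicaid value, etc. (assumed)
    rate = (taxes_owed + benefits_lost) / extra_earnings
    print(f"effective marginal tax rate: {rate:.0%}")  # 75%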

This graph compares annual labor force participation rates with Congressional Budget Office data on means-tested federal benefits as a percent of GDP.  The data appear consistent with work disincentives in federal transfer payments, labor tax rates and refundable tax credits.
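A dual-axis overlay of that sort can be sketched as follows (the series here are placeholders; the real inputs are the BLS participation rates and the CBO benefits data):

    # Sketch of the participation-vs-benefits overlay; data are placeholders.
    import matplotlib.pyplot as plt

    years = list(range(1990, 2015))
    participation = [66.0] * len(years)  # placeholder, percent
    benefits_gdp = [2.0] * len(years)    # placeholder, percent of GDP

    fig, ax1 = plt.subplots()
    ax1.plot(years, participation, color="tab:blue")
    ax1.set_ylabel("Labor force participation (%)")
    ax2 = ax1.twinx()  # second y-axis for the benefits series
    ax2.plot(years, benefits_gdp, color="tab:red")
    ax2.set_ylabel("Means-tested benefits (% of GDP)")
    plt.show()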

Dalibor Rohac

This weekend, after months of animated and often vicious campaigning, Slovaks will vote in a referendum on same-sex marriage, adoptions, and sex education. Interestingly, the referendum was not initiated by proponents of gay rights, who are not particularly numerous or well organized, but rather by the social-conservative group Alliance for Family. The goal is to preempt moves toward the legalization of same-sex unions and of child adoptions by gay couples by banning both before they become a salient issue. Overturning the results of a binding referendum would then require a parliamentary supermajority and would come only at a sizeable political cost.

Yet in spite of all the heated rhetoric, it seems unlikely that the threshold for the referendum’s validity will be met. Also, as I wrote in the International New York Times some time ago, Slovakia is slowly becoming a more open, tolerant place – something that the referendum will hopefully not undo. However,

[i]n the meantime, the mean-spirited campaigning and frequent disparaging remarks about gays and their “condition” are a poor substitute for serious policy discussions and are making the country a much less pleasant place, and not just for its gay population.

Another disconcerting aspect of the referendum is its geopolitical dimension. For some of the campaigners a rejection of gay rights goes hand in hand with a rejection of what they see as the morally decadent West:

Former Prime Minister Jan Carnogursky, a former Catholic dissident and an outspoken supporter of the referendum, noted recently that “in Russia, one would not even have to campaign for this — over there, the protection of traditional Christian values is an integral part of government policy” and warned against the “gender ideology” exported from the United States.

We will see very soon whether the ongoing culture war was just a blip in Central Europe’s history or whether it will leave a bitter aftertaste for years to come. Here is my essay on the referendum, written for V4 Revue. I also wrote about the referendum in Slovak, for the weekly Tyzden (paywalled), and discuss it in a video (in Slovak) with Pavol Demes.
