Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Chris Edwards

President Obama proposed an expansive spending plan for highways, transit, and other infrastructure in his 2016 budget.

Here are some of the problems with the president’s approach:

  • Misguided Funding Source. The president proposes hitting U.S. corporations with a special 14 percent tax on their accumulated foreign earnings to raise $238 billion. This proposal is likely going nowhere in Congress, and it is bad economic policy. The Obama administration seems to view the foreign operations of U.S. companies as an enemy to be punished, but in fact foreign business operations generally complement U.S. production and help boost U.S. exports.
  • Increases Spending. The Obama six-year transportation spending plan of $478 billion is an increase of $126 billion above current spending levels. Instead of increasing federal spending on highways and transit, we should be cutting it, as it is generally less efficient than state-funded spending. To close the Highway Trust Fund (HTF) gap, we should cut highway and transit spending to balance it with current HTF revenues, which mainly come from gas and diesel taxes.
  • Increases Central Control. The Obama plan would increase federal subsidies for freight rail and “would require development of state and regional freight transportation plans,” according to this description. But freight rail has been a great American success story since it was deregulated by President Jimmy Carter in 1980. So let’s not reverse course and start increasing federal intervention again. Let’s let Union Pacific and the other railroads make their own “plans;” we don’t need government-mandated plans.
  • Undermines User Pays. For reasons of both fairness and efficiency, it is a good idea to fund infrastructure with charges on infrastructure users. In recent decades, the HTF has moved away from the original user-pays model of gas taxes funding highways, as funds have been diverted to mass transit, bicycle paths, and other activities. Obama would move further away from user pays, both with his corporate tax plan and with his proposed replacement of the HTF with a broader Transportation Trust Fund.
  • Expands Mass Transit Subsidies. The Obama plan would greatly increase spending on urban bus and rail systems. But there is no proper federal role in providing subsidies for such local activities. Indeed, federal transit subsidies distort efficient local decision making—the lure of “free” federal dollars induces local politicians to make unwise and wasteful choices. Arlington, Virginia’s million-dollar bus stop is a good example.

For background on the transportation battle heating up in Congress, see my articles here and here. And see the writings of Randal O’Toole, Robert Poole, Emily Goff, and Ken Orski.

And you can check out the writings of Robert Puentes of Brookings, who joined me on C-Span today to discuss these issues.

David Boaz

Both Jeb Bush and Rand Paul are talking about broadening the appeal of the Republican Party as they move toward presidential candidacies. Both say Republicans must be able to compete with younger voters and people of all racial backgrounds. Both have talked about the failure of welfare-state programs to eliminate urban poverty. But they don’t always agree. Bush sticks with the aggressive foreign policy that came to be associated with his brother’s presidency, while Paul wants a less interventionist approach. Bush calls for “smarter, effective government” rather than smaller government, while Paul believes that smaller government would be smarter. Perhaps most notoriously, Bush strongly endorses the Common Core educational standards, building on George W. Bush’s policy of greater federal control of schooling.

Meanwhile, Paul promises to bring in new audiences by talking about foreign policy and civil liberties. As Robert Costa reported from an Iowa rally this weekend:

Turning to civil liberties, where he has quarreled with hawkish Republicans, Paul chastised the National Security Agency for its surveillance tactics. “It’s none of their damn business what you do on your phone,” he said. 

“Got to love it,” said Joey Gallagher, 22, a community organizer with stud earrings, as he nursed a honey-pilsner beer. “It’s a breath of fresh air.”

But the rest of Paul’s nascent stump speech signaled that as much as he wants to target his father’s lingering network, he is eager to be more than a long-shot ideologue.

Paul cited two liberals, Sen. Bernard Sanders (I-Vt.) and Rep. Alan Grayson (D-Fla.), during his Friday remarks and said he agrees with outgoing Attorney General Eric H. Holder Jr. on curbing federal property seizures and softening sentencing laws for nonviolent drug offenders — all a nod to his efforts to cast himself as a viable national candidate who can build bipartisan relationships and expand his party’s political reach.

“Putting a kid in jail for 55 years for selling marijuana is obscene,” Paul said.

Alan Grayson and Eric Holder? That’s pushing the Republican comfort zone. And what was the reception?

“Just look at who’s here,” said David Fischer, a former Iowa GOP official, as he surveyed the crowd at Paul’s gathering Friday at a Des Moines winery. “He is actually bringing women, college students and people who are not white into the Republican Party.”

That’s his plan. It’s a real departure from the unsuccessful candidacies of old, hawkish John McCain and old, stuffy Mitt Romney. It just might create the kind of excitement that Kennedy, Reagan, and Obama once brought to presidential politics. The question is whether those new audiences will show up for Republican caucuses and primaries to join the small-government Republicans likely to be Paul’s base.

Patrick J. Michaels and Paul C. "Chip" Knappenberger

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

Some folks are just slow to get it.

There is no way on God’s greening earth that international negotiators are going to achieve the emissions reductions that climate models tell them are necessary to keep the rise in the planet’s average surface temperature to less than 2°C above the pre-industrial value.

At the United Nations climate meeting held in Cancun back in 2010, after kicking around the idea for several years, negotiators foolishly adopted 2°C as the level associated with a “dangerous interference” with the climate system—what everyone agreed to try to avoid way back in 1992 under the Rio Treaty.

Bad idea—it won’t happen. Even the folks at the U.N. are starting to realize it.

According to an article in this week’s The Guardian titled “Paris Climate Summit: Missing Global Warming Target ‘Would Not Be Failure’”:

EU climate chief and UN’s top climate official both play down expectations that international climate talk pledges will help hit 2C target… “2C is an objective,” Miguel Arias Canete, the EU climate chief, said. “If we have an ongoing process you can not say it is a failure if the mitigation commitments do not reach 2C.”

…In Brussels, meanwhile, the UN top climate official, Christiana Figueres, was similarly downplaying expectations, telling reporters the pledges made in the run-up to the Paris meeting later this year will “not get us onto the 2°C pathway”.

There’s so much backpedaling and spinning going on that you get motion sick reading the article. While we certainly did see this coming, we didn’t expect the admissions to start this early.

There is actually one way in which the global temperature rise may stay beneath 2°C, at least for the next century or so, even under the U.N.’s mid-range greenhouse gas emissions scenarios—that is, if the earth’s climate sensitivity is a lot lower than that currently adopted by the U.N. and characteristic of their ensemble of climate models.

Most climate negotiators and climate activists are loath to admit this might be the case, as that would be the end of first-class travel to various hot spots to (yet again) euchre our monies.

But scientific evidence continues to mount, maybe even enough to send them to the back of the plane. Instead of the earth’s equilibrium climate sensitivity—how much the earth’s average surface temperature will ultimately rise given a doubling of the atmospheric concentration of carbon dioxide—being somewhere around 3°C (as the U.N. has determined), the latest scientific research is starting to center around a value of 2°C, with strong arguments for an even lower value (closer to 1.6°C).

If the earth’s response to greenhouse gas emissions is to warm at only one-half to two-thirds the rate negotiators currently assume, it means it is going to take longer (and require more emissions) to ultimately reach a temperature rise of 2.0°C. This buys the negotiators more time—something about which negotiators, conference organizers, and associated service industries should be ecstatic!
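
To put rough numbers on that, here is a back-of-the-envelope sketch using the textbook logarithmic relationship between CO2 concentration and equilibrium warming. The 280 ppm pre-industrial baseline and the round sensitivity values are illustrative assumptions, and the calculation ignores ocean lag and non-CO2 forcings, so it sketches only the direction and rough size of the effect:

```python
# Illustrative only: equilibrium warming dT = S * log2(C / C0), where S is the
# equilibrium climate sensitivity (deg C per CO2 doubling) and C0 is an assumed
# pre-industrial concentration of 280 ppm.
C0 = 280.0  # ppm

for S in (3.0, 2.0, 1.6):  # U.N. central value, newer central estimate, low-end estimate
    ratio = 2 ** (2.0 / S)  # concentration multiple at which equilibrium warming reaches 2 deg C
    print(f"S = {S:.1f} C/doubling: 2 C reached at {ratio:.2f}x pre-industrial (~{C0 * ratio:.0f} ppm)")
```

The lower the sensitivity, the higher the concentration (and the more cumulative emissions and time) before the 2°C line is crossed, which is the effect Rogelj and colleagues quantify with full models in the work discussed below.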

As an example of how much time would be bought by a lower climate sensitivity, researchers Joeri Rogelj, Malte Meinshausen, Jan Sedlacek, and Reto Knutti—whose work has been telling us all along that 2°C was basically impossible—ran their models incorporating some recent estimates of low sensitivity in place of the IPCC preferred sensitivity assessment.

What they found is available via the open access literature and worth taking a look at. Figure 1 sums it up. Using a lower climate sensitivity (pink) reduces the end-of-the-century temperature rise (left) and increases the quantity of carbon dioxide emissions before reaching various temperature thresholds (right).

 

Figure 1. Temperature evolution over time (left) and in association with cumulative carbon dioxide emissions (right) from models run under different assumptions for the equilibrium climate sensitivity. The black and dark blue colors use the U.N. values; green represents a higher climate sensitivity; and pink a lower one (from Rogelj et al., 2014). Of note is a massive gaffe: the figure implicitly attributes all warming since 1861 to greenhouse gases, when in fact the sharp warming of the early 20th century occurred before significant emissions.

This information will surely be useful in Paris this December for those countries who seek less stringent emissions reduction timetables.

And finally, it was announced this week that the homework the U.N. handed down to each country at the end of last year’s climate meeting in Lima is due in draft form on February 13th: every nation must publish a target and timetable for reducing carbon dioxide emissions and a plan for how that is to be achieved.

It ought to be interesting to see what grades everyone receives on their assignments.

No doubt the U.N. officials have already seen some that were handed in early, which is why they announced that they are going to be  grading on a curve. What should have been “F”s (for failure to meet the 2° target) will now surely be given “A”s (for effort). As everyone knows, grade inflation is a worldwide phenomenon.

Paul C. "Chip" Knappenberger and Patrick J. Michaels

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature or of a more technical nature. These items may not have received the media attention that they deserved or have been misinterpreted in the popular press.

Posted Wednesday in the Washington Post’s new online “Energy and Environment” section is a piece titled “No, Climate Models Aren’t Exaggerating Global Warming.” That’s a pretty “out there” headline considering all the evidence to the contrary.

We summed up much of the contrary evidence in a presentation at the annual meeting of the American Geophysical Union last December.  The take-home message—that climate models were on the verge of failure (basically the opposite of the Post headline)—is self-evident in Figure 1, adapted from our presentation.

Figure 1. Comparison of observed trends (colored circles according to legend) with the climate model trends (black circles) for periods from 10 to 64 years in length. All trends end with data from the year 2014 (adapted from Michaels and Knappenberger, 2014).

The figure shows (with colored circles) the value of the trend in observed global average surface temperatures in lengths ranging from 10 to 64 years and in all cases ending in 2014 (the so-called “warmest year on record”). Also included in the figure (black circles) is the average trend in surface temperatures produced by a collection of climate models for the same intervals. For example, for the period 1951–2014 (the leftmost points in the chart, representing a trend length of 64 years) the trend in the observations is 0.11°C per decade and the average model projected trend is 0.15°C per decade. During the most recent 10-year period (2005–2014, rightmost points in the chart), the observed trend is 0.01°C per decade while the model trend is 0.21°C per decade.
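
For readers curious how a comparison like this is assembled, here is a minimal sketch of the trailing-trend calculation. The two temperature series below are synthetic stand-ins generated for illustration; they are not the actual observational record or the model ensemble behind Figure 1.

```python
import numpy as np

def trailing_trends(series, end_year, lengths):
    """Least-squares trends (deg C per decade) for windows ending in end_year.

    `series` maps year -> annual global temperature anomaly (deg C).
    """
    trends = {}
    for n in lengths:
        years = np.arange(end_year - n + 1, end_year + 1)
        temps = np.array([series[y] for y in years])
        slope_per_year = np.polyfit(years, temps, 1)[0]
        trends[n] = 10.0 * slope_per_year  # convert deg C/year to deg C/decade
    return trends

# Synthetic stand-ins for observed and model-average anomalies, 1951-2014.
rng = np.random.default_rng(0)
years = np.arange(1951, 2015)
observed = dict(zip(years, 0.011 * (years - 1951) + rng.normal(0, 0.1, years.size)))
modeled = dict(zip(years, 0.015 * (years - 1951) + rng.normal(0, 0.1, years.size)))

for length in (10, 30, 64):
    obs = trailing_trends(observed, 2014, [length])[length]
    mod = trailing_trends(modeled, 2014, [length])[length]
    print(f"{length}-yr trend ending 2014: observed {obs:+.2f}, model {mod:+.2f} C/decade")
```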

Clearly, over the period during which human-caused greenhouse gases have risen the fastest (basically any period ending in 2014), climate models consistently predict that the earth’s surface temperature should have warmed much faster than it did.

Given our results (and plenty like them), we were left scratching our heads over the headline of the Post article. The article was reporting on the results of a paper that was published last week in the British journal Nature by researchers Jochem Marotzke and Piers Forster, and pretty much accepted uncritically what Marotzke and Forster concluded.

That uncritical acceptance is the crux of the article’s credibility problem.

Figure 2 shows the results that Marotzke and Forster got when comparing observed trends to model-predicted trends of lengths of 15 years for all periods beginning from 1900 (i.e., 1900–1914) to 1998 (1998–2012). Marotzke and Forster report that overall, the model trends only depart “randomly” from the observed trends—in other words, the model results aren’t biased.

But this claim doesn’t appear to hold water.

During the first half of the record, when greenhouse gas emissions were relatively small and had little effect on the climate, the differences between the modeled and observed temperatures seem pretty well distributed between positive and negative—a sign that natural variability was the driving force behind the differences. However, starting in about 1960, the model trends show few negative departures from the observations (i.e., they rarely predict less warming than was observed). This was partially due to the models’ mishandling of two large volcanic eruptions (Mt. Agung in 1963 and Mt. Pinatubo in 1991), but it is also quite possibly a result of the models producing too much warming in response to increasing greenhouse gas emissions. It seems that the models work better, over the short term (say, 15 years), when they are not being forced by a changing composition of the atmosphere.

Figure 2. Comparison of observed trends (black) with the climate model average trend (red) for periods of 15 years in length during the period 1900–2012 (adapted from Marotzke and Forster, 2015).

But the models appear to do worse over long periods.

Figure 3 is also from the Marotzke and Forster paper. It shows the same thing as Figure 2, but this time for 62-year-long trends. In this case, the models show a clear and persistent inability to capture the observed warming that took place during the first half of the 20th century (the models predict less warming than was observed over all 62-year periods beginning from 1900 through 1930). Then, after closely matching the observed trend for a while, the models began to overpredict the warming beginning in about 1940 and progressively do worse up through the present. In fact, the worst model performance, in terms of predicting too much warming, occurs during the period 1951–2012 (the last period examined).

Figure 3. Comparison of observed trends (black) with the climate model average trend (red) for periods of 62 years in length during the period 1900–2012 (adapted from Marotzke and Forster, 2015).

This behavior indicates that over longer periods (say, 62 years), the models exhibit systematic errors and do not adequately explain the observed evolution of the earth’s surface temperature since the beginning of the 20th century.

At least that is how we see it.

But perhaps we are seeing it wrong.

Over at the website ClimateAudit.org, Nic Lewis (of low climate sensitivity fame) has taken a very detailed (and complicated) look at the statistical methodology used by Marotzke and Forster to arrive at their results. He does not speak in glowing terms of what he found:

“I was slightly taken aback by the paper, as I would have expected either one of the authors or a peer reviewer to have spotted the major flaws in its methodology.”

“Some statistical flaws are self evident. Marotzke’s analysis treats the 75 model runs as being independent, but they are not.”

“However, there is an even more fundamental problem with Marotzke’s methodology: its logic is circular.”

Lewis ultimately concluded:

“The paper is methodologically unsound and provides spurious results. No useful, valid inferences can be drawn from it. I believe that the authors should withdraw the paper.”

Not good.

So basically no matter how* you look at the Marotzke and Forster results—taking the results at face value or throwing them out altogether—their conclusions are not well supported. And certainly, they are no savior for poorly performing climate models.

 

*No matter how, that is, except if you are looking to try to make it appear that the growing difference between climate model projections and real world-temperature change poses no threat to aggressive measures attempting to mitigate climate change.

References:

Marotzke, J., and P. Forster, 2015. “Forcing, Feedback and Internal Variability in Global Temperature Trends.” Nature, 517, 565–570, doi:10.1038/nature14117.

Michaels, P.J., and P.C. Knappenberger, 2014. “Quantifying the Lack of Consistency Between Climate Model Projections and Observations of the Evolution of the Earth’s Average Surface Temperature since the Mid-20th Century.” American Geophysical Union Fall Meeting, San Francisco, CA, Dec. 15–19, Paper A41A-3008.

Alex Nowrasteh

The latest issue of The Economist has a good article about allowing American states to set their own migration policies.

Last spring, Cato published a policy analysis on this very topic by Brandon Fuller and Sean Rust, entitled “State-Based Visas: A Federalist Approach to Reforming U.S. Immigration Policy.” Cato’s policy analysis explores the legalities, economics, and practical hurdles of implementing a state-based visa system in addition to the existing federal system. Cato even had an event in March 2014 (video available) where critic Reihan Salam and supporter Shikha Dalmia explored the idea.

The Economist article lays out the case well. Canada and Australia have state- and provincial-based visa systems that complement their federal immigration policies. The results have been positive for those local jurisdictions because they have more information and incentive to produce a better visa policy than a distant federal government does. American states could similarly experiment with less restrictive migration policies, attracting workers of any or all skill types.

The economic impact of immigration is positive, so the downsides of decentralized immigration policy would be small. Most importantly, The Economist echoes a point that Fuller and Rust made in their policy analysis: these migrant workers should eventually be able to move around the country for work. An unrestricted internal labor market is positive for the American economy; a freer international labor market would be too.

Please read The Economist piece and Cato’s policy analysis, and watch Cato’s event on this topic.

David Boaz

At TIME I write about the rise of libertarianism, Rand Paul, and my forthcoming book (Tuesday!) The Libertarian Mind:

Tens of millions of Americans are fiscally conservative, socially tolerant, and skeptical of American military intervention….

Whether or not Rand Paul wins the presidency, one result of his campaign will be to help those tens of millions of libertarian-leaning Americans to discover that their political attitudes have a name, which will make for a stronger and more influential political faction.

In my book The Libertarian Mind I argue that the simple, timeless principles of the American Revolution—individual liberty, limited government, and free markets—are even more important in this world of instant communication, global markets, and unprecedented access to information than Jefferson or Madison could have imagined. Libertarianism is the framework for a future of freedom, growth, and progress, and it may be on the verge of a political breakout.

Read the whole thing. Buy the book.

Julian Sanchez

Proponents of network neutrality regulation are cheering the announcement this week that the Federal Communications Commission will seek to reclassify Internet Service Providers as “common carriers” under Title II of the Communications Act. The move would trigger broad regulatory powers over Internet providers—some of which, such as authority to impose price controls, the FCC has said it will “forbear” from asserting—in the name of “preserving the open internet.”

Two initial thoughts:

First, the scope of the move reminds us that “net neutrality” has always been somewhat nebulously defined and therefore open to mission creep. To the extent there was any consensus definition, net neutrality was originally understood as being fundamentally about how ISPs like Comcast or Verizon treat data packets being sent to users, and whether the companies deliberately configured their routers to speed up or slow down certain traffic. Other factors that might affect the speed or quality of service—such as peering and interconnection agreements between ISPs and large content providers or backbone intermediaries—were understood to be a separate issue. In other words, net neutrality was satisfied so long as Comcast was treating packets equally once they’d reached Comcast’s network. Disputes over who should bear the cost of upgrading the connections between networks—though obviously relevant to the broader question of how quickly end-users could reach different services—were another matter.

Now the FCC will also concern itself with these contracts between corporations, giving content providers a fairly large cudgel to brandish against ISPs if they’re not happy with the peering terms on offer. In practice, even a “treat all packets equally” rule was going to be more complicated than it sounds on its face, because the FCC would still have to distinguish between permitted “reasonable network management practices” and impermissible “packet discrimination.” But that’s simplicity itself next to the problem of determining, on a case-by-case basis, when the terms of a complex interconnection contract between two large corporations are “unfair” or “unreasonable.”

Second, it remains pretty incredible to me that we’re moving toward a broad preemptive regulatory intervention before we’ve even seen what deviations from neutrality look like in practice. Nobody, myself included, wants to see the “nightmare scenario” where ISPs attempt to turn the Internet into a “walled garden” whose users can only access the sites of their ISP’s corporate partners at usable speeds, or where ISPs act to throttle businesses that might interfere with their revenue streams from (say) cable television or voice services. There are certainly hypothetical scenarios that could play out where I’d agree intervention was justified—though I’d also expect targeted interventions by agencies like the Federal Trade Commission to be the most sensible first resort in those cases.

Instead, the FCC is preparing to impose a blanket regulatory structure—including open-ended authority to police unspecified “future conduct” of which it disapproves—in the absence of any sense of what deviations from neutrality might look like in practice. Are there models that might allow broadband to be cheaper or more fairly priced for users—where, let’s say, you buy a medium-speed package for most traffic, but Netflix pays to have high-definition movies streamed to their subscribers at a higher speed? I don’t know, but it would be interesting to find out. Instead, users who want any of their traffic delivered at the highest speed will have to continue paying for all their traffic to be delivered at that speed, whether they need it or not. The extreme version of this is the controversy over “zero-rating” in the developing world, where the Orthodox Neutralite position is that it’s better for those who can’t afford mobile Internet access to go without rather than let companies like Facebook and Wikipedia provide poor people with subsidized free access to their sites. 

The deep irony here is that “permissionless innovation” has been one of the clarion calls of proponents of neutrality regulation. The idea is that companies at the “edge” of the network introducing new services should be able to launch them without having to negotiate with every ISP in order to get their traffic carried at an acceptable speed. Users like that principle too; it’s why services like CompuServe and AOL ultimately had to abandon a “walled garden” model that gave customers access only to a select set of curated services.

But there’s another kind of permissionless innovation that the FCC’s decision is designed to preclude: innovation in business models and routing policies. As Neutralites love to point out, the neutral or “end-to-end” model has served the Internet pretty well over the past two decades. But is the model that worked for moving static, text-heavy webpages over phone lines also the optimal model for streaming video wirelessly to mobile devices? Are we sure it’s the best possible model, not just now but for all time? Are there different ways of routing traffic, or of dividing up the cost of moving packets from content providers, that might lower costs or improve quality of service? Again, I’m not certain—but I am certain we’re unlikely to find out if providers don’t get to run the experiment. It seems to me that the only reason not to want to find out is the fear that some consumers will like the results of at least some of these experiments, making it politically more difficult to entrench the sacred principle of neutrality in law. After all, you’d think that if provider deviations from neutrality in the future prove uniformly and manifestly bad for consumers or for innovation, it will only be easier to make the case for regulation.

As I argued a few years back, common carrier regimes might make sense when you’re fairly certain there’s more inertia in your infrastructure than in your regulatory structure. Networks of highways and water pipes change slowly, and it’s a good bet that a sound rule today will be a sound rule in a few years. The costs imposed by lag in the regulatory regime aren’t outrageously high, because even if someone came up with a better or cheaper way to get water to people’s homes, reengineering physical networks of pipes is going to be a pretty slow process. But wireless broadband is not a network of pipes, or even a series of tubes. Unless we’re absolutely certain we already know the best way to price and route data packets—both through fiber and over the air—there is something perverse about a regulatory approach that precludes experimentation in the name of “innovation.”

Alan Reynolds

The U.S. job market has tightened by many measures – more advertised job openings, fewer claims for initial unemployment insurance, substantial reduction in long-term unemployment and the number of discouraged workers.  Yet the percentage of working-age population that is either working or looking for work (the labor force participation rate) remains extremely low.  This is a big problem, since projections of future economic growth are constructed by adding expected growth of productivity to growth of the labor force.

Why have so many people dropped out of the labor force?  Since they’re not working (at least in the formal economy), how do they pay for things like food, rent and health care?

One explanation answers both questions: More people are relying on a variety of means-tested cash and in-kind benefits that are made available only on the condition that recipients report little or no earned income.   Since qualification for one benefit often results in qualification for others, the effect can be equivalent to a high marginal tax rate on extra work (such as switching from a 20 to 40 hour workweek, or a spouse taking a job).  Added labor income can often result in loss of multiple benefits, such as disability benefits, supplemental security income, the earned income tax credit, food stamps and Medicaid. 
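
A stylized example of that arithmetic, with every dollar figure below a hypothetical chosen only to illustrate how stacked benefit phase-outs can mimic a very high marginal tax rate:

```python
# Hypothetical worker moving from a 20-hour to a 40-hour week at $15 per hour.
extra_earnings = 20 * 15 * 52  # $15,600 of additional annual wages

payroll_and_income_tax = 0.20 * extra_earnings  # assumed combined marginal tax rate
lost_benefits = 4000 + 2500 + 1800  # assumed EITC phase-out, food stamps, housing aid

implicit_rate = (payroll_and_income_tax + lost_benefits) / extra_earnings
print(f"Effective marginal tax rate on the extra work: {implicit_rate:.0%}")
# Roughly 73% in this illustration: most of the added paycheck is offset by
# taxes owed and benefits withdrawn, which is the disincentive described above.
```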

This graph compares annual labor force participation rates with Congressional Budget Office data on means-tested federal benefits as a percent of GDP.  The data appear consistent with work disincentives in federal transfer payments, labor tax rates and refundable tax credits.

Dalibor Rohac

This weekend, after months of animated and often vicious campaigning, Slovaks will vote in a referendum on same-sex marriage, adoptions, and sex education. Interestingly, the referendum has not been initiated by the proponents of gay rights, who are not particularly numerous or well-organized, but rather by the social-conservative group Alliance for Family. The goal is to preempt moves toward the legalization of same-sex unions and of child adoptions by gay couples by banning them before they become a salient issue. Overturning the results of a binding referendum would then require a parliamentary supermajority and would only come at a sizeable political cost.

However, in spite of all the heated rhetoric, it seems unlikely that the threshold for the referendum’s validity will be met. Also, as I wrote in the International New York Times some time ago, Slovakia is slowly becoming a more open, tolerant place – something that the referendum will hopefully not undo. However,

[i]n the meantime, the mean-spirited campaigning and frequent disparaging remarks about gays and their “condition” are a poor substitute for serious policy discussions and are making the country a much less pleasant place, and not just for its gay population.

Another disconcerting aspect of the referendum is its geopolitical dimension. For some of the campaigners a rejection of gay rights goes hand in hand with a rejection of what they see as the morally decadent West:

Former Prime Minister Jan Carnogursky, a former Catholic dissident and an outspoken supporter of the referendum, noted recently that “in Russia, one would not even have to campaign for this — over there, the protection of traditional Christian values is an integral part of government policy” and warned against the “gender ideology” exported from the United States.

We will see very soon whether the ongoing cultural war was just a blip in Central Europe’s history or whether it will leave a bitter aftertaste for years to come. Here is my essay on the referendum, written for V4 Revue. I also wrote about the referendum in Slovak, for the weekly Tyzden (paywalled), and discuss it in a video with Pavol Demes (in Slovak).

Paul C. "Chip" Knappenberger and Patrick J. Michaels

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

A pre-print of a soon-to-be-published paper in the Journal of Allergy and Clinical Immunology describes a study conducted by researchers at the Johns Hopkins Children’s Center making this provocative finding:

Taking the United States as a whole, living in an urban neighborhood is not associated with increased asthma prevalence.

If it isn’t immediately obvious what this means, Dr. Joseph Perrone, chief science officer at the Center for Accountability in Science, spells it out in his article in The Hill:

It’s a radical finding. The study upends more than half a century of research that assumed outdoor air pollution in cities was to blame for higher asthma rates—a hypothesis repeatedly used by EPA regulators to justify the agency’s regulations.

Perrone goes on to explain:

For years, environmentalists and regulators have cited childhood asthma as an excuse for ever-stricter pollution rules. The U.S. Environmental Protection Agency (EPA), for instance, uses asthma as a pretext for nearly every “clean air” regulation issued since the 1970s.

But what if the assumed link between air pollution and childhood asthma doesn’t actually exist?

New research questions the long-held wisdom on asthma and air pollution, casting doubt over the scientific basis for EPA’s expansive regulatory agenda….

The study still points to air pollution as a cause for asthma, only it’s indoor air pollution—think second hand smoke, rodents, mold, etc.—that may be the main culprit.

This counters EPA’s asthma pretext for “clean air” regulations, as well as their regulations on climate change.

The latter was revealed in a 2009 memo recently obtained via FOIA by Chris Horner of the Competitive Enterprise Institute. In the memo, the EPA talks climate change strategy and discusses the perceived effectiveness of linking health impacts (e.g., “respiratory illness”) to climate change [emphasis added]:

Polar ice caps and the polar bears have become the climate change “mascots,” if you will, and personify the challenges we have in making this issue real for many Americans. Most Americans will never see a polar ice cap, nor will most have the chance to see a polar bear in its natural habitat. Therefore, it is easy to detach from the seriousness of the issue. Unfortunately, climate change in the abstract is an increasingly—and consistently—unpersuasive argument to make. However, if we shift from making this about the polar caps and [sic] about our neighbor with respiratory illness we can potentially bring this issue home to many Americans. As we do so, we must allow the human health argument to take center stage.

This strategy was clearly adopted.

For example, in announcing EPA’s proposed restrictions on carbon dioxide emissions from existing power plants, Administrator McCarthy’s comments were littered with health references, including her opening remarks:

About a month ago, I took a trip to the Cleveland Clinic. I met a lot of great people, but one stood out—even if he needed to stand on a chair to do it. Parker Frey is 10 years old. He’s struggled with severe asthma all his life. His mom said despite his challenges, Parker’s a tough, active kid—and a stellar hockey player.

But sometimes, she says, the air is too dangerous for him to play outside. In the United States of America, no parent should ever have that worry.

That’s why EPA exists. Our job, directed by our laws, reaffirmed by our courts, is to protect public health and the environment. Climate change, fueled by carbon pollution, supercharges risks not just to our health, but to our communities, our economy, and our way of life. That’s why EPA is delivering on a vital piece of President Obama’s Climate Action Plan.

Later she added, “For the sake of our families’ health and our kids’ future, we have a moral obligation to act on climate.” She went on to explain, “This is not just about disappearing polar bears or melting ice caps. This is about protecting our health and our homes.” And seemingly for good measure: “As a bonus, in 2030 we’ll cut pollution that causes smog and soot 25 percent more than if we didn’t have this plan in place. The first year that these standards go into effect, we’ll avoid up to 100,000 asthma attacks and 2,100 heart attacks—and those numbers go up from there.”

In light of the new Johns Hopkins study, McCarthy’s remarks look questionable, as the scientific support for such hyperbolic statements is fast receding.

Joseph Perrone hits the nail on the head with his conclusion:

This is science the EPA cannot ignore. If the agency is truly interested in “following the science,” it should spend more time addressing real public health threats than imposing costly rules based on dubious science that may only make us poorer and sicker.

We don’t find the Hopkins study at all surprising.  Smoky, confined indoor environments around the world are associated with major health issues.  Why should it be any different here?

Reference:

Keet, C.A., et al., 2015. “Neighborhood Poverty, Urban Residence, Race/Ethnicity, and Asthma: Rethinking the Inner-City Asthma Epidemic.” Journal of Allergy and Clinical Immunology, in press.

Steve H. Hanke

Yesterday, China’s Central Bank reduced bank reserve requirements for large banks by 50 basis points to 19.5%. The Chinese know that the nominal level of national income is determined by the magnitude of the money supply. They also know that banks produce the lion’s share of China’s money. Indeed, banks produce 77% of China’s M2 money.

As shown in the accompanying chart, the average annual growth rate of China’s money supply since January 2004 has been 17.45%. At present, the annual growth rate for the money supply has slumped to 11%. China’s reduction in the banks’ reserve requirements is designed to push money growth back up towards the trend rate so that an economic slump is avoided. China has made the right move.
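
As a rough, textbook-style illustration of why even a half-point cut in reserve requirements matters for bank money creation (this is the simple money-multiplier approximation, not a model of the People’s Bank of China’s actual operations):

```python
# Simple money-multiplier approximation: deposits supportable per unit of
# reserves are roughly 1 / reserve_requirement.
rr_old, rr_new = 0.200, 0.195  # 20.0% cut to 19.5%

multiplier_old = 1 / rr_old  # 5.00
multiplier_new = 1 / rr_new  # ~5.13

expansion = multiplier_new / multiplier_old - 1
print(f"Potential expansion in bank-created deposits: about {expansion:.1%}")
# Roughly +2.6%, a nudge that pushes money growth (banks supply 77% of M2)
# back toward its trend rate, as described above.
```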

Daniel J. Ikenson

President Obama is presiding over what may prove to be the most significant round of trade liberalization in American history, yet he has never once made an affirmative case for that outcome. Despite various reports of intensifying outreach to members of Congress, the president’s “advocacy” is couched in enough skepticism to create and reinforce fears about trade and globalization.

Politico reports:

On Tuesday, Obama sent a letter directly to Rep. Ruben Gallego (D-Ariz.), arguing that reaching new trade agreements is the only way to stop China from dominating the global markets and letting its lax standards run the world.

“If they succeed, our competitors would be free to ignore basic environmental and labor standards, giving them an unfair advantage against American workers,” Obama wrote Gallego in a letter obtained by POLITICO. “We can’t let that happen. We should write the rules, and level the playing field for the middle class.”

Certainly, playing the China card could help win support for Trade Promotion Authority and, eventually, the Trans-Pacific Partnership, but it needn’t be the first selling point. Pitching trade agreements as though they were inoculations against an otherwise imminent disease betrays a profound lack of understanding of the benefits of trade. With TPP near completion and the Transatlantic Trade and Investment Partnership talks expected to accelerate, the president’s stubborn refusal to make an affirmative case for his trade initiatives to the public and the skeptics in his party is disconcerting. Bill Watson was troubled by the president’s feeble advocacy of trade liberalization in his SOTU address.

Does Obama really want a legacy as the president who increased Americans’ economic liberties and opportunities when the best case he can muster for his agenda is that if we don’t adopt it we’ll get crushed? I have questioned whether he supports his own trade agenda considering – among other things – his commitment to arresting climate change and growing income inequality, both of which he believes are exacerbated by increased trade.

Never has the president described how the TPP will better integrate U.S. producers, consumers, workers, investors and taxpayers with customers, suppliers, supply-chain collaborators, and investors in Asia and the Americas. Never has he explained that by eliminating tariffs and other monopolistic impediments to trade and investment, the TPP will help increase the scope for economies of scale and specialization, which will help reduce production costs, freeing resources for lower prices, investment, and research and development. Never has he taken the time to point out that competition inspires innovation, which especially benefits companies operating in the United States, which are advantaged with privileged access to research universities and broad and deep capital markets to commercialize innovation. Never has he mentioned that by opening the door to more competition to bid on public procurement projects, the TPP will help ensure higher quality infrastructure, on-time completion, and better use of taxpayer dollars. Never has he touted the advantages to the U.S. economy of tighter integration with the world’s fastest growing region. None of these positive, promising, pioneering aspects of the TPP has been given an ounce of public attention from the president. 

Some Washington insiders will be sure to contact Bill or me to say it doesn’t matter how Obama portrays trade, as long as he gets enough votes. Well, sure, I understand the transactional nature of politics. But if you don’t try to convince anyone of the merits of trade, and you allow the fallacies concocted by the monopolies that benefit from restricting trade to lie unrebutted, to fester and metastasize, it serves to legitimize those fears and guarantees a continuation of misinformation and discord where there should be much less.

Nicole Kaeding

One of the largest and fastest growing items in President Obama’s new budget is often overlooked. Net interest expenses will skyrocket over the next decade, growing by 250 percent.

The Congressional Budget Office (CBO) continues to warn about the rising burden of federal interest payments. CBO expects net interest expenses will be the third fastest growing budget item over the next decade. Net interest represents 24 percent of the increase in federal spending during that time period.

Interest expense will increase due to two factors: higher interest rates and larger outstanding debt.

First, federal borrowing rates are currently well below average. In 2014 the 10-year Treasury rate averaged 2.5 percent according to CBO. Since 1990 the 10-year Treasury has averaged approximately 5 percent. Lower-than-normal interest rates are currently keeping the government’s borrowing expenses low, but interest rates are expected to return to historic averages.

Second, the government has increased the federal debt quickly. Debt held by the public has grown by 120 percent since 2008, and that growth is expected to continue.

As a result of these two factors, CBO predicts that net interest will grow from 1.3 percent of gross domestic product in 2015 to 3 percent in 2025.

The president’s budget predicts a similar rise. Information from the president’s budget shows just how large interest expense will become over the next decade. By 2025 the nation will spend more on interest than it does on defense or nondefense discretionary spending, according to the president’s budget plan. Only spending on Social Security and Medicare will cost more than net interest.

The chart below shows the dramatic increase in net interest compared to defense and nondefense discretionary spending, as projected in the president’s budget.

Even with that sharp rise, the president’s budget low-balls interest costs a bit compared to CBO. His budget assumes that the 10-year Treasury rate slowly climbs from 4.0 percent in 2019 to 4.5 percent by 2025.  CBO, on the other hand, assumes that the 10-year Treasury rate is 4.5 percent in 2019 growing to 4.6 percent in 2025. In fact, the president assumes a lower interest rate in each year of his budget request.  This means that the president expects net interest to cost $785 billion in 2025 compared to $827 billion for CBO.

The president’s somewhat rosier assumptions regarding debt and deficits, relative to CBO’s, also play a part in this comparison. As I showed on Monday, he predicts that debt held by the public will be significantly less in 2025 than CBO does: $20,307 billion vs. $21,605 billion.
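
As a rough consistency check on those 2025 figures, net interest can be approximated as an average effective rate times debt held by the public; the sketch below simply backs out that implied rate from the two projections:

```python
# 2025 projections from the president's budget and CBO (billions of dollars).
projections = {
    "President's budget": {"net_interest": 785, "debt_held_by_public": 20_307},
    "CBO": {"net_interest": 827, "debt_held_by_public": 21_605},
}

for source, p in projections.items():
    effective_rate = p["net_interest"] / p["debt_held_by_public"]
    print(f"{source}: implied average rate on the debt of about {effective_rate:.1%}")
# Both imply an effective rate of roughly 3.8-3.9 percent, below the assumed
# 10-year Treasury yields, since much of the outstanding debt is shorter-dated.
```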

Even under the president’s optimistic assumptions, net interest costs will grow quickly over the next decade, illustrating the importance of controlling the growth in federal spending. Our current path means that future generations could end up paying a high price for our current, excess spending.

Daniel R. Pearson

In a previous blog post I discussed the implications of the proposed agreement to settle the antidumping and countervailing duty (AD/CVD) cases brought by U.S. sugar producers against imports from Mexico.  That article amounted to a lament on the difficulties of trying to balance sugar supply and demand by government fiat.  Market managers employed by the U.S. Department of Agriculture (USDA) and the Department of Commerce (DOC) have a really hard job, as do their counterparts in the Mexican government.  Not only do the supply, demand, and price of sugar tend not to stay quiet and well behaved, but important firms involved in the business also can prove (from the perspective of the program managers) to be vexing and disputatious.

Such is the case with Imperial Sugar Company and AmCane Sugar, both of which are U.S. cane refiners that rely on ample supplies of raw sugar to run their operations.  Much of that raw sugar comes from other countries; in recent years Mexico has been the largest supplier to the United States.  It now appears that U.S. cane refiners were not too happy with either the original proposed settlement that was announced on October 27, 2014, or the final suspension agreements announced December 19 that set aside the underlying AD/CVD investigations. 

One source of that unhappiness seems to have been that the initial proposal would have allowed 60 percent of imports from Mexico to be in the form of refined sugar rather than raw.  The U.S. and Mexican governments acknowledged that concern in the December 19 agreement by reducing the allowable level of refined sugar imports to 53 percent.  Another issue bothering U.S. refiners likely was the relatively narrow spread between the original proposal’s import reference prices, which were 20.75 cents per pound for raw sugar and 23.75 cents per pound for refined.  U.S. refiners may have feared suppression of their processing margins, if imported refined sugar from Mexico could have been sold at only 3 cents per pound above the price of raw sugar imports.  The December 19 version increased that price spread to 3.75 cents (22.25 cents for raw and 26.0 cents for refined).  From the standpoint of the refiners, that margin still may be uncomfortably narrow.

Given those adjustments in the terms of the suspension agreements, many observers were surprised when on January 8, 2015, Imperial and AmCane took the unprecedented step of filing a challenge to the pact.  They petitioned the U.S. International Trade Commission (ITC) to determine whether the suspension agreements actually “eliminated completely” the “injurious effect of imports” on the domestic industry.  This is the first time that provision of law has been exercised since it was added to the statute in 1979.  If the ITC determines that injury has been fully ameliorated, the suspension agreements will remain in effect.  On the other hand, if the ITC determines that injury was not completely eliminated, the suspension agreements would be scrapped and both the ITC and DOC would resume work on the underlying AD/CVD investigations.

The statute grants the ITC only 75 days from the filing date to make this determination, which means the process needs to be completed by March 24.  Having never before done this type of investigation, the Commission issued a notice seeking input as to how it should evaluate whether the injury has been completely eliminated by the suspension agreements.  The ITC will hold a public meeting on February 19 “to receive oral presentations from parties to the reviews.”

Without delving into the wide variety of arguments that could be presented to the Commission, it seems reasonable to assume that Imperial and AmCane believe they will be able to provide convincing evidence – likely through use of non-public “business proprietary information” (BPI) – that the suspension agreements do not entirely eliminate their injury.  It may be challenging for supporters of the agreements to prove otherwise. 

Notwithstanding the unusual petition to the ITC, Imperial and AmCane also filed requests on January 16 with the DOC asking that the suspension agreements be terminated and the AD/CVD investigations be continued.  Sugar producers in both the United States and Mexico are not enamored with the thought that the suspension agreements might be overturned, so are challenging the legal standing of the refiners to make such requests.  DOC is seeking comments on that issue; it all appears to be quite contentious.  (There is no similar question regarding standing with respect to the refiners’ petition to the ITC.)

It’s fair to say that the overall situation is rather fluid right now.   If the refiners are found to have standing in the DOC proceeding, the AD/CVD investigations will move forward toward an eventual decision by the ITC as to whether the duties determined by DOC should be imposed.  Even if it is decided that the refiners don’t have standing at the DOC, the ITC investigation as to whether the suspension agreements entirely eliminate the domestic industry’s injury will proceed. 

It is interesting to note that both Imperial and AmCane have made clear that they are quite willing to agree to a market-management scheme that better suits their interests.  Talks with all parties to the suspension agreements may yet produce new versions that achieve consensus.  However, such a negotiation must overcome a significant hurdle.  This is basically a zero-sum game in which some other player would have to earn less money from the pact in order for the refiners to earn more.  These discussions – if they occur – could be more than just a bit fractious.

So why are the U.S. refiners apparently willing to upset the whole applecart?  Obviously they must believe they would be better off with any of three possible outcomes: 

  • An eventual negative decision by the ITC on the merits of the AD/CVD cases would mean that no duties would be applied to imports from Mexico, so refiners would have the same access to raw sugar supplies as they’ve had since the sugar provisions of NAFTA were fully implemented in 2008. 
  • An eventual affirmative ITC decision on the AD/CVD cases would mean the imposition of antidumping duties on imports of sugar from Mexico in the neighborhood of 40 percent, plus anti-subsidy duties of up to 17 percent.  Those duties likely are high enough to prevent any imports from Mexico.  How would managers of the U.S. sugar program respond to the loss of more than a million tons of Mexican sugar from the U.S. market?  Refiners may have concluded that government officials would have no choice other than to increase the tariff-rate quota (TRQ) amounts for the 40 TRQ-holding countries in order to keep the U.S. market adequately supplied.  All of that additional sugar would be in raw form, so all of it would require refining in the United States. 
  • Since U.S. refiners have demonstrated that they are willing to gum up the works of the government-regulated sugar market if their interests aren’t sufficiently taken into account, perhaps there will be a renegotiation of the suspension agreements that will treat them more favorably. 

Ahhh, the challenges of managing the U.S. and Mexican sugar markets just seem to become greater and greater.  One wonders how the United States of America has gotten itself into this dirigiste situation.  Perhaps we will live long enough to see U.S. sugar policy reformed by ending all import restrictions and domestic support measures.  (More on this topic will be available in an upcoming paper.)  If the marketplace was made open and competitive, there is little doubt that sugar still would be produced in the United States, that some of it still would be imported from other countries, and that consumers would buy some combination of the two.  If supply and demand were allowed to guide sugar production, marketing, and consumption, resource allocation and economic efficiency would improve a great deal.  Deadweight losses to the economy would be reduced.  And the former government managers of the sugar program would likely find more satisfying work and suffer fewer headaches.

Jim Harper

If you’re a privacy conscious traveler, you may have wondered from time to time why hotels ask for ID when you check in, or why they ask you to give them the make and model of your car and other information that isn’t essential to the transaction. What’s the ID-checking for? There’s never been a problem with fraudsters checking into hotels under others’ reservations, paying for the privilege to do so…

Well, in many jurisdictions around the country, that information-gathering is mandated by law. Local ordinances require hotels, motels, and other lodgers (such as AirBnB hosts), to collect this information and keep it on hand. These laws also require that the information be made available to the police on request, for any reason or no reason, without a warrant.

That’s the case in Los Angeles, which not only requires this data retention about hotel guests, available for law enforcement to access at will or whim, but also requires hoteliers to check a government-issued ID from guests who pay cash.

Open access to hotel records may have been innocuous enough in the early years of travel and lodging. Reading through hotel registers was a social sport among the wealthy, who could afford long-distance travel and lodging. Today, tourism is available to the masses, and hotel records enjoy tighter privacy protections. Most people would quit a hotel that left their information open to the public, and many would be surprised that hoteliers’ records are open to law enforcement collection and review without any legal process.

In City of Los Angeles v. Patel, which will be argued in the Supreme Court March 3rd, a group of hoteliers have challenged the city’s ordinance requiring them to hand over customer data whenever a police officer wants it. After losing in the District Court and in their first appearance before the Ninth Circuit Court of Appeals, the hoteliers won when an en banc panel of the Ninth Circuit found that it was unreasonable (and thus unconstitutional) for the statute to require hoteliers to turn over their records without giving them an opportunity to challenge law enforcement’s discretion.

In our brief to the Court supporting the hoteliers, we make some points that we hope will strengthen Fourth Amendment case law. As we’ve done in many prior briefs, we discourage the Court from applying the “reasonable expectation of privacy” test. “Reasonable expectations” doctrine is a contortion of the Fourth Amendment that springs from one concurrence in a 1967 case. Rather than estimating whether hoteliers have a “privacy expectation” in their records, we invite the Court to adhere to the Fourth Amendment’s language and determine whether the right of Los Angeles hoteliers “to be secure in their persons, houses, papers, and effects” is protected by a statute that permits any search of their records that law enforcement should want.

The question is not whether private parties’ privacy expectations are reasonable. The Fourth Amendment asks whether government agents’ searches and seizures are reasonable.

The petitions submitted by the City of Los Angeles and the U.S. government both treat the idea of “frequent, unannounced inspections” as a virtue of the statute. According to the government parties, innocent business owners, who are not suspects of any crime, should be subject to routine surprise inspections by government agents to make sure that they are performing surveillance of their guests for the government.

There is some precedent for warrantless searches of businesses under the “administrative search” doctrine. But even if warrantless searches of pervasively regulated businesses are reasonable at all, the doctrine has never been applied when the search is for evidence of wrongdoing by someone other than the party searched. It may be reasonable to search auto dismantlers because of the propensity for stolen cars and car parts to turn up in that line of business. It is not reasonable to search hoteliers because some of their customers may use drugs or participate in prostitution.

There would be no end to it if the government were allowed to require businesses to perform surveillance on its behalf. Banks could be made to collect and turn over sensitive financial information about customers. The phone company could be made to turn over information about Americans’ calling behavior. The list goes on.

If you’re privacy conscious, of course, you recognize that the federal government already requires banks to turn over sensitive financial information about non-suspect Americans, and that it collects phone calling records on as many Americans as it can, every day, all without probable cause or a warrant. This is possible because of a key pair of Supreme Court cases ratifying Bank Secrecy Act requirements that banks report information about their customers.

The case of California Bankers Association v. Shultz (1974) could be read as precedent suggesting that the Los Angeles ordinance is valid. Our brief shows why it should not be: the Court in that case did not carefully consider the Fourth Amendment rights of the businesses themselves. To the extent California Bankers and the related case of United States v. Miller suggest that businesses can constitutionally be conscripted into spying on their customers, they deserve reconsideration.

This was something Justice Sonia Sotomayor directly suggested in her concurrence in United States v. Jones (2012), the case holding that attaching a GPS device to a car and using it to track the car’s movements is a Fourth Amendment search.

[I]t may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers.

The Court should revisit the third-party doctrine and the “reasonable expectation of privacy” test that produced it. I’ll update you here, of course, about developments in the Patel case.

Michael F. Cannon

Health and Human Services Secretary Sylvia Burwell is the lead defendant in King v. Burwell, in which the plaintiffs claim the Obama administration is taxing millions of employers and individuals and subsidizing millions of HealthCare.gov enrollees contrary to the plain language of the Patient Protection and Affordable Care Act (a.k.a., ObamaCare). The Supreme Court will hear oral arguments in the case on March 4, and will likely rule by late June. If the Court rules against Burwell, 57 million individuals and employers will be freed from those illegal taxes and maybe four million HealthCare.gov enrollees will lose subsidies that the administration never had the authority to issue in the first place. Those four million people could see their insurance bills quadruple, face an unexpected tax liability of up to $5,000, and lose their health insurance. You might think they have a right to know about that risk. You might think a responsible public servant like Secretary Burwell would inform them of that risk. 

You would be wrong.

Today, Burwell appeared before the Senate Finance Committee. Though HHS has already deployed its contingency plan for HealthCare.gov-participating insurers, she refused to answer whether HHS has a contingency plan for HealthCare.gov enrollees:

Right now, my focus is on completing and implementing the law, which we believe is the law. Right now, what we’re focused on is the open enrollment.

“HHS Head Ducks Questions On ACA Tax Credit Backup Plan,” wrote Law360. Modern Healthcare wrote, “HHS Stonewalls on King v. Burwell,” while The Hill seemed to laud Burwell because she “did not back down” from her firm stand against transparency and consumer information. Sen. John Cornyn (R-TX) fumed, “to come here and repeatedly refuse to answer the questions strikes me as nothing less than contempt of our oversight responsibility.”

Burwell refused to answer because any answer could be politically costly. Answering yes would lend credence to the King plaintiffs’ case. It would also spur requests for specifics about HHS’s contingency plans, an examination of whether HHS has the authority to execute its plan, and an examination of whether the plan would work. All that unwanted attention could scare off a lot of current and prospective HealthCare.gov enrollees. Answering no could increase the likelihood that the administration loses King, because it would signal to the Supreme Court that HHS doesn’t think a ruling for the plaintiffs would be all that big a deal. So Burwell chose the least politically costly option: stonewalling.

Stonewalling is also the most irresponsible option. Burwell is literally refusing to inform consumers about the risks of HealthCare.gov coverage, and how HHS would respond if those risks materialize. You may have noticed that President Obama also failed to mention those risks in his recent State of the Union address.

It is hard to believe this is an accident. Misleading consumers seems to be a conscious part of the administration’s litigation strategy.

When Burwell said she is focused on enrolling as many people as she can in HealthCare.gov, though, I’m sure she meant it. The more people she enrolls, (1) the more disruptive a ruling for the plaintiffs in King v. Burwell would be, (2) the less likely the justices will issue such a ruling, and (3) the more voters the administration can hope to mobilize to lobby Congress if the Court strikes down the subsidies anyway.

At today’s hearing, Burwell boasted of 7.5 million HealthCare.gov enrollees. But she isn’t enrolling people in health insurance. She’s taking hostages. And she’ll inform them of their hostage status when she’s good and ready.

David Boaz

I’ve been talking a lot about the parasite economy this week – like in my forthcoming book The Libertarian Mind and on STOSSEL this Friday night – and two stories in the Washington Post today illustrate the problem.

John Wagner reports that campaign contributions are now flowing to surprise Maryland gubernatorial winner Larry Hogan. Why would campaign contributions come in after the campaign is over?

“A lot of people speculatively invested in the Brown campaign and now realize they made the wrong choice,” said Jennifer Bevan-Dangel, executive director of Common Cause Maryland, a group that closely monitors campaign contributions. “Donors give because it gets them in the door, regardless of who’s in power.”

The reports show that Hogan raised nearly $1.4 million in the two months after the election — roughly the amount that Martin O’Malley (D) raised after he was elected governor in 2006.

When a state government hands out some $40 billion a year, lots of people want to get friendly with the people who will influence how that money is spent. Through regulations, the government influences billions more, and lobbyists don’t want to be left out of those discussions either.

Money flowed to Hogan from utilities, banks and health-care companies that are regulated by the state and from associations that represent businesses in Annapolis. Groups representing chiropractors, nurse practitioners, nursing homes and psychologists have all given since the election….

Other donors include more than a dozen of the highest-paid lobbyists in Annapolis. 

Also in today’s Post, Mike DeBonis reports that council candidates backed by newly elected D.C. mayor Muriel Bowser are raking in cash for their upcoming special elections. People want a friend in city hall, too.

Why indeed do “chiropractors, nurse practitioners, nursing homes and psychologists” need lobbies, much less give campaign contributions? Because they want a piece of vast government expenditures on health care, they want regulatory protection from competition, or they want something else that government can deliver. 

I make no criticism here of Governor Hogan or Mayor Bowser. I have no reason to think that either of them has done anything inappropriate for a campaign contributor. This is a systemic problem.

It’s just part of the parasite economy, where you use the law to get something you couldn’t get voluntarily in the marketplace.

Tim Lynch

Over at Cato’s Police Misconduct web site, we have identified the worst case for January.  It comes from Miramar, Florida. The misconduct took place in the 1980s, but it took some time for it to be exposed.  A federal appeals court recently upheld a $7,000,000 judgment against two now-former police officers.

In 1983, the officers coerced Anthony Caravella, a mentally challenged 15-year-old boy, into confessing to rape and murder.

From the Florida Sun-Sentinel:

Caravella was arrested by Mantesta and Pierson on Dec. 28, 1983, on a juvenile case that alleged he stole a bicycle and didn’t show up for court.

Over the next week, while in juvenile custody, Caravella gave a series of statements to the officers that culminated in him confessing to the murder.

Heyer said Caravella trusted Mantesta and the officers, who spent hours alone with him, fed him information about the crime scene and got him to repeat it back to them.

Caravella and his childhood friend, Dawn Simone Herron, testified in the 2013 civil trial that the officers coerced Caravella into falsely incriminating himself by telling him that if he gave a statement they would free the 16-year-old girl who was with him when he was arrested.

After that “police work,” prosecutors actually sought the death penalty against the teen, but the jury opted for a life sentence instead.

The man who was actually responsible for the rape and murder remained free, endangering other members of the community.  He never faced justice for this crime.

Adam Bates

Considering the growing controversy over the abuse of civil asset forfeiture at the federal and state levels, the Institute for Justice’s newly released report on the IRS’ questionable use of the practice is perfectly timed.

An excerpt from the executive summary:

Federal civil forfeiture laws give the Internal Revenue Service the power to clean out bank accounts without charging their owners with any crime. Making matters worse, the IRS considers a series of cash deposits or withdrawals below $10,000 enough evidence of “structuring” to take the money, without any other evidence of wrongdoing. Structuring—depositing or withdrawing smaller amounts to evade a federal law that requires banks to report transactions larger than $10,000 to the federal government—is illegal, but more importantly, structured funds are also subject to civil forfeiture.

Civil forfeiture is the government’s power to take property suspected of involvement in a crime. Unlike criminal forfeiture, no one needs to be convicted of—or even charged with—a crime for the government to take the property. Lax civil forfeiture standards enable the IRS to “seize first and ask questions later,” taking money without serious investigation and forcing owners into a long and difficult legal battle to try to stop the forfeiture. Any money forfeited is then used to fund further law enforcement efforts, giving agencies like the IRS an incentive to seize.

Data provided by the IRS indicate that its civil forfeiture activities for suspected structuring are large and growing…

For the uninitiated, under the Bank Secrecy Act of 1970, financial institutions are required to report deposits of more than $10,000 to the federal government.  The law also makes it illegal to “structure” deposits in such a way as to avoid that reporting requirement.  Under the IRS’ conception of the law, “structuring” may be nothing more than making several sub-$10,000 deposits, without any further suspicion of particular wrongdoing.  For obvious reasons, many small businesses and individuals can find themselves on the wrong side of this law without any criminal intent.
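
To make the breadth of that reading concrete, here is a minimal sketch in Python of the kind of screening it implies. The $10,000 figure is the Bank Secrecy Act reporting threshold; everything else (the “just under the line” cutoff, the number of deposits that triggers a flag, and the sample deposits) is a hypothetical illustration, not the IRS’ actual criteria.

# Hypothetical illustration of how broadly a "pattern of sub-$10,000 deposits"
# reading can sweep. The $10,000 figure is the Bank Secrecy Act reporting
# threshold; the cutoff, deposit count, and sample data below are assumptions
# for illustration, not the IRS's actual screening criteria.
REPORTING_THRESHOLD = 10_000   # banks must report cash transactions above this
NEAR_THRESHOLD = 9_000         # assumed "just under the line" cutoff
MIN_SUSPECT_COUNT = 3          # assumed number of such deposits before flagging

def looks_like_structuring(deposits):
    """Return True if several deposits fall just under the reporting threshold."""
    near_misses = [d for d in deposits if NEAR_THRESHOLD <= d < REPORTING_THRESHOLD]
    return len(near_misses) >= MIN_SUSPECT_COUNT

# A small business depositing daily cash receipts, with no criminal intent at all:
daily_receipts = [9_400, 9_800, 9_200, 9_600]
print(looks_like_structuring(daily_receipts))   # True: enough to invite scrutiny

The point is not that any particular agency or bank uses this exact rule, but that a deposit pattern any cash-heavy business will naturally produce is, on this reading, enough to put its accounts at risk.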

When the structuring law is combined with the incredibly low burdens required for the federal government to seize assets through civil forfeiture, the potential for abuse is self-evident.  While the lack of criminal intent may protect against criminal structuring charges, it is no barrier to the government’s overbroad power to initiate civil proceedings against the money itself.

IJ’s report, authored by Dick M. Carpenter II and Larry Salzman, goes in depth to reveal the history and unbelievable breadth of the IRS’ civil forfeiture regime, the perverse incentives it creates for government agencies, and the individual livelihoods it threatens and destroys.  IJ makes the case for much stronger protections for private property rights (including the outright abolition of civil forfeiture as a government power).

Be sure to check out the full report, as well as the Institute for Justice’s other work on asset forfeiture and private property here.

For more of Cato’s recent work on civil forfeiture, see Roger Pilon’s recent National Interest  article here, my blog post here, and a recent podcast here.

 

Christopher A. Preble

In 2015, U.S. defense spending will be about $600 billion, or about 3.24 percent of GDP. The former figure would strike many Americans as sufficient, and a few would find it excessive. Robert Gates once said, “If the Department of Defense can’t figure out a way to defend the United States on a budget of more than half a trillion dollars a year, then our problems are much bigger than anything that can be cured by buying a few more ships and planes.”

But hawks want you to focus on the latter figure, 3.24: they believe that an arbitrary fixed percentage of national output should be dedicated to defense spending every year. For example, Mitt Romney and Bobby Jindal would peg defense spending at 4 percent of GDP. Wall Street Journal columnist Bret Stephens would see that, and raise them. In his new book, America in Retreat, Stephens calls for sharply increasing “military spending to upwards of 5 percent of GDP.”

It’s unclear whether these gentlemen fully appreciate what their proposals would amount to in actual dollars. (Take a look at the chart below, prepared by my colleague Travis Evans.) The bipartisan Budget Control Act (BCA) capped discretionary Pentagon spending at $3.9 trillion between 2015 and 2021, an average of 2.6 percent of GDP per year. That means Americans would need to spend $2.1 trillion above the current caps to meet the 4 percent threshold, and $3.6 trillion more to reach 5 percent. For added perspective, then-House Budget Committee Chairman Paul Ryan’s FY15 alternative projected $4.2 trillion for defense, or 2.8 percent of GDP. In other words, Romney proposed to spend $1.8 trillion more than his running mate, and Stephens’ plan is even more disconnected from fiscal reality – $3.3 trillion more than the de facto GOP budget.
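
For those who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. It assumes the roughly $150 trillion in cumulative 2015–2021 GDP implied by the figures above ($3.9 trillion in BCA caps at about 2.6 percent of GDP); that implied total, and the variable names, are mine, not official projections.

# Back-of-the-envelope check of the dollar figures above. The cumulative
# GDP total is inferred from the text ($3.9 trillion in BCA caps at roughly
# 2.6 percent of GDP); it is an assumption, not an official projection.
bca_caps = 3.9                   # trillions of dollars, FY2015-2021 Pentagon caps
implied_gdp = bca_caps / 0.026   # roughly $150 trillion in cumulative GDP
ryan_budget = 4.2                # trillions, Paul Ryan's FY15 alternative for defense

for share in (0.04, 0.05):
    total = implied_gdp * share
    print(f"{share:.0%} of GDP: ${total:.1f}T, "
          f"${total - bca_caps:.1f}T above the BCA caps, "
          f"${total - ryan_budget:.1f}T above the Ryan budget")
# 4% of GDP: $6.0T, $2.1T above the BCA caps, $1.8T above the Ryan budget
# 5% of GDP: $7.5T, $3.6T above the BCA caps, $3.3T above the Ryan budget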

 

To justify their spending levels, hawks rely on imperfect historical analogies and threat inflation. It is true, of course, that the United States has spent more than 5 percent of GDP on defense in prior years, and at times far more. President Dwight Eisenhower, who famously warned of the military-industrial complex, presided over defense budgets that averaged about 10 percent of GDP; President Reagan’s averaged about 6 percent.

But, as I explain in a review of Stephens’ book in the latest issue of Barron’s:

While it’s true that military spending’s share of gross domestic product used to be higher than 5 percent, that was during the Cold War, when the U.S. was locked in a global struggle with the Soviet Union, and well before soaring entitlement spending threatened to overwhelm the federal budget.

If Stephens is serious about dedicating 5 percent of the nation’s economy to the Pentagon, nearly double what is called for under current law, cutting other federal spending won’t be enough to make up the difference. He never says whether he would hike taxes or add to a federal debt that is already out of control to pay for this global police force. But either way, taxpayer support is not likely.

That support is unlikely because U.S. foreign policy – and the military force structure needed to implement it – isn’t focused solely, or even primarily, on protecting the United States from foreign threats. Rather, our military aims to reassure nervous allies, and thus discourage them from defending themselves. As Stephens puts it, “America is better served by a world of supposed freeloaders than by a world of foreign policy freelancers.”

This is a pretty flimsy justification for massive spending increases. From my Barron’s review:

Set aside the hubristic assumption that the U.S. government can be relied on to respond to distant threats more wisely and prudently than governments much closer to the problem. More broadly, Stephens is asking U.S. men and women to risk their lives in foreign conflicts, many of which have nothing to do with safeguarding American security.

He is also expecting Americans to pay for something they do not support. A recent poll, reprinted in the Wall Street Journal in December 2014, pointed out that among the foreign-policy goals that Americans counted as “very important,” “defending our allies’ security” ranked second from the bottom, just one percentage point above “strengthening the United Nations.”

Instead of using an arbitrary percentage of national output to determine defense spending and expecting Americans to pay the bill without question, policymakers should develop a national security strategy that places a priority on U.S. national security, including the nation’s fiscal health, and demands appropriate burden-sharing by our allies. America can maintain its military preeminence for decades if we reduce our military spending (or at least maintain the current caps), enact other reforms to get our fiscal house in order (including fixing entitlements), and allow our allies to better provide for their own security.
