Policy Institutes

Police Misconduct -- The Worst Case in March

Cato Op-Eds - Wed, 04/09/2014 - 14:02

Tim Lynch

Over at Cato’s Police Misconduct web site, we have identified the worst case for the month of March. It was the case of the soon-to-be-former Philadelphia police officer, Kevin Corcoran. Mr. Corcoran was driving the wrong way down a one-way street near a group of individuals when one of them pointed out that the officer had made an illegal turn. The officer got out and aggressively approached the individuals, who readied their cell phone cameras to capture the incident. The footage (warning: graphic language) shows Corcoran accosting one of the persons filming, an Iraq war veteran, and shouting “Don’t fucking touch me!” before slapping the vet’s phone out of his hand, throwing him up against his police vehicle, arresting him, and driving off. Another of the cameras showed the vet with his hands up in a defensive posture, retreating from the officer. When the vet asked why he had been arrested, Corcoran said it was for public intoxication. Corcoran later cooled off and, after finding out the individual was a veteran, let him go without charges.

Civil suits over Corcoran’s abuse of authority have been settled out of court in the past, but thanks to the quick cameras of the individuals he encountered here, Corcoran faces charges of false imprisonment, obstructing the administration of law, and official oppression—along with a suspension with intent to dismiss.  This incident shows the importance of the right to film police behavior.

Readers help us to track police misconduct stories from around the country–so if you see an item in the news from your community, please take a moment and send it our way using this form.

Categories: Policy Institutes

Social Cost of Carbon Inflated by Extreme Sea Level Rise Projections

Cato Op-Eds - Wed, 04/09/2014 - 13:57

Patrick J. Michaels and Paul C. "Chip" Knappenberger

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

As we mentioned in our last post, the federal Office of Management and Budget (OMB) is in the process of reviewing how the Obama administration calculates and uses the social cost of carbon (SCC). The SCC is a loosey-goosey computer model result that attempts to determine the present value of future damages that result from climate change caused by pernicious economic activity. Basically, it can be gamed to give any result you want.

We have filed a series of comments with the OMB outlining what is wrong with the current federal determination of the SCC used as the excuse for more carbon dioxide restrictions. There is so much wrong with the feds’ SCC that we concluded that, rather than just update it, the OMB ought to chuck the whole concept of the social cost of carbon out the window and quickly close and lock it.

We have discussed many of the problems with the SCC before, and in our last post we described how the feds have turned the idea of a “social cost” on its head. In this installment, we describe a particularly egregious fault that exists in at least one of the prominent models used by the federal government to determine the SCC: The projections of future sea-level rise (a leading driver of future climate change-related damages) from the model are much higher than even the worst-case mainstream scientific thinking on the matter. This necessarily results in an SCC determination that is higher than the best science could possibly allow.

The text below, describing our finding, is adapted from our most recent set of comments to the OMB.

The Dynamic Integrated Climate-Economy (DICE) model, developed by Yale economist William Nordhaus (2010a), is what is termed an “integrated assessment model,” or IAM. An IAM is a computer model that combines economics, climate change, and the feedbacks between the two to project how future societies are impacted by projected climate change and, ultimately, to determine the social cost of carbon (i.e., how much future damage, in today’s monetary terms, occurs for each unit emission of carbon (dioxide)).
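To make the mechanics concrete, here is a minimal sketch, in Python, of what an IAM’s SCC calculation boils down to: compute damages along a baseline emissions path and along a path with one extra ton of carbon, then take the discounted difference. Everything in it (the damage function, the parameter values) is an illustrative placeholder, not the actual DICE code.

```python
# Schematic of an IAM's social-cost-of-carbon calculation.
# All numbers and functional forms are illustrative placeholders; a
# real IAM chains emissions -> concentrations -> temperature ->
# percentage of GDP lost, with far richer dynamics.

def damages(emissions_path, year):
    """Dollar damages in a given year, driven (here, crudely) by
    cumulative emissions through that year."""
    cumulative = sum(emissions_path[:year + 1])
    return 0.0001 * cumulative ** 2  # purely illustrative

def scc(baseline, discount_rate=0.03, horizon=300):
    """SCC = present value of the extra damages caused by emitting
    one additional ton of carbon today."""
    perturbed = [baseline[0] + 1.0] + baseline[1:]  # one extra ton in year 0
    total = 0.0
    for t in range(horizon):
        extra = damages(perturbed, t) - damages(baseline, t)
        total += extra / (1 + discount_rate) ** t
    return total

baseline_emissions = [10.0] * 300  # flat emissions path, illustrative
print(f"Illustrative SCC: ${scc(baseline_emissions):.2f} per ton")
```

Note that the choice of discount rate alone can swing the answer dramatically, which is part of why the SCC can be gamed to give any result you want.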

In examining the climate change output from the DICE model, we found that it projects a degree of future sea level rise that far exceeds mainstream projections and is unsupported by the best available science. The sea level rise projections from more than half of the future scenarios examined exceed even the highest end of the projected sea level rise by the year 2300 as reported in the Fifth Assessment Report (AR5) of the UN’s Intergovernmental Panel on Climate Change (see Figure 1).

Figure 1. Projections of sea level rise from the DICE model (the arithmetic average of the 10,000 Monte Carlo runs from each scenario) for the five scenarios examined by the federal interagency working group (colored lines) compared with the range of sea level rise projections for the year 2300 given in the IPCC AR5 (represented by the vertical blue bar). (DICE data provided by Kevin Dayaratna and David Kreutzer of the Heritage Foundation)

Interestingly, Nordhaus (2010b) recognizes that the DICE sea level rise projections are outside the mainstream climate view as expressed by the IPCC:

“The RICE [DICE] model projection is in the middle of the pack of alternative specifications of the different Rahmstorf specifications. Table 1 shows the RICE, base Rahmstorf, and average Rahmstorf. Note that in all cases, these are significantly above the IPCC projections in AR4.” [emphasis added]

The justification given for the high sea-level rise projections in the DICE model (Nordhaus, 2010b) is that they closely match the results of a “semi-empirical” methodology employed by Rahmstorf (2007) and Vermeer and Rahmstorf (2009).

However, as we have pointed out, recent science has shown the “semi-empirical” approach to projecting future sea level rise to be unreliable. For example, Gregory et al. (2012) examined the assumptions used in the “semi-empirical” methods and found them to be unsubstantiated. Gregory et al. (2012) specifically refer to the results of Rahmstorf (2007) and Vermeer and Rahmstorf (2009):

The implication of our closure of the [global mean sea level rise, GMSLR] budget is that a relationship between global climate change and the rate of GMSLR is weak or absent in the past. The lack of a strong relationship is consistent with the evidence from the tide-gauge datasets, whose authors find acceleration of GMSLR during the 20th century to be either insignificant or small. It also calls into question the basis of the semi-empirical methods for projecting GMSLR, which depend on calibrating a relationship between global climate change or radiative forcing and the rate of GMSLR from observational data (Rahmstorf, 2007; Vermeer and Rahmstorf, 2009; Jevrejeva et al., 2010).

In light of these findings, the justification for the very high sea-level rise projections produced by the DICE model is not acceptable.

Given the strong relationship between sea-level rise and future damage built into the DICE model, there can be no doubt that the SCC estimates from the DICE model are higher than the best science can allow and, consequently, should not be accepted by the OMB as a reliable estimate of the social cost of carbon.

We did not investigate the sea-level rise projections from the other two IAMs employed in the federal SCC determination, but such an analysis must be carried out prior to extending any confidence in the values of the SCC resulting from those models—confidence that we demonstrate cannot be assigned to the DICE determinations of the social cost of carbon.

References:

Gregory, J., et al., 2012. Twentieth-century global-mean sea-level rise: is the whole greater than the sum of the parts? Journal of Climate, 26, 4476-4499, doi:10.1175/JCLI-D-12-00319

Nordhaus, W. 2010a. Economic aspects of global warming in a post-Copenhagen environment. Proceedings of the National Academy of Sciences 107(26): 11721-11726.

Nordhaus, W., 2010b. Projections of Sea Level Rise (SLR), http://www.econ.yale.edu/~nordhaus/homepage/documents/SLR_021910.pdf


Categories: Policy Institutes

The Fourth Amendment: Cars, Phones, and Keys?

Cato Op-Eds - Wed, 04/09/2014 - 12:53

Jim Harper

Here’s a law-school hypothetical for you: Suppose a gang-banger is pulled over for having expired tags on his car. He has no driver’s license, and records show that he has repeatedly driven without a license. The protocol in such situations is to impound the car to prevent him from driving unlicensed again, and the impoundment search reveals that he has guns hidden in the car. He is arrested, patted down, and his possessions seized to secure officer safety during his transportation and booking.

Now suppose that police officers take the gang-banger’s car out of the impound yard and drive it around looking for his confederates and for more evidence against him. Can they use the car for this purpose?

If you’re like most people, you probably think the answer is: “No.” But can you say why?

In two cell-phone-seizure cases headed for Supreme Court argument this month, Ilya Shapiro and I have argued for a sharp delineation of the property right that government agents seize when they arrest a suspect and take control of his things. They may rightly seize possession of an article, but they may not therefore put that item to whatever use they please.

The first paragraph above describes the facts in Riley v. California, on which we briefed the Court last month. Government agents did not use Riley’s car to further investigate him, but they twice used his cell phone to gather more evidence of his wrongful behavior.

Though they had properly seized the physical phone, they did not get a warrant to search the phone’s contents, and we think that violates the Fourth Amendment. Phones today carry huge amounts of information that are equivalent to the papers, postal mail, books, drawings, and portraits of the founding era, which the Fourth Amendment was designed to protect.

The second case, in which we filed today, is United States v. Wurie. It is similar: arresting officers seized an arrestee’s flip phone. After the phone received calls identified on its exterior display screen as coming from “my house,” they opened it and looked up the number so they could learn the address and take their investigation there. We argue that they were entitled to observe and take cognizance of the information the phone put in plain view, but having seized the phone didn’t entitle them to use the phone for further investigation without a warrant—even though it seemed to provide easy access to interesting evidence.

They didn’t get a warrant to search at Wurie’s house either. They took his keys, which they had also seized upon his arrest, and used them to open the door to the vestibule of his duplex apartment and then to test the lock on a second-floor residence. The keys unlocked the door of the first-floor apartment, behind which were a woman and her baby.

Possession of those keys didn’t entitle government agents to go use them on the doors of two houses, even to turn the locks and confirm or deny their suspicions about Wurie’s residency.

The use of the keys is not an issue in the case, but it helps illustrate the difference between possession and use. When an item is taken from an arrestee in the interest of officer safety and preventing destruction of evidence, this does not entitle law enforcement officers to use it any way they please. Government agents’ use of Wurie’s cell phone to investigate him was an additional seizure beyond the taking of possession that happened when he was arrested. It should have required a warrant because of the volume of personal and private information—digital papers and effects—that cell phones access and store.

It may be easier to argue that cell phones shouldn’t be searched without a warrant because that violates a “reasonable expectation of privacy”—and it probably does—but that has not proven to be a constitutional test that courts can reliably administer. It is as likely to produce bad results as good ones because it puts judges in the role of making sweeping statements about societal values rather than determining the facts and law in individual cases.

If we can convince the Court to flex some atrophied property muscles and recognize the difference between taking possession of a thing and making use of it, this could be the basis of stronger Fourth Amendment law, in which the courts apply the terms of the law to the facts of cases rather than pronouncing rules based on soaring, untethered doctrine like the “reasonable expectation of privacy” test.

Categories: Policy Institutes

School Choice Lawsuits and Legislation Roundup

Cato Op-Eds - Wed, 04/09/2014 - 12:09

Jason Bedrick

We’re only at hump day, but this week has already seen the filing of a new anti-school choice lawsuit, the dismissal of another, the potential resolution of a third, and the adoption of a new school choice program. [UPDATE: Plus the passage of a second school choice program. See below.]

Alabama: Yesterday, a federal judge dismissed the Southern Poverty Law Center’s ridiculous lawsuit against Alabama’s scholarship tax credit program, which essentially claimed that the program violated the Equal Protection Clause because it did not solve all the problems facing education in Alabama. The SPLC argued that the law creates two classes of citizens: those who can afford decent schooling and those who cannot. In fact, those classes already exist, but the law moves some students from the latter category into the former, as the judge wisely recognized:

“The requested remedy is arguably mean: Withdraw benefits from those students who can afford to escape non-failing schools. The only remedy requested thus far would leave the plaintiffs in exactly the same situation to which they are currently subject, but with the company of their better-situated classmates. The equal protection requested is, in effect, equally bad treatment,” the judge said.

The scholarship program still faces a lawsuit from Alabama’s teachers union.

Georgia: Anti-school choice activists filed a lawsuit against Georgia’s scholarship tax credit program, alleging that it violates the state constitution’s ban on granting public funds to religious institutions. The lawsuit is longer and more complicated than similar suits in other states, and portions requesting that the government enforce certain accountability measures (e.g., making sure that only eligible students are receiving scholarships) may actually have merit. However, the central claim, that a private individual’s money becomes the government’s even before reaching the tax collector’s hand, has been forcefully rejected by the U.S. Supreme Court and by supreme courts in states with similar constitutional language.

Kansas: In the best school choice news of the week, as a part of its school finance legislation, Kansas lawmakers included both a scholarship tax credit program for low-income students and a personal-use tax credit. The former would grant corporations tax credits worth 70% of their donations to scholarship organizations that aid students from families earning up to 185% of the federal poverty line. The program is capped at $10 million. The personal-use tax credit grants $1,000 per child in tax credits against the family’s property tax liability up to $2,500 in total for any family without any students attending a government school.
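For concreteness, here is a minimal sketch of the two Kansas credits as described above. The function names and the federal poverty line figure are illustrative assumptions on our part; only the 70 percent, 185 percent, $10 million, $1,000, and $2,500 parameters come from the legislation as reported.

```python
# A sketch of the two Kansas credits described above; illustrative only.

FPL_FAMILY_OF_FOUR = 23_850  # 2014 federal poverty guideline, family of four

def corporate_credit(donation):
    """Tax credit worth 70% of a donation to a scholarship organization
    (subject to the program's $10 million statewide cap)."""
    return 0.70 * donation

def student_eligible(family_income, fpl=FPL_FAMILY_OF_FOUR):
    """Scholarships aid students from families earning up to 185% of the
    federal poverty line."""
    return family_income <= 1.85 * fpl

def personal_use_credit(num_children):
    """$1,000 per child against property tax liability, up to $2,500,
    for families without students in a government school."""
    return min(1_000 * num_children, 2_500)

print(corporate_credit(10_000))   # 7000.0: credit on a $10,000 donation
print(student_eligible(40_000))   # True: under 185% of the poverty line
print(personal_use_credit(3))     # 2500: the cap binds at three children
```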

Louisiana: A federal judge has mostly sided with the U.S. Department of Justice in its lawsuit demanding that Louisiana fork over data about students participating in the state’s school voucher program, including their race and the racial breakdown of both the government schools they are leaving and the private schools they want to attend. The DOJ wanted that data so that it can challenge individual vouchers if a student’s departure would leave a district “too white” or “too black” (no word yet on whether the DOJ will challenge families whose decision to move out of the district has the exact same impact). However, the judge required the state to provide the data to the DOJ only 10 days before issuing vouchers rather than 45 days beforehand, as the DOJ had requested. A study sponsored by the state of Louisiana determined that the voucher program has had a positive impact on racial integration.

Lawsuits against scholarship tax credit programs in New Hampshire, North Carolina, and Oklahoma are still pending. Parents for Educational Freedom in North Carolina released the following video announcing their efforts to fight the lawsuit:

PEFNC President Darrell Allison on “One of 4500” Campaign

UPDATE: 

Alaska: Last night, Alaska’s House of Representatives passed a scholarship tax credit program. The bill still has to go to the state senate and the governor.

Categories: Policy Institutes

Obama Administration: Federal Spending Essential to Technological Progress

Cato Op-Eds - Wed, 04/09/2014 - 11:30

Andrew J. Coulson

According to Politico,

Innovation has been slow to reach classrooms across America in part because the federal government spends very little to support basic research on education technology, a senior White House official said Tuesday.

Really?

Does the presence or absence of federal research spending really determine an industry’s rate of technological progress? Was federal spending a driving force in the leap from cathode ray tubes to flat panel displays? Was it responsible for the birth of the “brick” cell phone of 1984 and its astonishing progress from a pricy dumb radio to an inexpensive supercomputer/GPS device/entertainment center? Is federal research spending the reason desktop laser printers went from a $15,000 (inflation-adjusted) plaything of the rich to a $100 commodity?

No. Not really.

If anything, the rate of technological progress across fields seems negatively correlated with federal spending—and indeed with government spending at all levels. As illustrated in my recent study of State Education Trends, education has suffered a massive productivity collapse over the past 40 years. Perhaps not coincidentally, it is the only field in this country dominated by a government-funded, state-run monopoly.

Categories: Policy Institutes

France's Valls Is No Bill Clinton

Cato Op-Eds - Tue, 04/08/2014 - 15:04

Steve H. Hanke

President Francois Hollande has put in place a new French government led by Prime Minister Manuel Valls. This maneuver has all the hallmarks of shuffling the deck chairs on the Titanic. Yes, one has the chilling feeling that accidents are waiting to happen.

President Hollande’s new lineup is loaded with contradictions. That’s not a good sign.

Just take Prime Minister Valls’ assertion that, when it comes to economics, he is a clone of Bill Clinton. For anyone familiar with the facts, this claim is bizarre, if not delusional.

When it comes to France’s fiscal stance, the Valls’ government is fighting austerity tooth and nail. Indeed, the Socialist government is seeking greater leeway from the European Commission (read: Germany) over targets for reducing France’s stubborn budget deficit. With French government expenditures accounting for a whopping 56.6 percent of GDP, it’s truly astounding that the government is reluctant to engage in a bit of belt tightening.

This brings us back to Valls’ self-promotion – namely, his comparing himself to Bill Clinton. For a reality check, a review of the fiscal records of U.S. presidents is most edifying. Let’s take a look at Clinton:

The Clinton presidency was marked by the most dramatic decline in the federal government’s share of the U.S. economy since 1952, Harry Truman’s last full year in office. The Clinton administration reduced the relative size of government by 3.9 percentage points. Since 1952, no other president has even come close. At the end of his second term, President Clinton’s big squeeze left the size of government, as a percent of GDP, at 18.2 percent.

What is noteworthy is that the squeeze was not only in defense spending, but also in non-defense expenditures. Indeed, the non-defense squeeze accounted for 2.2 percentage points of Clinton’s total 3.9 percentage point reduction in the relative size of the federal government. Since 1952, the only other president who has been able to reduce non-defense expenditures was Ronald Reagan.

During his presidency, Clinton squeezed and squeezed hard, and his rhetoric matched his actions. Recall that in his 1996 State of the Union address, he declared that “the era of big government is over.”

When it comes to fiscal rhetoric and record, it’s hard to imagine that Manuel Valls – even in his wildest dreams – will be able to match Bill Clinton, the king of the fiscal squeeze.

Categories: Policy Institutes

International Regulatory Conflict

Cato Op-Eds - Tue, 04/08/2014 - 12:32

Simon Lester

My colleague Peter Van Doren posted here yesterday about a new National Highway Traffic Safety Administration (NHTSA) rule which mandates that “all cars and light trucks sold in the United States in 2018 have rearview cameras installed.” I’m going to leave the analysis of the domestic regulatory aspects of this issue to experts like Peter. I just wanted to comment briefly on some of the international aspects.

In particular, what if other governments decide to regulate in this area as well and they all do it differently?  That would mean significant costs for car makers, as they would have to tailor their cars to meet the requirements of different governments. Note that the U.S. regulation doesn’t just say, “cars must have a rear-view camera.”  Rather, it gets very detailed:

The final rule amends a current standard by expanding the area behind a vehicle that must be visible to the driver when the vehicle is shifted into reverse. That field of view must include a 10-foot by 20-foot zone directly behind the vehicle. The system used must meet other requirements as well, including the size of the image displayed for the driver. 

In contrast to a market solution, which provides flexibility as to what will be offered, the regulatory approach has very specific requirements.

As far as I have been able to find out, the United States is the first to regulate here, but others are likely to follow. When the EU or Japan turn to the issue, for example, will they develop regulations that are incompatible with the U.S. approach? Will there be a proliferation of conflicting regulations?

In theory, it’s easy to avoid these problems. Smart regulators would recognize that their foreign counterparts’ regulations are equally effective. But in other areas of automobile regulation, we haven’t seen enough of this cooperation. The rear-view camera issue provides an opportunity for regulators from different countries to work together to avoid making regulation even more costly than it already is.

Categories: Policy Institutes

Washington Should Not Risk War over Ukraine

Cato Op-Eds - Tue, 04/08/2014 - 10:39

Doug Bandow

Russia’s brazen annexation of Crimea has generated a flood of proposals to reinvigorate NATO. Doing so would make America less secure.

For most of its history, the United States avoided what Thomas Jefferson termed “entangling alliances.”  In World War II and the Cold War, the United States aided friendly states to prevent hostile powers from dominating Eurasia. 

The collapse of communism eliminated the prospect of any nation controlling Europe and Asia. But NATO developed new roles to stay in business, expanding into a region highly sensitive to Russia. 

The invasion of Crimea has triggered a cascade of demands for NATO, mostly meaning America, to act. President Barack Obama responded: “Today NATO planes patrol the skies over the Baltics, and we’ve reinforced our presence in Poland, and we’re prepared to do more.”

The Eastern Europeans desired much more. An unnamed former Latvian minister told the Economist: “We would like to see a few American squadrons here, boots on the ground, maybe even an aircraft carrier.” A gaggle of American policy advocates agreed.

Moreover, Secretary General Anders Fogh Rasmussen said alliance members would “intensify our military cooperation with Ukraine,” including assisting in modernizing its military.  A number of analysts would make Ukraine an ally in everything but name.

For instance, wrote Kurt Volker of the McCain Institute, NATO should “[d]etermine that any further assaults on Ukraine’s territorial integrity beyond Crimea represent a direct threat to NATO security and … will be met with a NATO response.”  Charles Krauthammer suggested creating “a thin tripwire of NATO trainer/advisers” to “establish a ring of protection at least around the core of western Ukraine.”

AEI’s Thomas Donnelly proposed “putting one brigade astride each of the two main roads” connecting Crimea to the Ukrainian mainland, “backed by U.S. aircraft.” Robert Spalding of the Council on Foreign Relations advocated deploying F-22 fighters along “with an American promise to defend Ukrainian skies from attack.”

Senators John McCain and Lindsey Graham urged increasing “cooperation with, and support for, Ukraine, Georgia, Moldova, and other non-NATO partners.” John Bolton suggested putting “both Georgia and Ukraine on a clear path to NATO membership.” 

Of course, more must be spent on the military. Ilan Berman of the American Foreign Policy Council complained that “The past half-decade has seen the U.S. defense budget fall victim to the budgetary axe.” 

Yet America’s military spending is up 37 percent over the last two decades, while collective expenditures by NATO’s other 27 members are down by 3.4 percent. Overall, the Europeans spend 1.6 percent of GDP on the military, compared to America’s 4.4 percent. Today most NATO members, including the Eastern Europeans–with the exception of Poland–continue to cut outlays.

Of course, U.S. officials insist that Europe should do more. But the Europeans have no reason to change so long as Washington guarantees their security.

Despite Europe’s anemic military efforts, it still far outspends Russia. And with a collective GDP more than eight times that of Russia, the Europeans could do far more if they desired. 

The basic problem, noted Stephen Walt, is that “president after president simply assumed the pledges they were making would never have to be honored.” Obviously, an American threat to go to war may deter. But history is replete with alliances that failed to prevent conflict and became transmission belts of war instead. 

In fact, in 2008 Georgia appeared to believe that Washington would back it against Russia. Offering military support to Ukraine could have a similar effect.

Washington should bar further NATO expansion. Over the longer term the United States should turn responsibility for Europe’s defense back to Europe. 

As I point out in my latest Forbes column: “Americans should sympathize with the Ukrainian people, who have been ill-served by their own government as well as victimized by Moscow. But that does not warrant extending military support or security guarantees to Kiev. Doing so would defeat the original purpose of the alliance: enhancing U.S. security.”

Today Washington could best protect itself outside of the transatlantic alliance.

Categories: Policy Institutes

NHTSA’s Rearview Camera Mandate

Cato Op-Eds - Mon, 04/07/2014 - 13:56

Peter Van Doren

Last week the National Highway Traffic Safety Administration (NHTSA) completed rulemaking that mandated that all cars and light trucks sold in the US in 2018 have rearview cameras installed.

In 2008 Congress enacted legislation that mandated that the NHTSA issue a rule to enhance rear view visibility for drivers by 2011.  Normally, such a delay would be held up as an example of bureaucratic ineptitude and waste. But in this case, NHTSA was responding to its own analysis that determined (p. 143) that driver error is the major determinant of the effectiveness of backup assist technologies including cameras.

In addition, NHTSA concluded that the cost per life saved from installation of the cameras ranged from about 1.5 times to more than 3 times the $6.1 million value of a statistical life used by the Department of Transportation to evaluate the cost-effectiveness of its regulations. NHTSA waited until the possibility of intervention by the courts forced it to issue the rule. The problem in this case lies with Congress overreacting to rare events, not with the agency.
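The arithmetic of that cost-effectiveness finding is easy to sketch. The code below is an illustration of the implied test, not NHTSA’s actual methodology; only the $6.1 million value of a statistical life and the 1.5x-to-3x range come from the analysis.

```python
# Illustrative cost-effectiveness check: a rule is cost-effective only
# if its cost per statistical life saved does not exceed the value of
# a statistical life (VSL) used by the Department of Transportation.

VSL = 6.1e6  # DOT value of a statistical life, in dollars

def cost_effective(cost_per_life_saved, vsl=VSL):
    return cost_per_life_saved <= vsl

# NHTSA found the mandate's cost per life saved ran from roughly
# 1.5 to more than 3 times the VSL:
for multiple in (1.5, 3.0):
    cost = multiple * VSL
    print(f"${cost / 1e6:.2f} million per life saved -> "
          f"cost-effective: {cost_effective(cost)}")
```

By the agency’s own numbers, the rule fails the test at both ends of the range.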

For more on auto safety regulation, see Kevin McDonald’s piece in Regulation here.

Categories: Policy Institutes

Incorrect, Gov. Bush

Cato Op-Eds - Mon, 04/07/2014 - 13:26

Neal McCluskey

Speaking off the cuff, it’s easy to make a mistake. But for a long time, former Florida governor – and trendy presidential possibility – Jeb Bush has been criticizing Common Core opponents for, among other things, saying the Core was heavily pushed by the federal government. He is still getting the basics wrong on how Core adoption went down, and that must be called out.

Interviewed at this weekend’s celebration of the 25th anniversary of his father’s presidential election – an event where, perhaps, he actually knew which questions were coming – Bush said the only way one could think the Core was a “federal program” is that the Obama administration offered waivers from the No Child Left Behind Act if states adopted it. (Start around the 7:15 mark.) And even that, he said, basically came down to states having “to accept something [they] already did”: agree to the Core.

Frankly, I’m tired of having to make the same points over and over, and I suspect most people are sick of reading them. Yet, as Gov. Bush makes clear, they need to be repeated once more: Washington coerced Core adoption in numerous ways, and creators of the Core – including the National Governors Association and Council of Chief State School Officers – asked for it!

In 2008 – before there even was an Obama administration – the NGA and CCSSO published Benchmarking for Success, which said the feds should incentivize state use of common standards through funding carrots and regulatory relief. That was eventually repeated on the website of the Common Core State Standards Initiative.

The funding came in the form of Race to the Top, a piece of the 2009 “stimulus” that de facto required states to adopt the Core to compete for a chunk of $4.35 billion. Indeed, most states’ governors and chief school officers promised to adopt the Core before the final version was even published. The feds also selected and paid for national tests to go with the Core. Finally, waivers from the widely hated NCLB were offered after RTTT, cementing adoption in most states by giving only two options to meet “college- and career-ready standards” demands: Either adopt the Core, or have a state college system certify a state’s standards as college and career ready.

Gov. Bush, the facts are clear: The feds bought initial adoption with RTTT, then coerced further adoption through NCLB waivers. And all of that was requested by Core creators before there was a President Obama!

Let’s never have to go over this again!

Categories: Policy Institutes

50 Years of Federal Spending

Cato Op-Eds - Mon, 04/07/2014 - 13:06

Chris Edwards

Fifty years ago, one of the biggest-spending presidents in U.S. history was settling into office after coming to power the prior November. Lyndon Johnson signed into law Medicare, Medicaid, and hundreds of subsidy programs for the states and cities.

Johnson was followed in office by one of the worst presidents of the 20th century in terms of domestic policy. Richard Nixon added and expanded many programs, and he helped to cement in place the array of new federal interventions pioneered by Johnson.

The chart below shows federal spending over the five decades since Johnson. Spending is divided into four components and measured as a share of gross domestic product (GDP).

Blue Line: Entitlement spending soared from the mid-1960s to the early-1980s. Medicare and Medicaid grew rapidly after being created in 1965, and Nixon signed into law numerous large increases in Social Security for current recipients. A month before the 1972 election, Social Security recipients received a letter informing them that Nixon had just signed a law bumping up their benefits by 20 percent. The recent spike in spending stems from increases in Medicare, Medicaid, food stamps, and other programs.

Black Line: Defense spending spiked in the late 1960s due to the Vietnam War. The number of U.S. troops in Vietnam peaked in 1968, then fell steadily after that. The Ronald Reagan and George W. Bush defense build-ups are also visible on the chart.

Red Line: Defense spending fell as a share of GDP in the 1970s, while nondefense discretionary spending rose. In the 1980s and 1990s, nondefense spending was restrained under Reagan and Bill Clinton, but it rose again under George W. Bush in the 2000s.

Green Line: Bush and Barack Obama have been lucky budgeters because federal interest costs have been low during the past decade, despite the large deficits run by these two presidents. The luck won’t last: under its baseline, CBO projects that interest costs will rise from 1.3 percent of GDP today, to 2.7 percent by 2020, and to 4.1 percent by 2030.

Categories: Policy Institutes

The Golden Rule of Spending Restraint

Cato Op-Eds - Mon, 04/07/2014 - 12:46

Daniel J. Mitchell

My tireless (and probably annoying) campaign to promote my Golden Rule of spending restraint is bearing fruit.

The good folks at the editorial page of the Wall Street Journal allowed me to explain the fiscal and economic benefits that accrue when nations limit the growth of government.

Here are some excerpts from my column, starting with a proper definition of the problem.

What matters, as Milton Friedman taught us, is the size of government. That’s the measure of how much national income is being redistributed and reallocated by Washington. Spending often is wasteful and counterproductive whether it’s financed by taxes or borrowing.

So how do we deal with this problem?

I’m sure you’ll be totally shocked to discover that I think the answer is spending restraint.

More specifically, governments should be bound by my Golden Rule.

Ensure that government spending, over time, grows more slowly than the private economy. …Even if the federal budget grew 2% each year, about the rate of projected inflation, that would reduce the relative size of government and enable better economic performance by allowing more resources to be allocated by markets rather than government officials.
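To see the arithmetic behind that claim, consider a minimal sketch in which spending follows the Golden Rule while the economy grows faster. The 4 percent nominal growth rate is an illustrative assumption, not a figure from the column.

```python
# Golden Rule arithmetic: spending that grows more slowly than the
# economy mechanically shrinks government's share of GDP.

spending, gdp = 20.0, 100.0  # start with government at 20% of GDP
for year in range(1, 11):
    spending *= 1.02  # Golden Rule: spending grows 2% per year
    gdp *= 1.04       # assumed: the economy grows 4% per year
    print(f"Year {year:2d}: government = {100 * spending / gdp:.1f}% of GDP")
# After a decade the share falls from 20% to roughly 16.5%, with no
# nominal cuts at all.
```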

I list several reasons why Mitchell’s Golden Rule is the only sensible approach to fiscal policy.

A golden rule has several advantages over fiscal proposals based on balanced budgets, deficits or debt control. First, it correctly focuses on the underlying problem of excessive government rather than the symptom of red ink. Second, lawmakers have the power to control the growth of government spending. Deficit targets and balanced-budget requirements put lawmakers at the mercy of economic fluctuations that can cause large and unpredictable swings in tax revenue. Third, spending can still grow by 2% even during a downturn, making the proposal more politically sustainable.

The last point, by the way, is important because it may appeal to reasonable Keynesians. And, in any event, it means the Rule is more politically sustainable.

I then provide lots of examples of nations that enjoyed great success by restraining spending. But rather than regurgitate several paragraphs from the column, here’s a table I prepared that wasn’t included in the column because of space constraints.

It shows the countries that restrained spending and the years that they followed the Golden Rule. Then I include three columns of data. First, I show how fast spending grew during the period, followed by numbers showing what happened to the overall burden of government spending and the change to annual government borrowing.

Last but not least, I deal with the one weakness of Mitchell’s Golden Rule. How do you convince politicians to maintain fiscal discipline over time?

I suggest that Switzerland’s “debt brake” may be a good model.

Can any government maintain the spending restraint required by a fiscal golden rule? Perhaps the best model is Switzerland, where spending has climbed by less than 2% per year ever since a voter-imposed spending cap went into effect early last decade. And because economic output has increased at a faster pace, the Swiss have satisfied the golden rule and enjoyed reductions in the burden of government and consistent budget surpluses.

In other words, don’t bother with balanced budget requirements that might backfire by giving politicians an excuse to raise taxes.

If the problem is properly defined as being too much government, then the only logical answer is to shrink the burden of government spending.

Last but not least, I point out that Congressman Kevin Brady of Texas has legislation, the MAP Act, that is somewhat similar to the Swiss Debt Brake.

We know what works and we know how to get there. The real challenge is convincing politicians to bind their own hands.

Categories: Policy Institutes

Education, Standards, and Private Certification

Cato Op-Eds - Mon, 04/07/2014 - 12:03

Jason Bedrick

Can there be standards in education without the government imposing them?

Too many education policy wonks, including some with a pro-market bent, take it for granted that standards emanate solely from the government. But that does not have to be the case. Indeed, the lack of a government-imposed standard leaves space for competing standards. As a result of market incentives, these standards are likely to be higher, more diverse, more comprehensive, and more responsive to change than the top-down, one-size-fits-all standards that governments tend to impose. I explain why this is so at Education Next today in “What Education Reformers Can Learn from Kosher Certification.”

Categories: Policy Institutes

Hungary's Slide Towards Authoritarianism

Cato Op-Eds - Mon, 04/07/2014 - 11:20

Dalibor Rohac

Yesterday’s general election in Hungary has given Viktor Orbán’s party, Fidesz, a very comfortable majority in the Hungarian Parliament, while strengthening the openly racist Jobbik party, which earned over 21 percent of the popular vote. Neither development is good news for Hungarians or for Central Europe as a whole.

In the 1990s, Hungary was among the most successful transitional economies of Central and Eastern Europe. With significant exposure to markets in the final years of the Cold War and a political establishment committed to reforms, it was often singled out as an example of what a successful, sustained transition toward markets and democracy should look like.

In 2014, the situation could not be more different. Hungary’s economic policies have become increasingly populist and haphazard, as the government has confiscated the assets of private pension funds, undermined the independence of the central bank, and botched the consolidation of the country’s public finances (p. 77). Worse yet, Hungary has seen a growth of nationalist and anti-Semitic sentiments which have not been adequately countered by the country’s political elites. In a recent column, I wrote about Mr. Orbán’s personal responsibility for the disconcerting political and economic developments in Hungary:

Mr. Orbán’s catering to petty nationalism often borders on selective amnesia about certain parts of Hungarian history. Recently the Federation of Hungarian Jewish Communities, the Mazsihisz, announced it would not take part in the Orbán government’s Holocaust commemorations. According to the Mazsihisz, the framing of the ceremonies whitewashes the role that the Hungarian government played and focuses exclusively on the crimes perpetrated by the Germans—despite the fact that Hungary adopted its first anti-Jewish laws as early as 1938.

Mr. Orbán’s tone-deafness when it comes to historical symbols goes hand in hand with a concerted effort to undermine the foundations of liberal democracy and rule of law in Hungary. Since Mr. Orbán came to office four years ago, Fidesz has consolidated its political power and used it to pass controversial legislation tightening media oversight, as well as constitutional changes that curb judicial power and restrict political advertising, among other measures.

Categories: Policy Institutes

FEMA Disaster Declarations

Cato Op-Eds - Fri, 04/04/2014 - 16:34

Chris Edwards

I am writing a study on the Federal Emergency Management Agency (FEMA) and looking at the issue of presidential disaster declarations. Under the 1988 Stafford Act, a state governor may request that the president declare a “major disaster” in the state if “the disaster is of such severity and magnitude that effective response is beyond the capabilities of the state and the affected local governments.”

The main purpose of declarations is to impose on federal taxpayers the relief and rebuilding costs that would otherwise fall on state and local taxpayers and individuals in the affected area. Federalism is central to disaster planning and response in the United States, and federal aid is only supposed to be for the most severe events. Unfortunately, the relentless political forces that are centralizing power in just about every policy area are also doing so in disaster policy.

Below is a chart of FEMA data showing the number of “major disasters” declared by presidents since 1974, when the current process was put in place. The number of declared disasters has soared as presidents have sought political advantage in handing out more aid. Presidents have been ignoring the plain language of the Stafford Act, which allows for aid only in the most severe situations.

In the chart, I marked with red bars the years that presidents ran for reelection. In those years, presidents have generally declared the most major disasters. That was true of Ronald Reagan in 1984, George H.W. Bush in 1992, and Bill Clinton in 1996. George W. Bush declared the most disasters of his first term in his reelection year of 2004. The two presidents who do not fit the pattern are Jimmy Carter and Barack Obama.

Categories: Policy Institutes

Minimum Wage Solidarity Misplaced

Cato Op-Eds - Fri, 04/04/2014 - 14:31

James A. Dorn

Senate Democrats are anxious to bring the Minimum Wage Fairness Act (S. 1737) up for a vote to express their solidarity with “progressives.”  That solidarity, however, is misplaced. The bill is not a panacea for the prosperity of low-skilled workers; it is anti-free market and immoral—based on coercion, not consent.

The bill would increase the federal minimum wage to $10.10 after two years, index it for inflation, and increase the minimum for tipped workers.  Those changes would substantially increase the cost of hiring low-skilled workers, lead to job losses and unemployment (especially in the longer run as businesses shift to labor-saving methods of production), and slow job growth.

Although there is virtually no chance this bill would pass, Senate Majority Leader Harry Reid (D-Nev) wants it to come to the floor so he and his compatriots can express their support for low-income workers (and for unions and others who support the minimum wage increase) in an election year.  “Democrats are focused on the future,” says Reid, and “we were elected to improve people’s lives.” 

In a recent email, the Agenda Project Action Fund provided talking points and instructed recipients to contact their senators to bring the minimum wage issue to the floor for debate.  Erica Payne, founder of the Agenda Project, believes “a higher minimum wage is consistent with free market principles” and that conservatives should support the increase.  The talking points include assertions that “raising the minimum wage is a boon to business” and “will not lead to job loss.”

The Agenda Project’s goal is “to build a powerful, intelligent, well-connected political movement capable of identifying and advancing rational, effective ideas in the public debate and in so doing ensure our country’s enduring success.”  Those are admirable goals, but the minimum wage is neither a rational nor an effective means of attaining them.

Companies like the Gap and Costco, which have increased entry-level wages, do so because they expect those voluntary increases to be profitable in the long run.  Such actions are consistent with free market principles, unlike a minimum wage law that forces employers to pay more than the prevailing market wage and prevents workers from contracting for less than the legal minimum in order to retain or secure a job.

It is disingenuous to deny the law of demand: when a worker’s skill level and experience do not change and the government mandates a higher minimum wage that exceeds those workers’ productivity, employers will hire fewer workers.  Importantly, the negative impact on jobs for low-skilled workers will be stronger in the long run than the short run.  But politicians focus on the short run and argue that small increases in the minimum wage will not harm jobs.

Rationality depends on taking the long view and using the logic of the market to analyze the impact of the minimum wage and other policies.  Common sense and a massive amount of empirical evidence show that raising the minimum wage is not an effective solution to poverty or unemployment. The minimum wage rhetoric is one thing, reality another.

In a recent Tax & Budget Bulletin for the Cato Institute, noted labor economist Joseph J. Sabia of San Diego State University presents a strong body of evidence that minimum wage increases adversely affect employment opportunities for lower-skilled workers and that those who do benefit mostly come from non-poor households. That evidence, supported by numerous peer-reviewed journal articles, is in direct contrast to the Agenda Project’s contention that “since 1994, studies have found there is little to no evidence of employment reduction following minimum wage increases at both state and federal levels.”

The idea that a higher minimum wage will “drive the economy” and fuel economic growth is an illusion.  Workers must first produce more if they are to be paid more and keep their jobs.  A higher minimum wage is neither necessary nor sufficient for economic growth. Some workers would gain but others would lose, as would employers who are crowded out of the market or have fewer funds to invest, and consumers who have to pay higher prices. There is no free lunch.

The minimum wage redistributes a given economic pie; it doesn’t enlarge it.  The only way to “drive the economy” is to raise productivity, not the minimum wage.  The key factors that improve real economic growth—and lead to a higher standard of living—are institutional changes that safeguard persons and property, lower the costs of doing business, and encourage entrepreneurship.  Those institutions are endangered by the politicization of the labor market.

When New York State increased its minimum wage by 31 percent (from $5.15 an hour to $6.75) in 2004–06, the number of jobs open to younger, less-educated workers decreased by more than 20 percent, as Sabia, Richard Burkhauser, and Benjamin Hansen found in a landmark study published in the Industrial and Labor Relations Review in 2012.  An increase in the federal minimum wage from $7.25 an hour to $10.10 would no doubt have a similar impact.
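A back-of-the-envelope calculation shows what those numbers imply. The sketch below is ours, not the method of the Sabia, Burkhauser, and Hansen study, and carrying the New York elasticity over to the federal proposal is a strong assumption made purely for illustration.

```python
# Implied labor-demand response from the New York episode, applied
# (illustratively) to the proposed federal increase.

ny_wage_change = (6.75 - 5.15) / 5.15   # about +31%
ny_employment_change = -0.20            # about -20%, young/less educated
implied_elasticity = ny_employment_change / ny_wage_change
print(f"Implied elasticity: {implied_elasticity:.2f}")  # about -0.64

fed_wage_change = (10.10 - 7.25) / 7.25  # about +39%
fed_employment_change = implied_elasticity * fed_wage_change
print(f"Illustrative federal impact: {fed_employment_change:+.1%}")
# roughly -25% for the same group, if the elasticity carried over
```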

In another study, Sabia and Burkhauser found “no evidence that minimum wage increases between 2003 and 2007 lowered state poverty rates” (Southern Economic Journal, January 2010). Those workers most apt to lose their jobs as a result of a higher minimum wage are from low-income households. Hence, an increase in the minimum wage can actually increase poverty.  As David Neumark, Mark Schweitzer, and William Wascher noted in the Journal of Human Resources (2005), “The net effect of higher minimum wages is …  to increase the proportion of families that are poor and near-poor.”

Proponents of the higher minimum wage downplay those adverse consequences and point to widespread public support—and politicians like nothing better than polls to guide their agendas. The public supports higher minimum wages because they haven’t thought about the longer-run consequences. Most polls simply ask, “Are you in favor of a higher minimum wage?” without saying anything about the loss of jobs and unemployment that will occur.  When those costs are taken into account the majority swings against an increase in the minimum wage, as shown by Emily Ekins.

When legislators mandate a minimum wage above the market wage determined by demand and supply, they deprive workers and employers of the freedom of contract that lies at the heart of a dynamic market economy.  The wealth of a nation is not enhanced by prohibitions on free trade—whether in product, labor, or capital markets.  People should be free to choose and improve. If low-skilled workers can’t find a job at the minimum wage, they won’t have the opportunity to fully develop themselves and move up the income ladder.

Groups that are pushing for a higher minimum wage may have good intentions but they discount—or fail to understand—the longer-run adverse effects of that legislation on freedom and prosperity.  They only look at those who may benefit from a higher minimum wage, including union members, while downplaying the inevitable shift to labor-saving technology that will occur over time and the jobs that will never be created.

Those who argue that there is a moral case for a higher minimum wage seem to think that using the force of government/law to mandate wage rates that are greater than those freely negotiated in markets is both “fair” and “just.”  Yet the minimum wage by its very nature interferes with freedom of contract and, in that sense, is unjust. Moreover, it prevents mutually beneficial exchanges. A young worker with little education and few job skills who is willing to work at less than the minimum wage to get a job and gain experience is prevented from doing so.  How can that be “fair”?

Instead of solidarity, minimum wage proponents create dissension when workers find that prosperity cannot be created by a stroke of the legislative pen.  Politicians may promise a higher wage rate to low-skilled workers, but for those workers who lose their jobs, their incomes will be zero.  Even the CBO thinks the Obama promise of $10.10 an hour, if implemented, would lead to at least 500,000 fewer jobs for those the law is intended to help.

The Minimum Wage Fairness Act is the wrong medicine for improving the plight of low-income families and creating a prosperous nation. Poverty is not abolished by legislative fiat.  Rather, the path toward economic growth and well-being is paved with genuine free markets and limited government, and by thinking in terms of long-run effects of current legislation, not short-term benefits to special interests.

Categories: Policy Institutes

The Current Wisdom: The Administration’s Social Cost of Carbon Turns “Social Cost” on its Head

Cato Op-Eds - Thu, 04/03/2014 - 17:43

Paul C. "Chip" Knappenberger and Patrick J. Michaels

This Current Wisdom takes an in-depth look at how politics can masquerade as science.

                      “A pack of foma,” Bokonon said

                                                Paraphrased from Cat’s Cradle (1963), Kurt Vonnegut

In his 1963 classic, Cat’s Cradle, iconic writer Kurt Vonnegut described the sleepy little Caribbean island of San Lorenzo, where the populace was mesmerized by the prophet Bokonon, who created his own religion and his own vocabulary. Bokonon communicated his religion through simple verses he called “calypsos.” “Foma” are half-truths that conveniently serve the religion, and the paraphrase above is an apt description of the Administration’s novel approach to determining the “social cost of carbon” (dioxide). 

Because of a pack of withering criticism, the Office of Management and Budget (OMB) is now in the process of reviewing how the Obama Administration calculates and uses the social cost of carbon (SCC).  We have filed a series of Comments with the OMB outlining what is wrong with the current SCC determination. Regular readers of this blog are familiar with some of the problems that we have identified, but our continuing analysis of the Administration’s SCC has yielded a few more nuggets.

We describe a particularly rich one here—that the government wants us to pay more today to offset a modest climate change experienced by a wealthy future society than to help alleviate a lot of climate change impacting a less well-off future world.

This kind of logic might be applied by Bokonon on San Lorenzo, but here in sophisticated Washington? It is exactly the opposite of what a rational-thinking person would expect. In essence, the Obama Administration has turned the “social cost” of the social cost of carbon on its head.  The text below, describing this counterintuitive result, is adapted from our most recent set of Comments to the OMB.

The impetus behind efforts to determine the “social cost of carbon” is generally taken to be the desire to quantify the “externalities,” or unpaid future costs, that result from climate changes produced by anthropogenic emissions of carbon dioxide and other greenhouse gases. Such information could be a useful tool in guiding policy decisions regarding greenhouse gas emissions—if it were robust and reliable.

However, as is generally acknowledged, the results of such efforts (generally carried out through the development of Integrated Assessment Models, or IAMs) are highly sensitive not only to the model input parameters but also to how the models have been developed and what processes they try to include. One prominent economist, Robert Pindyck of M.I.T., recently wrote (Pindyck, 2013) that the sensitivity of the IAMs to these factors renders them useless in a policymaking environment:

Given all of the effort that has gone into developing and using IAMs, have they helped us resolve the wide disagreement over the size of the SCC? Is the U.S. government estimate of $21 per ton (or the updated estimate of $33 per ton) a reliable or otherwise useful number? What have these IAMs (and related models) told us? I will argue that the answer is very little. As I discuss below, the models are so deeply flawed as to be close to useless as tools for policy analysis. Worse yet, their use suggests a level of knowledge and precision that is simply illusory, and can be highly misleading.

…[A]n IAM-based analysis suggests a level of knowledge and precision that is nonexistent, and allows the modeler to obtain almost any desired result because key inputs can be chosen arbitrarily.

Nevertheless (or perhaps because of this), the federal government has now incorporated IAM-based determinations of the SCC into many types of new and proposed regulations. 

The social cost of carbon should simply be the fiscal impact on future society that human-induced climate change from greenhouse gas emissions will impose. Knowing this, we (policymakers and regular citizens) can decide how much (if at all) we are willing to pay currently to reduce the costs to future society.

Logically, we would probably be willing to sacrifice more now if we knew that future society would be impoverished and suffering from extreme climate change than if we knew that the future held only minor or modest climate changes impacting a society that will be very well off. We would expect the value of the social cost of carbon to reflect the difference between these two hypothetical future worlds—the SCC should be far greater in an impoverished future facing a high degree of climate change than in an affluent future confronted with much less climate change.

But if you thought this, you would be wrong. This is Bokononism.

Instead, the IAMs, as run by the federal “Interagency Working Group” (IWG), produce nearly the opposite result—that is, the SCC is far lower in the less affluent/high climate change future than it is in the more affluent/low climate change future. Bokonon says it is so.

We illustrate this illogical and impractical result using one of the Integrated Assessment Models used by the IWG—a model called the Dynamic Integrated Climate-Economy model (DICE) that was developed by Yale economist William Nordhaus. The DICE model was installed and run at the Heritage Foundation by Kevin Dayaratna and David Kreutzer using the same model set up and emissions scenarios as prescribed by the IWG. The DICE projections of future temperature change were provided to us by the Heritage Foundation.

Contrary to Einstein’s dictum, Bokonon does throw DICE. Figure 1 shows DICE projections of the earth’s average surface temperature for the years 2000-2300 produced by using the five different scenarios of how future societies develop (and emit greenhouse gases). 

Set aside the fact that anyone who thinks we can forecast the state of society 100 years from now (much less 300) is handing us a pack of foma. Heck, 15 years ago everyone knew the world was running out of natural gas.

The numerical values on the right-hand side of the figure are the values of the social cost of carbon associated with the temperature change resulting from each emissions scenario (the SCC is reported for the year 2020 in constant 2007 dollars, assuming a 3% discount rate). The temperature change can be considered a good proxy for the magnitude of the overall climate change impacts.

Figure 1. Future temperature changes, for the years 2000-2300, projected by the DICE model for each of the five emissions scenarios used by the federal Interagency Working Group. The temperature changes are the arithmetic average of the 10,000 Monte Carlo runs from each scenario. The 2020 value of the SCC (in $2007) produced by the DICE model (assuming a 3% discount rate) is included on the right-hand side of the figure. (DICE data provided by Kevin Dayaratna and David Kreutzer of the Heritage Foundation).

Notice in Figure 1 that the value of the SCC shows little (if any) correspondence to the magnitude of climate change. The MERGE scenario produces the greatest climate change and yet has the smallest SCC associated with it. The “5th Scenario,” which holds climate change to a minimum by imposing strong greenhouse gas emissions limitations, nevertheless carries an SCC more than 20% greater than the MERGE scenario’s. The global temperature change by the year 2300 in the MERGE scenario is 9°C, while in the “5th Scenario” it is only 3°C. The highest SCC comes from the IMAGE scenario, a scenario with mid-range climate change. All of this makes sense only to Bokonon.

If the SCC bears little correspondence to the magnitude of future human-caused climate change, then what does it represent?

Figure 2 provides some insight.

Figure 2. Future global gross domestic product, for the years 2000-2300, for each of the five emissions scenarios used by the federal Interagency Working Group. The 2020 value of the SCC (in $2007) produced by the DICE model (assuming a 3% discount rate) is included on the right-hand side of the figure.

When comparing future global gross domestic product (GDP) to the SCC, we see that, in general, the scenarios with higher future GDP (a more affluent future society) have higher SCC values, while the futures with lower GDP (a less affluent society) have lower SCC values.

Combining the results from Figures 1 and 2 thus illustrates the absurdities in the federal government’s (er, Bokonon’s) use of the DICE model. The scenario with the richest future society and a modest amount of climate change (IMAGE) has the highest value of the SCC associated with it, while the scenario with the poorest future society and the greatest degree of climate change (MERGE) has the lowest value of the SCC. Only Bokononists can understand this.

This counterintuitive result occurs because the damage functions in the IAMs produce output in terms of a percentage decline in GDP, which is then translated into a dollar amount and divided by the total carbon emissions to produce the SCC. Thus, even a small climate change-induced percentage decline in a high-GDP future yields greater dollar damages (i.e., a higher SCC) than a much larger climate change-induced percentage decline in a low-GDP future.
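To see the arithmetic, here is a minimal sketch in Python of the percentage-decline logic just described. The GDP levels, damage fractions, and emissions totals are invented for illustration only; they are not the IWG’s inputs or DICE outputs.

```python
# Hypothetical illustration (not the DICE model itself): a small
# percentage GDP loss in a rich future can out-dollar a much larger
# percentage loss in a poor future once damages are divided by
# emissions to yield a per-ton SCC.

def scc_proxy(gdp_trillions, damage_fraction, emissions_gt_c):
    """Dollar damages divided by cumulative carbon emissions ($/tonne C)."""
    damages_dollars = gdp_trillions * 1e12 * damage_fraction
    return damages_dollars / (emissions_gt_c * 1e9)

# Affluent, low-climate-change future: a 2% loss on a $500T economy.
rich = scc_proxy(gdp_trillions=500, damage_fraction=0.02, emissions_gt_c=100)

# Poor, high-climate-change future: a 6% loss on a $100T economy.
poor = scc_proxy(gdp_trillions=100, damage_fraction=0.06, emissions_gt_c=100)

print(f"Rich future: ${rich:.0f}/tC; poor future: ${poor:.0f}/tC")
# Rich future: $100/tC; poor future: $60/tC -- the affluent world gets
# the higher SCC despite losing a far smaller share of its output.
```

The richer world loses a smaller share of a much bigger pie, and the dollar arithmetic rewards the bigger pie; that is precisely the inversion displayed in Figures 1 and 2.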

Only Bokonon would want to spend (sacrifice) more today to help our rich descendants deal with a lesser degree of climate change than to help our relatively less-well-off descendants deal with a greater degree of climate change.

Yet that is what the government’s SCC would lead you to believe, and that is what the SCC implies when it is incorporated into federal cost-benefit analyses.

In principle, the way to handle this situation is to allow the discount rate to change over time. In other words, the richer we think people will be in the future (say, in the year 2100), the higher the discount rate we should apply to the damages (measured in 2100 dollars) they suffer from climate change, in order to decide how much we should be prepared to sacrifice today on their behalf.
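One standard way to formalize that intuition is the Ramsey rule, under which the discount rate rises with consumption growth (r = ρ + ηg). The sketch below, in Python, uses purely illustrative parameter values and damage figures, not the IWG’s.

```python
# A minimal sketch of a growth-linked discount rate (Ramsey rule:
# r = rho + eta * g). All parameter and damage values are illustrative
# assumptions for this post, not the IWG's inputs.

def present_value(damages, years_ahead, rho=0.01, eta=1.5, growth=0.02):
    """Discount future dollar damages back to the present."""
    r = rho + eta * growth  # richer future (higher g) -> higher r
    return damages / (1.0 + r) ** years_ahead

# Suppose $1 trillion of climate damages arrive roughly 85 years out:
pv_rich_future = present_value(1e12, 85, growth=0.03)  # fast growth
pv_poor_future = present_value(1e12, 85, growth=0.01)  # slow growth

print(f"PV assuming a rich future: ${pv_rich_future / 1e9:.0f} billion")
print(f"PV assuming a poor future: ${pv_poor_future / 1e9:.0f} billion")
# Identical future damages are worth far less today when the future is
# rich, so a growth-linked discount rate pushes the SCC down for
# affluent futures and up for impoverished ones -- the correction the
# paragraph above calls for.
```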

Until (if ever) the current situation is properly rectified, the federal government’s determination of the SCC is not fit for use in the federal regulatory process, as it is deceitful and misleading.

Tiger got to hunt

Bird got to fly

Man got to sit and wonder why, why, why.

 

Tiger got to sleep

Bird got to land

Man got to tell himself he understand.

                                                                -Foma in a Bokonon’s Calypso

References:

Nordhaus, W., 2010. Economic aspects of global warming in a post-Copenhagen environment. Proceedings of the National Academy of Sciences, 107(26), 11721-11726.

Pindyck, R. S., 2013. Climate Change Policy: What Do the Models Tell Us? Journal of Economic Literature, 51(3), 860-872.

Categories: Policy Institutes

FSOC’s Failing Grade?

Cato Op-Eds - Thu, 04/03/2014 - 17:24

Louise Bennetts

All the recent hype over the legitimacy of high-frequency trading has overshadowed another significant event. In a speech in Washington, DC, yesterday, Securities and Exchange Commissioner Luis Aguilar made some fairly strong statements about the recent actions of the Financial Stability Oversight Council (FSOC). The speech was significant because it is the first time a Democratic commissioner has criticized the actions of one of the Dodd-Frank Act’s most controversial creations. (To date, the criticism of the Council emanating from the Commission has been levied by the two Republican commissioners, and we all know that Republicans don’t much like Dodd-Frank.) Indeed, Commissioner Aguilar’s statements indicate just how fractured and fragmented the post-Dodd-Frank “systemic risk monitoring” system is.

At issue is the FSOC’s recent foray into the regulation of the mutual fund industry. Commissioner Aguilar described the Council’s actions as “undercut(ting)” the traditional authority of the Securities and Exchange Commission and characterized the report by the FSOC’s research arm (the Office of Financial Research) as “receiv(ing) near universal criticism.”

He went on to note that “the concerns voiced by commenters and lawmakers raise serious questions about whether the OFR’s report provides (an) adequate basis for the FSOC to designate asset managers as systemically important…and whether OFR is up to the tasks called for by its statutory mandate.”

For those of us who have been following this area for a while, the answer to the latter question is clearly a resounding “no.” The Council claims legitimacy because the heads of all the major financial regulatory agencies are represented on its Board. Yet it has long been clear that the Council is mostly off on a frolic of its own.

Commissioner Aguilar notes that the SEC staff has “no input or influence into” the FSOC or OFR processes and that the Council paid scant regard to the expertise or industry knowledge of the traditional regulators. Indeed, the Council’s preliminary actions in determining whether to “designate” mutual funds as systemic echo its actions in the lead-up to the designation of several insurance firms. It should be remembered that the only member of the Board to vote against the designation of insurance powerhouse Prudential as a “systemic nonbank financial company” was Roy Woodall. He is also the only Board member with any insurance industry experience. And in the case of mutual funds and asset managers, the quality of the information informing the Council’s decisions, in the form of the widely ridiculed OFR study, is even weaker. The process Commissioner Aguilar describes, in which traditional regulatory agencies must merely rubber-stamp decisions made by the FSOC staff, is untenable (in part because the FSOC staff itself has no depth of experience, financial or otherwise).

Commissioner Aguilar’s comments could be viewed as the beginning of the regulatory turf war that was an inevitable outcome of Dodd-Frank’s overbroad and contradictory mandates to competing regulators. But the numerous and well-documented problems with the very concept of the Financial Stability Oversight Council mean it is time for Congress to pay attention to Commissioner Aguilar’s comments and rein in the FSOC’s excessive powers.

Categories: Policy Institutes

No, There Are NOT Three Job Seekers for Every Job Opening

Cato Op-Eds - Thu, 04/03/2014 - 14:32

Alan Reynolds

Unemployment benefits could continue for up to 73 weeks until this year, thanks to “emergency” federal grants, but only in states with unemployment rates above 9 percent. That gave the long-term unemployed a perverse incentive to stay in high-unemployment states rather than move to places with more opportunities.

Before leaving the White House recently, former Presidential adviser Gene Sperling had been pushing Congress to reenact “emergency” benefits for the long-term unemployed.  That was risky political advice for congressional Democrats, ironically, because it would significantly increase the unemployment rate before the November elections.  That may explain why congressional bills only restore extended benefits through May or June.

Sperling argued in January, “Most of the people are desperately looking for jobs. You know, our economy still has three people looking for every job (opening).” PolitiFact declared that statement true. But it is not true.

The “Job Openings and Labor Turnover Survey” (JOLTS) from the Bureau of Labor Statistics does not begin to measure “every job (opening).” JOLTS asks 16,000 businesses how many new jobs they are actively advertising outside the firm. That is comparable to the Conference Board’s index of help-wanted advertising, which found almost 5.2 million jobs advertised online in February.

With nearly 10.5 million unemployed and 5.2 million job ads, one might conclude that our economy has “two people looking for every job (opening)” rather than three. But that would also be false, because no count of advertised jobs can possibly gauge all available jobs.

Consider this: The latest JOLTS survey says “there were 4.0 million job openings in January,” but “there were 4.5 million hires in January.”  If there were only 4.0 million job openings, how were 4.5 million hired?   Because the estimated measure of “job openings” was ridiculously low. It always is.
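For the arithmetic-minded, here is a quick check in Python of the ratios at stake, using the approximate early-2014 figures quoted above.

```python
# Back-of-the-envelope check of the seekers-per-"opening" ratios,
# using the approximate figures quoted in the text.
unemployed = 10.5e6      # unemployed workers
jolts_openings = 4.0e6   # JOLTS "job openings" (advertised vacancies)
online_ads = 5.2e6       # Conference Board online help-wanted ads
hires = 4.5e6            # actual hires in the same month (JOLTS)

print(f"Seekers per JOLTS opening: {unemployed / jolts_openings:.1f}")  # ~2.6
print(f"Seekers per online ad:     {unemployed / online_ads:.1f}")      # ~2.0
print(f"Hires minus 'openings':    {hires - jolts_openings:,.0f}")      # 500,000

# If monthly hires routinely exceed measured "openings," the openings
# series undercounts available jobs, and any seekers-per-opening ratio
# built on it overstates the shortage of jobs.
```

The point is not the precise ratio but that the denominator is systematically too small.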

The table shows that from 2004 to 2013, a million more people were hired every month (4.6 million a month) than the alleged number of “job openings” (3.6 million).

In years of high unemployment, such as 2004 and 2009, the gap between hires and ads reached 1.4 million – mainly because of reduced advertising rather than reduced hiring.  The well-known cyclicality of help-wanted ads makes such ads a terrible measure of job opportunities.  The number of advertised job openings always falls sharply when unemployment is high – because employers can fill most jobs without advertising.   As Stanford economist Robert Hall explains, employers “see there are all kinds of highly qualified people out there they can hire easily, so they don’t need to do a lot of recruiting—people are pounding on the door.”  This is one reason the number of help-wanted ads tells us little about the number of available jobs.

“Many jobs are never advertised,” explained a recent BLS Occupational Outlook Handbook; “People get them by talking to friends, family, neighbors, acquaintances, teachers, former coworkers, and others who know of an opening.”  Note that because many jobs are never advertised they are also never counted as “job openings” by JOLTS. 

The BLS Handbook adds that, “Directly contacting employers is one of the most successful means of job hunting.”  Executive employment counselor Paul Bernard likewise warns against the “reactive approach” of looking for job ads; “Instead, take a proactive approach. Spend at least 75 percent of your time networking — online and in person — with people in fields you want to work in. These days, most good jobs come through personal networking.” 

Yet jobs that are acquired through the initiative of job seekers are not counted as job openings by JOLTS. New job openings filled from within firms are likewise excluded from this constricted concept of job opportunities. Even the common practice of rehiring previously laid-off employees when business picks up does not count toward “job openings” in JOLTS, because it requires only a letter or phone call, not advertising.

The JOLTS data on hiring prove that the JOLTS data on the number of help-wanted ads are useless as a measure of “job openings.” Paul Krugman, Gene Sperling, PolitiFact, and others who repeatedly claim that these ill-defined JOLTS estimates demonstrate three job seekers for every available job are completely wrong.

Categories: Policy Institutes

IRS Shouldn't Force Taxpayers Into Tax-Maximizing Transactions

Cato Op-Eds - Thu, 04/03/2014 - 13:24

Ilya Shapiro

While tax evasion is a crime, the Supreme Court has long recognized that taxpayers have a legal right to reduce how much they owe, or avoid taxes altogether, through careful tax planning. Whether that planning takes the form of an employee’s deferring income into a pension plan, a couple’s filing a joint return, a homeowner’s spreading improvement projects over several years, or a business’s spinning off subsidiaries, so long as the actions are otherwise lawful, the fact that they were motivated by a desire to lessen one’s tax burden doesn’t render them illegitimate.

The major limitation that the Court (and, since 2010, Congress) has placed on tax planning is the “sham transaction” rule (also known as the “economic substance” doctrine), which, in its simplest form, provides that a transaction solely intended to lessen a commercial entity’s tax burden, with no other valid business purpose, will be held to have no effect on that entity’s income-tax assessment. The classic sham transaction is a deal in which a corporation structures a series of transactions between its subsidiaries, producing an income loss on paper that is then used to lower the parent company’s profits (and thus its tax bill) without reducing the value of the assets held by the commercial entity as a whole.
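To make the mechanics concrete, here is a toy sketch in Python. Every figure in it (the profit, the paper loss, the tax rate) is invented for illustration and has nothing to do with the facts of any actual case.

```python
# Purely illustrative figures: how an intra-group paper loss lowers the
# parent's tax bill while the group's real asset value is untouched.
taxable_profit = 100e6   # parent's profit before the shuffle
paper_loss = 30e6        # loss created by deals between subsidiaries
tax_rate = 0.35          # assumed corporate rate

tax_before = taxable_profit * tax_rate
tax_after = (taxable_profit - paper_loss) * tax_rate
print(f"Tax saved: ${tax_before - tax_after:,.0f}")  # Tax saved: $10,500,000
# Only reported income changed, not the group's assets -- which is why
# the doctrine treats such deals as shams.
```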

We might quibble with a rule that effectively nullifies perfectly legal transactions, but a recent decision by the U.S. Court of Appeals for the Eighth Circuit greatly expanded the existing definition of “economic substance,” muddying the line between lawful tax planning and illicit tax evasion. At issue was Wells Fargo’s creation of a new non-banking subsidiary to take over certain unprofitable commercial leases. Because the new venture wasn’t a bank, it wasn’t subject to the same stringent regulations as its parent company. As a result, the holding company (WFC Holdings Corp.) was able to generate tens of millions of dollars in profits.

Despite the very real economic gains to the subsidiary (and the parent), the Eighth Circuit held that the set-up was a sham because not every individual component of the restructuring produced a substantial economic benefit or was justified by a non-tax-related business purpose. In effect, the court created a new rule by which a deal with an unquestionable business purpose can still be declared a sham if the companies involved choose to structure it to be as tax-efficient as possible.

Joined by the U.S. Chamber of Commerce and the Financial Services Roundtable, Cato has filed an amicus brief supporting Wells Fargo’s petition for Supreme Court review. We present three arguments: (1) The Eighth Circuit’s ruling, which conflicts with those of its sister circuits, adds to the general confusion surrounding the economic substance doctrine—such that even the most sophisticated taxpayers, assisted by teams of lawyers and accountants, can’t predict how much tax they will owe at the end of the year. (2) This confusion creates substantial burdens for businesses and consumers, without any benefit to the economy. Legal uncertainty causes companies to shy away from complex deals and to waste millions of dollars on tax planning and litigation, money that could be better spent on growing their businesses (creating new jobs) or R&D (creating better products for consumers). (3) Taken to its logical extreme, the lower court’s rule requires taxpayers with a valid business purpose to select the transaction that fulfills that purpose with the highest possible tax liability. That rule is profoundly unwise and contrary to precedent, which holds that “[a taxpayer’s] legal right … to decrease the amount of what otherwise would be his taxes, or altogether avoid them, by means which the law permits, cannot be doubted.”

The Supreme Court hasn’t revisited the economic substance doctrine in nearly 75 years, so it’s high time it provided some clarity. The Court will likely decide whether to review WFC Holdings Corp. v. United States by the time it recesses for the summer; if it takes the case, oral argument will be in late fall.

This blogpost was co-authored by Cato legal associate Gabriel Latner.

Categories: Policy Institutes
