Feed aggregator

The Golden Rule of Spending Restraint

Cato Op-Eds - Mon, 04/07/2014 - 12:46

Daniel J. Mitchell

My tireless (and probably annoying) campaign to promote my Golden Rule of spending restraint is bearing fruit.

The good folks at the editorial page of the Wall Street Journal allowed me to explain the fiscal and economic benefits that accrue when nations limit the growth of government.

Here are some excerpts from my column, starting with a proper definition of the problem.

What matters, as Milton Friedman taught us, is the size of government. That’s the measure of how much national income is being redistributed and reallocated by Washington. Spending often is wasteful and counterproductive whether it’s financed by taxes or borrowing.

So how do we deal with this problem?

I’m sure you’ll be totally shocked to discover that I think the answer is spending restraint.

More specifically, governments should be bound by my Golden Rule.

Ensure that government spending, over time, grows more slowly than the private economy. …Even if the federal budget grew 2% each year, about the rate of projected inflation, that would reduce the relative size of government and enable better economic performance by allowing more resources to be allocated by markets rather than government officials.
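The arithmetic behind the rule is easy to check. In the sketch below, every number (starting spending, starting GDP, the growth rates) is a hypothetical chosen for illustration, not a figure from the column; only the mechanism matters:

```python
# Hypothetical illustration of the Golden Rule: if spending grows more
# slowly than the economy, government's share of GDP shrinks over time.
# All starting values and growth rates are assumptions for illustration.

spending = 3.5e12       # assumed government spending, in dollars
gdp = 17.0e12           # assumed GDP, in dollars
spending_growth = 0.02  # spending capped at 2% per year
gdp_growth = 0.04       # assumed nominal economic growth

share_start = spending / gdp
for _ in range(10):     # simulate a decade of restraint
    spending *= 1 + spending_growth
    gdp *= 1 + gdp_growth
share_end = spending / gdp

print(f"spending share of GDP: {share_start:.1%} -> {share_end:.1%}")
# -> spending share of GDP: 20.6% -> 17.0%
```

No cuts are required in this sketch; spending rises every year, yet the relative size of government falls simply because the denominator grows faster.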

I list several reasons why Mitchell’s Golden Rule is the only sensible approach to fiscal policy.

A golden rule has several advantages over fiscal proposals based on balanced budgets, deficits or debt control. First, it correctly focuses on the underlying problem of excessive government rather than the symptom of red ink. Second, lawmakers have the power to control the growth of government spending. Deficit targets and balanced-budget requirements put lawmakers at the mercy of economic fluctuations that can cause large and unpredictable swings in tax revenue. Third, spending can still grow by 2% even during a downturn, making the proposal more politically sustainable.

The last point, by the way, is important because it may appeal to reasonable Keynesians, which in turn makes the Rule more politically sustainable.

I then provide lots of examples of nations that enjoyed great success by restraining spending. But rather than regurgitate several paragraphs from the column, here’s a table I prepared that wasn’t included in the column because of space constraints.

It shows the countries that restrained spending and the years that they followed the Golden Rule. Then I include three columns of data. First, I show how fast spending grew during the period, followed by numbers showing what happened to the overall burden of government spending and the change to annual government borrowing.

Last but not least, I deal with the one weakness of Mitchell’s Golden Rule. How do you convince politicians to maintain fiscal discipline over time?

I suggest that Switzerland’s “debt brake” may be a good model.

Can any government maintain the spending restraint required by a fiscal golden rule? Perhaps the best model is Switzerland, where spending has climbed by less than 2% per year ever since a voter-imposed spending cap went into effect early last decade. And because economic output has increased at a faster pace, the Swiss have satisfied the golden rule and enjoyed reductions in the burden of government and consistent budget surpluses.

In other words, don’t bother with balanced budget requirements that might backfire by giving politicians an excuse to raise taxes.

If the problem is properly defined as being too much government, then the only logical answer is to shrink the burden of government spending.

Last but not least, I point out that Congressman Kevin Brady of Texas has legislation, the MAP Act, that is somewhat similar to the Swiss Debt Brake.

We know what works and we know how to get there. The real challenge is convincing politicians to bind their own hands.

Categories: Policy Institutes

Education, Standards, and Private Certification

Cato Op-Eds - Mon, 04/07/2014 - 12:03

Jason Bedrick

Can there be standards in education without the government imposing them?

Too many education policy wonks, including some with a pro-market bent, take it for granted that standards emanate solely from the government. But that does not have to be the case. Indeed, the lack of a government-imposed standard leaves space for competing standards. As a result of market incentives, these standards are likely to be higher, more diverse, more comprehensive, and more responsive to change than the top-down, one-size-fits-all standards that governments tend to impose. I explain why this is so at Education Next today in “What Education Reformers Can Learn from Kosher Certification.”

Categories: Policy Institutes

Hungary's Slide Towards Authoritarianism

Cato Op-Eds - Mon, 04/07/2014 - 11:20

Dalibor Rohac

Yesterday’s general election in Hungary has given Viktor Orbán’s party, Fidesz, a very comfortable majority in the Hungarian Parliament, while strengthening the openly racist Jobbik party, which earned over 21 percent of the popular vote. Neither of these outcomes is good news for Hungarians or for Central Europe as a whole.

In the 1990s, Hungary was among the most successful transitional economies of Central and Eastern Europe. With significant exposure to markets in the final years of the Cold War and a political establishment committed to reforms, it was often singled out as an example of what a successful, sustained transition toward markets and democracy should look like.

In 2014, the situation could not be more different. Hungary’s economic policies have become increasingly populist and haphazard, as the government has confiscated the assets of private pension funds, undermined the independence of the central bank, and botched the consolidation of the country’s public finances (p. 77). Worse yet, Hungary has seen a growth of nationalist and anti-Semitic sentiments which have not been adequately countered by the country’s political elites. In a recent column, I wrote about Mr. Orbán’s personal responsibility for the disconcerting political and economic developments in Hungary:

Mr. Orbán’s catering to petty nationalism often borders on selective amnesia about certain parts of Hungarian history. Recently the Federation of Hungarian Jewish Communities, the Mazsihisz, announced it would not take part in the Orbán government’s Holocaust commemorations. According to the Mazsihisz, the framing of the ceremonies whitewashes the role that the Hungarian government played and focuses exclusively on the crimes perpetrated by the Germans—despite the fact that Hungary adopted its first anti-Jewish laws as early as 1938.

Mr. Orbán’s tone-deafness when it comes to historical symbols goes hand in hand with a concerted effort to undermine the foundations of liberal democracy and rule of law in Hungary. Since Mr. Orbán came to office four years ago, Fidesz has consolidated its political power and used it to pass controversial legislation tightening media oversight, as well as constitutional changes that curb judicial power and restrict political advertising, among other measures.

Categories: Policy Institutes

FEMA Disaster Declarations

Cato Op-Eds - Fri, 04/04/2014 - 16:34

Chris Edwards

I am writing a study on the Federal Emergency Management Agency (FEMA) and looking at the issue of presidential disaster declarations. Under the 1988 Stafford Act, a state governor may request that the president declare a “major disaster” in the state if “the disaster is of such severity and magnitude that effective response is beyond the capabilities of the state and the affected local governments.”

The main purpose of declarations is to impose on federal taxpayers the relief and rebuilding costs that would otherwise fall on state and local taxpayers and individuals in the affected area. Federalism is central to disaster planning and response in the United States, and federal aid is only supposed to be for the most severe events. Unfortunately, the relentless political forces that are centralizing power in just about every policy area are also doing so in disaster policy.

Below is a chart of FEMA data showing the number of “major disasters” declared by presidents since 1974, when the current process was put in place. The number of declared disasters has soared as presidents have sought political advantage in handing out more aid. Presidents have been ignoring the plain language of the Stafford Act, which allows for aid only in the most severe situations.

In the chart, I marked with red bars the years that presidents ran for reelection. In those years, presidents have generally declared the most major disasters. That was true of Ronald Reagan in 1984, George H.W. Bush in 1992, and Bill Clinton in 1996. George W. Bush declared the most disasters of his first term in his reelection year of 2004. The two presidents who do not fit the pattern are Jimmy Carter and Barack Obama.

Categories: Policy Institutes

Minimum Wage Solidarity Misplaced

Cato Op-Eds - Fri, 04/04/2014 - 14:31

James A. Dorn

Senate Democrats are eager to bring the Minimum Wage Fairness Act (S. 1737) up for a vote to express their solidarity with “progressives.”  That solidarity, however, is misplaced. The bill is not a panacea for the prosperity of low-skilled workers; it is anti-free market and immoral—based on coercion, not consent.

The bill would increase the federal minimum wage to $10.10 after two years, index it for inflation, and increase the minimum for tipped workers.  Those changes would substantially increase the cost of hiring low-skilled workers, lead to job losses and unemployment (especially in the longer run as businesses shift to labor-saving methods of production), and slow job growth.

Although there is virtually no chance this bill would pass, Senate Majority Leader Harry Reid (D-Nev) wants it to come to the floor so he and his compatriots can express their support for low-income workers (and for unions and others who support the minimum wage increase) in an election year.  “Democrats are focused on the future,” says Reid, and “we were elected to improve people’s lives.” 

In a recent email, the Agenda Project Action Fund provided talking points and instructed recipients to contact their senators to bring the minimum wage issue to the floor for debate.  Erica Payne, founder of the Agenda Project, believes “a higher minimum wage is consistent with free market principles” and that conservatives should support the increase.  The talking points include assertions that “raising the minimum wage is a boon to business” and “will not lead to job loss.”

The Agenda Project’s goal is “to build a powerful, intelligent, well-connected political movement capable of identifying and advancing rational, effective ideas in the public debate and in so doing ensure our country’s enduring success.”  Those are admirable goals but the minimum wage is neither a rational nor effective means of attaining them.

Companies like the Gap and Costco, which have increased entry-level wages, do so because they expect those voluntary increases to be profitable in the long run.  Such actions are consistent with free market principles, unlike a minimum wage law that forces employers to pay more than the prevailing market wage and prevents workers from contracting for less than the legal minimum in order to retain or secure a job.

It is disingenuous to deny the law of demand: when a worker’s skill level and experience do not change and the government mandates a higher minimum wage that exceeds those workers’ productivity, employers will hire fewer workers.  Importantly, the negative impact on jobs for low-skilled workers will be stronger in the long run than the short run.  But politicians focus on the short run and argue that small increases in the minimum wage will not harm jobs.

Rationality depends on taking the long view and using the logic of the market to analyze the impact of the minimum wage and other policies.  Common sense and a massive amount of empirical evidence show that raising the minimum wage is not an effective solution to poverty or unemployment. The minimum wage rhetoric is one thing, reality another.

In a recent Tax & Budget Bulletin for the Cato Institute, noted labor economist Joseph J. Sabia of San Diego State University presents a strong body of evidence that minimum wage increases adversely affect employment opportunities for lower-skilled workers, and that those who do benefit come mostly from non-poor households. That evidence, supported by numerous peer-reviewed journal articles, stands in direct contrast to The Agenda Project’s contention that “since 1994, studies have found there is little to no evidence of employment reduction following minimum wage increases at both state and federal levels.”

The idea that a higher minimum wage will “drive the economy” and fuel economic growth is an illusion.  Workers must first produce more if they are to be paid more and keep their jobs.  A higher minimum wage is neither necessary nor sufficient for economic growth. Some workers would gain but others would lose, as would employers who are crowded out of the market or have fewer funds to invest, and consumers who have to pay higher prices. There is no free lunch.

The minimum wage redistributes a given economic pie, it doesn’t enlarge it.  The only way to “drive the economy” is to raise productivity, not the minimum wage.  The key factors that improve real economic growth—and lead to a higher standard of living—are institutional changes that safeguard persons and property, lower the costs of doing business, and encourage entrepreneurship.  Those institutions are endangered by the politicization of the labor market.

When New York State increased its minimum wage by 31 percent (from $5.15 an hour to $6.75) in 2004–06, the number of jobs open to younger, less-educated workers decreased by more than 20 percent, as Sabia, Richard Burkhauser, and Benjamin Hansen found in a landmark study published in the Industrial and Labor Relations Review in 2012.  An increase in the federal minimum wage from $7.25 an hour to $10.10 would no doubt have a similar impact.

In another study, Sabia and Burkhauser found “no evidence that minimum wage increases between 2003 and 2007 lowered state poverty rates” (Southern Economic Journal, January 2010). Those workers most apt to lose their jobs as a result of a higher minimum wage are from low-income households. Hence, an increase in the minimum wage can actually increase poverty.  As David Neumark, Mark Schweitzer, and William Wascher noted in the Journal of Human Resources (2005), “The net effect of higher minimum wages is …  to increase the proportion of families that are poor and near-poor.”

Proponents of the higher minimum wage downplay those adverse consequences and point to widespread public support—and politicians like nothing better than polls to guide their agendas. The public supports higher minimum wages because they haven’t thought about the longer-run consequences. Most polls simply ask, “Are you in favor of a higher minimum wage?” without saying anything about the loss of jobs and unemployment that will occur.  When those costs are taken into account the majority swings against an increase in the minimum wage, as shown by Emily Ekins.

When legislators mandate a minimum wage above the market wage determined by demand and supply, they deprive workers and employers of the freedom of contract that lies at the heart of a dynamic market economy.  The wealth of a nation is not enhanced by prohibitions on free trade—whether in product, labor, or capital markets.  People should be free to choose and improve. If low-skilled workers can’t find a job at the minimum wage, they won’t have the opportunity to fully develop themselves and move up the income ladder.

Groups that are pushing for a higher minimum wage may have good intentions but they discount—or fail to understand—the longer-run adverse effects of that legislation on freedom and prosperity.  They only look at those who may benefit from a higher minimum wage, including union members, while downplaying the inevitable shift to labor-saving technology that will occur over time and the jobs that will never be created.

Those who argue that there is a moral case for a higher minimum wage seem to think that using the force of government/law to mandate wage rates that are greater than those freely negotiated in markets is both “fair” and “just.”  Yet the minimum wage by its very nature interferes with freedom of contract and, in that sense, is unjust. Moreover, it prevents mutually beneficial exchanges. A young worker with little education and few job skills who is willing to work at less than the minimum wage to get a job and gain experience is prevented from doing so.  How can that be “fair?”

Instead of solidarity, minimum wage proponents create dissension when workers find that prosperity cannot be created by a stroke of the legislative pen.  Politicians may promise a higher wage rate to low-skilled workers, but for those workers who lose their jobs, incomes will be zero.  Even the CBO estimates that the Obama promise of $10.10 an hour, if implemented, would mean roughly 500,000 fewer jobs for those the law is intended to help.

The Minimum Wage Fairness Act is the wrong medicine for improving the plight of low-income families and creating a prosperous nation. Poverty is not abolished by legislative fiat.  Rather, the path toward economic growth and well-being is paved with genuine free markets and limited government, and by thinking in terms of long-run effects of current legislation, not short-term benefits to special interests.

Categories: Policy Institutes

The Current Wisdom: The Administration’s Social Cost of Carbon Turns “Social Cost” on its Head

Cato Op-Eds - Thu, 04/03/2014 - 17:43

Paul C. "Chip" Knappenberger and Patrick J. Michaels

This Current Wisdom takes an in-depth look at how politics can masquerade as science.

                      “A pack of foma,” Bokonon said

                                                Paraphrased from Cat’s Cradle (1963), Kurt Vonnegut

In his 1963 classic, Cat’s Cradle, iconic writer Kurt Vonnegut described the sleepy little Caribbean island of San Lorenzo, where the populace was mesmerized by the prophet Bokonon, who created his own religion and his own vocabulary. Bokonon communicated his religion through simple verses he called “calypsos.” “Foma” are half-truths that conveniently serve the religion, and the paraphrase above is an apt description of the Administration’s novel approach to determining the “social cost of carbon” (dioxide). 

In the face of withering criticism, the Office of Management and Budget (OMB) is now reviewing how the Obama Administration calculates and uses the social cost of carbon (SCC).  We have filed a series of Comments with the OMB outlining what is wrong with the current SCC determination. Regular readers of this blog are familiar with some of the problems that we have identified, but our continuing analysis of the Administration’s SCC has yielded a few more nuggets.

We describe a particularly rich one here—that the government wants us to pay more today to offset a modest climate change experienced by a wealthy future society than to help alleviate a lot of climate change impacting a less well-off future world.

This kind of logic might be applied by Bokonon on San Lorenzo, but here in sophisticated Washington? It is exactly the opposite of what a rational-thinking person would expect. In essence, the Obama Administration has turned the “social cost” of the social cost of carbon on its head.  The text below, describing this counterintuitive result, is adapted from our most recent set of Comments to the OMB.

The impetus behind efforts to determine the “social cost of carbon” is generally taken to be the desire to quantify the “externalities,” or unpaid future costs, that result from climate changes produced by anthropogenic emissions of carbon dioxide and other greenhouse gases. Such information could be a useful tool in guiding policy decisions regarding greenhouse gas emissions—if it were robust and reliable.

However, as is generally acknowledged, the results of such efforts (generally through the development of Integrated Assessment Models, IAMs) are highly sensitive not only to the model input parameters but also to how the models have been developed and what processes they try to include. One prominent economist, Robert Pindyck of M.I.T., recently wrote (Pindyck, 2013) that the sensitivity of the IAMs to these factors renders them useless in a policymaking environment:

Given all of the effort that has gone into developing and using IAMs, have they helped us resolve the wide disagreement over the size of the SCC? Is the U.S. government estimate of $21 per ton (or the updated estimate of $33 per ton) a reliable or otherwise useful number? What have these IAMs (and related models) told us? I will argue that the answer is very little. As I discuss below, the models are so deeply flawed as to be close to useless as tools for policy analysis. Worse yet, their use suggests a level of knowledge and precision that is simply illusory, and can be highly misleading.

…[A]n IAM-based analysis suggests a level of knowledge and precision that is nonexistent, and allows the modeler to obtain almost any desired result because key inputs can be chosen arbitrarily.

Nevertheless (or perhaps because of this), the federal government has now incorporated IAM-based determinations of the SCC into many types of new and proposed regulations. 

The social cost of carbon should simply be the fiscal impact on future society that human-induced climate change from greenhouse gas emissions will impose. Knowing this, we (policymakers and regular citizens) can decide how much (if at all) we are willing to pay currently to reduce the costs to future society.

Logically, we are probably willing to sacrifice more now if we know that future society will be impoverished and suffer from extreme climate change than if we know that the future holds minor or modest climate changes impacting a society that will be very well off.  We would expect the value of the social cost of carbon to reflect the difference between these two hypothetical future worlds—the SCC should be far greater in an impoverished future facing a high degree of climate change than in an affluent future confronted with much less climate change.

But if you thought this, you would be wrong. This is Bokononism.

Instead, the IAMs, as run by a federal “Interagency Working Group” (IWG), produce nearly the opposite result—that is, the SCC is far lower in the less affluent/high climate change future than it is in the more affluent/low climate change future. Bokonon says it is so.

We illustrate this illogical and impractical result using one of the Integrated Assessment Models used by the IWG—a model called the Dynamic Integrated Climate-Economy model (DICE) that was developed by Yale economist William Nordhaus. The DICE model was installed and run at the Heritage Foundation by Kevin Dayaratna and David Kreutzer using the same model set up and emissions scenarios as prescribed by the IWG. The DICE projections of future temperature change were provided to us by the Heritage Foundation.

Contrary to Einstein’s dictum, Bokonon does throw DICE. Figure 1 shows DICE projections of the earth’s average surface temperature for the years 2000-2300 produced by using the five different scenarios of how future societies develop (and emit greenhouse gases). 

Disregard the fact that anyone who thinks we can forecast the future state of society 100 years from now (much less 300) is handing us a pack of foma.  Heck, 15 years ago everyone knew the world was running out of natural gas.   

The numerical values on the right-hand side of the illustration are the values for the social cost of carbon associated with the temperature change resulting from each emissions scenario (the SCC is reported for the year 2020 using constant $2007 and assuming a 3% discount rate).  The temperature change can be considered a good proxy for the magnitude of the overall climate change impacts.


Figure 1. Future temperature changes, for the years 2000-2300, projected by the DICE model for each of the five emissions scenarios used by the federal Interagency Working Group. The temperature changes are the arithmetic average of the 10,000 Monte Carlo runs from each scenario. The 2020 value of the SCC (in $2007) produced by the DICE model (assuming a 3% discount rate) is included on the right-hand side of the figure. (DICE data provided by Kevin Dayaratna and David Kreutzer of the Heritage Foundation).

Notice in Figure 1 that the value of the SCC shows little (if any) correspondence to the magnitude of climate change. The MERGE scenario produces the greatest climate change and yet has the smallest SCC associated with it. The “5th Scenario” holds climate change to a minimum by imposing strong greenhouse gas emissions limitations, yet it has an SCC more than 20% greater than the MERGE scenario’s.  The global temperature change by the year 2300 in the MERGE scenario is 9°C, while in the “5th Scenario” it is only 3°C. The highest SCC comes from the IMAGE scenario—a scenario with mid-range climate change. All of this makes sense only to Bokonon.

If the SCC bears little correspondence to the magnitude of future human-caused climate change, then what does it represent?

Figure 2 provides some insight.

Figure 2. Future global gross domestic product, for the years 2000-2300 for each of the five emissions scenarios used by the federal Interagency Working Group. The 2020 value of the SCC (in $2007) produced by the DICE model (assuming a 3% discount rate) is included on the right-hand side of the figure.

When comparing future global gross domestic product (GDP) to the SCC, we see that, generally, the scenarios with higher future GDP (the most affluent future societies) have higher SCC values, while the futures with lower GDP (less affluent societies) have lower SCC values.

Combining the results from Figures 1 and 2 thus illustrates the absurdities in the federal government’s (er, Bokonon’s) use of the DICE model. The scenario with the richest future society and a modest amount of climate change (IMAGE) has the highest value of the SCC associated with it, while the scenario with the poorest future society and the greatest degree of climate change (MERGE) has the lowest value of the SCC. Only Bokononists can understand this.

This counterintuitive result occurs because the damage functions in the IAMs produce output in terms of a percentage decline in the GDP—which is then translated into a dollar amount (which is divided by the total carbon emissions) to produce the SCC. Thus, even a small climate change-induced percentage decline in a high GDP future yields greater dollar damages (i.e., higher SCC) than a much greater climate change-induced GDP percentage decline in a low GDP future.
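The mechanics just described can be sketched in a few lines. Every number below—future GDP, damage fraction, cumulative emissions—is invented for illustration and is not an IWG or DICE figure; the point is only how a percentage-of-GDP damage function ties the SCC to future affluence:

```python
# Toy version of the percentage-of-GDP damage logic described above.
# A fixed fraction of future GDP is converted to dollars and divided by
# tons of carbon emitted. All inputs are hypothetical.

def scc(gdp_future, damage_fraction, tons_carbon):
    """Dollar damages per ton of carbon, given a %-of-GDP damage estimate."""
    return gdp_future * damage_fraction / tons_carbon

# Rich future with a small relative damage vs. poor future with a
# larger relative damage, from the same cumulative emissions.
rich = scc(gdp_future=500e12, damage_fraction=0.01, tons_carbon=1e12)  # $5/ton
poor = scc(gdp_future=100e12, damage_fraction=0.03, tons_carbon=1e12)  # $3/ton
```

Even though the poor future suffers three times the percentage loss, the rich future yields the higher SCC—exactly the inversion described above.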

Only Bokonon would want to spend (sacrifice) more today to help our rich descendants deal with a lesser degree of climate change than to help our relatively less-well-off descendants deal with a greater degree of climate change.

Yet that is what the government’s SCC would lead you to believe and that is what the SCC implies when it is incorporated into federal cost/benefit analyses.

In principle, the way to handle this situation is by allowing the discount rate to change over time. In other words, the richer we think people will be in the future (say, the year 2100), the higher the discount rate we should apply to the damages (measured in 2100 dollars) they suffer from climate change, in order to decide how much we should be prepared to sacrifice today on their behalf.
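A toy present-value calculation shows the effect. The damage figure and horizon below are assumptions; 3% is the discount rate the IWG uses as its central case in this post's figures, and 5% simply stands in for a higher rate appropriate to a richer future:

```python
# Toy present-value calculation: identical future damages discounted at
# two rates. A higher rate (for a richer future) implies sacrificing
# less today. The damage amount and horizon are purely illustrative.

def present_value(damage, years, rate):
    return damage / (1 + rate) ** years

damage_2100 = 1e12   # assumed damages, in 2100 dollars
years_ahead = 86     # roughly 2014 to 2100

pv_low = present_value(damage_2100, years_ahead, 0.03)   # ~ $79 billion
pv_high = present_value(damage_2100, years_ahead, 0.05)  # ~ $15 billion
```

Raising the rate from 3% to 5% cuts today's implied sacrifice by roughly a factor of five—which is why letting the discount rate rise with future wealth would undo the inversion described above.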

Until (if ever) the current situation is properly rectified, the federal government’s determination of the SCC is not fit for use in the federal regulatory process as it is deceitful and misleading.

Tiger got to hunt

Bird got to fly

Man got to sit and wonder why, why, why.


Tiger got to sleep

Bird got to land

Man got to tell himself he understand.

                                                                From one of Bokonon’s calypsos


Nordhaus, W. 2010. Economic aspects of global warming in a post-Copenhagen environment. Proceedings of the National Academy of Sciences 107(26): 11721-11726.

Pindyck, R. S., 2013. Climate Change Policy: What Do the Models Tell Us? Journal of Economic Literature, 51(3), 860-872.

Categories: Policy Institutes

FSOC’s Failing Grade?

Cato Op-Eds - Thu, 04/03/2014 - 17:24

Louise Bennetts

All the recent hype over the legitimacy of high-frequency trading has overshadowed another significant event. In a speech in Washington, D.C., yesterday, Securities and Exchange Commissioner Luis Aguilar made some fairly strong statements about the recent actions of the Financial Stability Oversight Council (FSOC). The speech was significant because it is the first time a Democratic Commissioner has criticized the actions of one of the Dodd-Frank Act’s most controversial creations. (To date, the criticism of the Council emanating from the Commission has been levied by the two Republican Commissioners, and we all know that Republicans don’t much like Dodd-Frank.) Indeed, Commissioner Aguilar’s statements indicate just how fractured and fragmented the post-Dodd-Frank “systemic risk monitoring” system is.

At issue is the FSOC’s recent foray into the regulation of the mutual fund industry. Commissioner Aguilar described the Council’s actions as “undercut(ting)” the traditional authority of the Securities and Exchange Commission and described the report by the FSOC’s research arm (the Office of Financial Research) as “receiv(ing) near universal criticism.”

He went on to note that “the concerns voiced by commenters and lawmakers raise serious questions about whether the OFR’s report provides (an) adequate basis for the FSOC to designate asset managers as systemically important…and whether OFR is up to the tasks called for by its statutory mandate.”

For those of us who have been following this area for a while, the answer to the latter question is clearly a resounding “no”. The Council claims legitimacy because the heads of all the major financial regulatory agencies are represented on its Board. Yet it has been clear for a while that the Council has been mostly off on a frolic of its own.

Commissioner Aguilar notes that the SEC staff has “no input or influence into” the FSOC or OFR processes and that the Council paid scant regard to the expertise or industry knowledge of the traditional regulators. Indeed, the preliminary actions of the Council in determining whether to “designate” mutual funds as systemic echo the Council’s actions in the lead-up to the designation of several insurance firms. It should be remembered that the only member of the Board to vote against designating the insurance powerhouse Prudential a “systemic nonbank financial company” was Roy Woodall. He is also the only Board member with any insurance industry experience. And in the case of mutual funds and asset managers, the quality of the information informing the Council’s decisions (in the form of the widely ridiculed OFR study) is even weaker. The process Commissioner Aguilar describes, in which traditional regulatory agencies must merely rubber-stamp decisions made by the FSOC staff, is untenable (in part because the FSOC staff itself has no depth of experience, financial or otherwise).

Commissioner Aguilar’s comments could be viewed as the beginning of the regulatory turf war that was an inevitable outcome of Dodd-Frank’s overbroad and contradictory mandates to competing regulators. But the numerous and well-documented problems with the very concept of the Financial Stability Oversight Council mean it is time Congress paid some attention to Commissioner Aguilar’s comments and reined in the FSOC’s excessive powers.


Categories: Policy Institutes

No, There Are NOT Three Job Seekers for Every Job Opening

Cato Op-Eds - Thu, 04/03/2014 - 14:32

Alan Reynolds

Until this year, unemployment benefits could continue for up to 73 weeks, thanks to “emergency” federal grants, but only in states with unemployment rates above 9 percent.  That gave the long-term unemployed a perverse incentive to stay in high-unemployment states rather than move to places with more opportunities.

Before leaving the White House recently, former Presidential adviser Gene Sperling had been pushing Congress to reenact “emergency” benefits for the long-term unemployed.  That was risky political advice for congressional Democrats, ironically, because it would significantly increase the unemployment rate before the November elections.  That may explain why congressional bills only restore extended benefits through May or June.

Sperling argued in January that, “Most of the people are desperately looking for jobs. You know, our economy still has three people looking for every job (opening).”  PolitiFact declared that statement true.  But it is not true. 

The “Job Openings and Labor Turnover Survey” (JOLTS) from the Bureau of Labor Statistics does not begin to measure “every job (opening).”  JOLTS asks 16,000 businesses how many new jobs they are actively advertising outside the firm.  That is comparable to the Conference Board’s index of help wanted advertising, which found almost 5.2 million jobs advertised online in February.  

With nearly 10.5 million unemployed and 5.2 million job ads, one might conclude that our economy has two people looking for every job opening rather than three.  But that would also be false, because no estimate of advertised jobs can possibly gauge all available jobs.

Consider this: The latest JOLTS survey says “there were 4.0 million job openings in January,” but “there were 4.5 million hires in January.”  If there were only 4.0 million job openings, how were 4.5 million hired?   Because the estimated measure of “job openings” was ridiculously low. It always is.
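The arithmetic behind the competing ratios is simple enough to check directly. Here is a minimal sketch, using only the figures cited above (in millions; the rounding choices are ours):

```python
# Figures cited in the text, in millions (early 2014).
unemployed = 10.5         # total unemployed
jolts_openings = 4.0      # JOLTS "job openings," January
online_ads = 5.2          # Conference Board online help-wanted ads, February
jolts_hires = 4.5         # JOLTS hires, January

# The "three seekers per opening" claim rests on the JOLTS openings figure:
seekers_per_opening = unemployed / jolts_openings   # 2.625, rounded up to "three"

# Using advertised jobs instead yields roughly two seekers per ad:
seekers_per_ad = unemployed / online_ads            # about 2.0

# Hires exceeded "openings" in the same month, so the openings series
# understates actual job opportunities:
openings_gap = jolts_hires - jolts_openings         # 0.5 million

print(f"seekers per JOLTS opening: {seekers_per_opening:.2f}")
print(f"seekers per advertised job: {seekers_per_ad:.2f}")
print(f"hires in excess of openings: {openings_gap:.1f} million")
```

Either denominator undercounts available jobs, which is the point: the headline ratio depends entirely on which incomplete measure of openings one chooses.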

The Table shows that from 2004 to 2013 a million more people were hired every month (4.6 million a month) than the alleged number of “job openings” (3.6 million).   

In years of high unemployment, such as 2004 and 2009, the gap between hires and ads reached 1.4 million – mainly because of reduced advertising rather than reduced hiring.  The well-known cyclicality of help-wanted ads makes such ads a terrible measure of job opportunities.  The number of advertised job openings always falls sharply when unemployment is high – because employers can fill most jobs without advertising.   As Stanford economist Robert Hall explains, employers “see there are all kinds of highly qualified people out there they can hire easily, so they don’t need to do a lot of recruiting—people are pounding on the door.”  This is one reason the number of help-wanted ads tells us little about the number of available jobs.

“Many jobs are never advertised,” explained a recent BLS Occupational Outlook Handbook; “People get them by talking to friends, family, neighbors, acquaintances, teachers, former coworkers, and others who know of an opening.”  Note that because many jobs are never advertised they are also never counted as “job openings” by JOLTS. 

The BLS Handbook adds that, “Directly contacting employers is one of the most successful means of job hunting.”  Executive employment counselor Paul Bernard likewise warns against the “reactive approach” of looking for job ads; “Instead, take a proactive approach. Spend at least 75 percent of your time networking — online and in person — with people in fields you want to work in. These days, most good jobs come through personal networking.” 

Yet jobs acquired through the initiative of job seekers are not counted as job openings by JOLTS.   New job openings within firms are also excluded from this constricted concept of job opportunities.  Even the common practice of rehiring previously laid-off employees when business picks up does not count as “job openings” in JOLTS, because it requires only a letter or phone call, not advertising.

The JOLTS data on hiring prove that the JOLTS data on the number of help-wanted ads are useless as a measure of “job openings.”   Paul Krugman, Gene Sperling, PolitiFact, and others who repeatedly claim that these ill-defined JOLTS estimates demonstrate there are three job seekers for every available job are completely wrong.

Categories: Policy Institutes

IRS Shouldn't Force Taxpayers Into Tax-Maximizing Transactions

Cato Op-Eds - Thu, 04/03/2014 - 13:24

Ilya Shapiro

While tax evasion is a crime, the Supreme Court has long recognized that taxpayers have a legal right to reduce how much they owe, or avoid taxes altogether, through careful tax planning. Whether that planning takes the form of an employee’s deferring income into a pension plan, a couple’s filing a joint return, a homeowner’s spreading improvement projects over several years, or a business’s spinning off subsidiaries, so long as the actions are otherwise lawful, the fact that they were motivated by a desire to lessen one’s tax burden doesn’t render them illegitimate.

The major limitation that the Court (and, since 2010, Congress) has placed on tax planning is the “sham transaction” rule (also known as the “economic substance” doctrine), which, in its simplest form, provides that a transaction solely intended to lessen a commercial entity’s tax burden, with no other valid business purpose, will be held to have no effect on that entity’s income-tax assessment. The classic sham transaction is a deal in which a corporation structures a series of deals between its subsidiaries, producing an income loss on paper that is then used to lower the parent company’s profits (and thus its tax bill) without reducing the value of the assets held by the commercial entity as a whole.

We might quibble with a rule that effectively nullifies perfectly legal transactions, but a recent decision by the U.S. Court of Appeals for the Eighth Circuit greatly expanded even the existing definition of “economic substance,” muddying the line between lawful tax planning and illicit tax evasion. At issue was Wells Fargo’s creation of a new non-banking subsidiary to take over certain unprofitable commercial leases. Because the new venture wasn’t a bank, it wasn’t subject to the same stringent regulations as its parent company. As a result, the holding company (WFC Holdings Corp.) was able to generate tens of millions of dollars in profits.

Despite the very real economic gains to the subsidiary (and the parent), the Eighth Circuit held that the set-up was a sham because not every individual component of the restructuring produced a substantial economic benefit or was justified by a non-tax-related business purpose. In effect, the court created a new rule by which a deal with an unquestionable business purpose can still be declared a sham if the companies involved choose to structure it to be as tax-efficient as possible.

Joined by the U.S. Chamber of Commerce and the Financial Services Roundtable, Cato has filed an amicus brief supporting Wells Fargo’s petition for Supreme Court review. We present three arguments: (1) The Eighth Circuit’s ruling, which conflicts with those of its sister circuits, adds to the general confusion surrounding the economic substance doctrine—such that even the most sophisticated taxpayers, assisted by teams of lawyers and accountants, can’t predict how much tax they will owe at the end of the year. (2) This confusion creates substantial burdens for businesses and consumers, without any benefit to the economy. Legal uncertainty causes companies to shy away from complex deals and to waste millions of dollars on tax planning and litigation, money that could be better spent on growing their businesses (creating new jobs) or R&D (creating better products for consumers). (3) Taken to its logical extreme, the lower court’s rule requires taxpayers with a valid business purpose to select the transaction that fulfills that purpose with the highest possible tax liability. That rule is profoundly unwise and contrary to precedent, which holds that “[a taxpayer’s] legal right … to decrease the amount of what otherwise would be his taxes, or altogether avoid them, by means which the law permits, cannot be doubted.”

The Supreme Court hasn’t revisited the economic substance doctrine in nearly 75 years, so it’s high time it provided some clarity. The Court will likely decide whether to review WFC Holdings Corp. v. United States by the time it recesses for the summer; if it takes the case, oral argument will be in late fall.

This blogpost was co-authored by Cato legal associate Gabriel Latner.

Categories: Policy Institutes

FBI Seizes Antiquities First, Asks Questions Later

Cato Op-Eds - Thu, 04/03/2014 - 12:57

Walter Olson

An extraordinary and disturbing story just out from the Indianapolis Star/USA Today:

WALDRON, Ind. — FBI agents Wednesday seized “thousands” of cultural artifacts, including American Indian items, from the private collection of a 91-year-old man who had acquired them over the past eight decades.

An FBI command vehicle and several tents were spotted at the property in rural Waldron, about 35 miles southeast of Indianapolis.

The Rush County man, Don Miller, has not been arrested or charged.

So if the owner hasn’t been arrested or charged, what’s the basis of the raid? 

Robert A. Jones, special agent in charge of the Indianapolis FBI office, would not say at a news conference specifically why the investigation was initiated, but he did say the FBI had information about Miller’s collection and acted on it by deploying its art crime team.

FBI agents are working with art experts and museum curators, and neither they nor Jones would describe a single artifact involved in the investigation, but it is a massive collection. Jones added that cataloging of all of the items found will take longer than “weeks or months.”…

The aim of the investigation is to determine what each artifact is, where it came from and how Miller obtained it, Jones said, to determine whether some of the items might be illegal to possess privately.

Jones acknowledged that Miller might have acquired some of the items before the passage of U.S. laws or treaties prohibited their sale or purchase.

Might be illegal. Or might have been acquired lawfully. They’re not saying! But to satisfy its curiosity the government gets to seize everything and sort through at its leisure over longer than “weeks or months.” 

It doesn’t sound as if the artifacts were in some sort of immediate danger:

In addition to American Indian objects, the collection includes items from China, Russia, Peru, Haiti, Australia and New Guinea, he said. …

The objects were not stored to museum standards, Jones said, but it was apparent Miller had made an effort to maintain them well.

I’ve written previously, elsewhere and in this space, about 

the rise of a new “antiquities law” in which museums and private collectors have come under legal pressures to hand over (“repatriate”) ancient artifacts and archaeological finds to governments, Indian tribes and other officially constituted bodies, even when those artifacts have been in legitimate collector hands for 100 or more years with no hint of force or fraud.

Further regulatory regimes covering exotic and endangered animal and plant material make it dangerous to let the feds anywhere near your high-end guitar or other wooden artifact, and will soon make it unlawful to sell or move across state lines your family’s antique ivory-keyed piano (more here).


Categories: Policy Institutes

Theory: The Supreme Court Could Apply the Terms of the Fourth Amendment in Fourth Amendment Cases

Cato Op-Eds - Thu, 04/03/2014 - 10:58

Jim Harper

The Supreme Court could apply the terms of the Fourth Amendment in Fourth Amendment cases.

I know. Weird idea, right?

But it’s an idea I’ve pushed in briefs to the Court over the last few years: in U.S. v. Jones (2011), Florida v. Jardines (2012), In re Electronic Privacy Information Center (2013), and most recently in Riley v. California (2014). We’ll file in U.S. v. Wurie next week.

The idea is interesting enough that Mason Clutter of the National Association of Criminal Defense Lawyers has paid me the compliment of discussing it in her new law review article, “Dogs, Drones, and Defendants: The Fourth Amendment in the Digital Age.”

Jim Harper, director of information policy studies at the Cato Institute and one of the authors of Cato’s amicus brief in Jardines, regularly makes the argument that “[a] ‘search’ occurs when government agents seek out that which is otherwise concealed from view, the opposite condition from what pertains when something is in ‘plain view.’ People maintain ‘privacy’ by keeping things out of others’ view, exercising control over personal information using physics and law.” The “Harper Theory” of search and seizure encourages judges, lawyers, and law enforcement officers to revert to the “plain meaning[]” of the Fourth Amendment’s use of “search” and “seizure.”

That’s right. The idea of using the words of the Fourth Amendment rather than stacks of confusing doctrine now has a name, and it’s the “Harper theory.” I guess I thought of it, so it’s named after me!

In seriousness, it is a challenge to recognize seizures and searches as such in “high-tech” contexts. Today’s problems with the Fourth Amendment—and the problem of doctrine obfuscating the text—began in 1928, when the Olmstead Court failed to recognize parallels between that era’s high tech—telephonic communications—and written material sent through the mail.

But it is possible to recognize electronic and digital documents and communications as papers and effects. It is possible to recognize seizures when invasions of property rights occur in whatever form. And it is possible to recognize searches as efforts to discover information that is otherwise concealed from view. All this makes it possible to apply the words of the Fourth Amendment in Fourth Amendment cases.

I’m complimented if that’s called the “Harper theory.” I feel like I got it from Cardozo.

Categories: Policy Institutes

Chairman Ryan's Budget: A Mixed Bag of Reforms

Cato Op-Eds - Wed, 04/02/2014 - 11:38

Nicole Kaeding

House Budget Committee Chairman Paul Ryan released his budget proposal yesterday, his last as committee chairman. This budget differs greatly from the budget request submitted by President Obama last month. Ryan would “cut” federal spending by $5.1 trillion over the next 10 years and calls upon Congress to pass pro-growth tax reform. However, Ryan’s budget is still a mixed bag from a small-government perspective.

Positive Reforms in Ryan’s budget:

  • Medicaid Block Grants: Ryan suggests block granting Medicaid to institute some fiscal sanity to this ever-growing program. This reform would reduce state government incentives to overspend and would allow them greater flexibility to innovate and cut costs. Federal spending would be reduced by $732 billion compared to baseline by this simple reform.
  • SNAP Block Grants: The Supplemental Nutrition Assistance Program (“food stamps”) would also be block granted, saving $125 billion over 10 years compared to baseline. SNAP and Medicaid block grant reforms would copy the successful approach of welfare reforms in the 1990s.
  • Medicare Premium Support: Repeating a proposal from his last several budgets, Ryan suggests changing Medicare to a premium-support model. Rather than federal spending going to health care providers, it would be directed toward health care consumers. That would hopefully generate incentives to reduce costs and improve quality. It would also allow seniors to pick the health plan that most closely matches their needs.
  • Repeals ObamaCare Spending: Ryan’s budget repeals ObamaCare’s spending components. This is his largest reduction, which would save taxpayers $2 trillion over the next ten years.

Downsides to Ryan’s budget:

  • Social Security Reform: Ryan’s budget does not tackle Social Security reform, leaving almost one quarter of the federal budget unchanged. He calls on the president and Congress to submit recommendations to reform the program, but does not submit any suggestions of his own.
  • Higher Revenue Baseline: Chairman Ryan calls for pro-growth tax reform within his budget; however, he adopts the Congressional Budget Office’s current revenue baseline. This would keep the extra revenues generated from the numerous tax hikes enacted over the last several years.
  • Delayed Reforms: Perhaps due to political concerns, many of Ryan’s reforms would not start for several years. His SNAP block grant would not begin for five years, and his Medicare premium support model would not start until 2024.
  • Keeps Higher Spending: In December, Ryan and Senate Budget Chairman Patty Murray agreed to increase discretionary spending levels for fiscal year 2014 and fiscal year 2015. This partly gutted the bipartisan Budget Control Act from 2011. Ryan’s budget retains the higher spending levels.

In sum, Ryan’s budget would not solve the government’s overspending problem. But it would be a good first step to reforming the federal behemoth.

Categories: Policy Institutes

Another Campaign Restriction Falls Because First Amendment Strongly Protects Political Speech

Cato Op-Eds - Wed, 04/02/2014 - 11:05

Ilya Shapiro

Despite the 5-4 split among the justices, McCutcheon is an easy case if you apply well-settled law: Restrictions on the total amount an individual may donate to candidates and party committees—as opposed to how much he can donate to any one candidate—violate the First Amendment because they do not prevent quid pro quo corruption or the appearance thereof. That corruption-prevention rationale is the only government interest that the Supreme Court accepts as a valid one for restricting political-campaign activities. As Chief Justice Roberts wrote for the majority (and it is a majority because Justice Thomas concurs in the judgment): “Money in politics may at times seem repugnant to some, but so too does much of what the First Amendment vigorously protects. If the First Amendment protects flag burning, funeral protests, and Nazi parades—despite the profound offense such spectacles cause—it surely protects political campaign speech despite popular opposition.”

With Justice Thomas, however, I would go beyond that simple point and overrule Buckley v. Valeo (1976) altogether because “[c]ontributions and expenditures are simply ‘two sides of the same First Amendment coin’” and the Court’s “efforts to distinguish the two have produced mere ‘word games’ rather than any cognizable principle of constitutional law” (quoting Chief Justice Burger’s partial dissent in Buckley). Buckley rewrote the speech-restrictive post-Watergate campaign-finance law into something no Congress would’ve passed, also inventing legal standards such that one type of political speech has greater First Amendment protection than another. Nearly 30 years later, the Supreme Court rewrote another congressional attempt (McCain-Feingold) to “reform” the rules by which people run for office, shying away from striking down Buckley and producing a convoluted mish-mash opinion that serves nobody’s interest. Enough! As the drip-drip of campaign-finance rulings over the last decade has shown, existing campaign-finance law is as unworkable as it is unconstitutional.

As Cato argued in its amicus brief, in a truly free society, people should be able to give whatever they want to whomever they choose, including candidates for public office. The Supreme Court today correctly struck down the biennial campaign contribution limits and gave those who contribute money to candidates and parties as much freedom as those who spend independently to promote campaigns and causes. But it should have gone further.

Categories: Policy Institutes

Another Blow to Campaign Finance Regulation

Cato Op-Eds - Wed, 04/02/2014 - 10:44

Roger Pilon

A quick heads-up: The Supreme Court has just handed down its decision in the much-anticipated campaign finance case of McCutcheon v. Federal Election Commission, and free speech won. See Cato’s brief here. Ilya will write more fully about the decision as soon as he’s had a chance to digest it. In the meantime, here’s the opening paragraph from the syllabus:

The right to participate in democracy through political contributions is protected by the First Amendment, but that right is not absolute. Congress may regulate campaign contributions to protect against corruption or the appearance of corruption. See, e.g., Buckley v. Valeo, 424 U.S. 1, 26-27. It may not, however, regulate contributions simply to reduce the amount of money in politics, or to restrict political participation of some in order to enhance the relative influence of others. See, e.g., Arizona Free Enterprise Club’s Freedom Club PAC v. Bennett, 564 U.S. ___, ___.

In a word, Buckley was not overruled, as we had hoped, although Justice Thomas would have done so (in his concurring opinion). And the usual dissenters dissented. But chalk this up as one more blow against the campaign finance regulators, from whom we will soon hear that the sky is falling, again.

Categories: Policy Institutes

Under the Hood of the House Intel Committee's NSA Reform Bill

Cato Op-Eds - Tue, 04/01/2014 - 17:27

Julian Sanchez

This post was originally published on March 31, 2014 on Just Security

While details on the president’s proposal to end NSA bulk collection of telephony records remain sparse, we do now have an actual piece of legislation to look at from the House Permanent Select Committee on Intelligence—one that tracks the broad outlines of the White House plan even as it differs in several critical details. I’ve already done a quick take in broad brushstrokes over at The Daily Beast; here I want to get into the weeds a bit.

The HPSCI bill actually covers quite a bit more than just NSA bulk collection—there are a few transparency measures and a provision for the FISA Court to appoint amici curiae, which mostly seems like an attempt to preempt legislation creating a more robust FISC “advocate”—but in this post I want to focus on the meat: The prohibition (or so it seems) on bulk collection, and the new authority in §503 designed to replace the current bulk telephony program.

(A) The Bulk Prohibition

The first thing to note is that the (apparent) prohibition on bulk collection is structured somewhat oddly, even taking into account the framers apparent desire to limit that prohibition to certain subcategories of records. The USA Freedom Act, for instance, does this by means of a fairly straightforward modification: It limits the scope of §215 (as well as FISA pen/trap orders and National Security letters) to records that are both relevant to an investigation and pertain to a suspected foreign agent or their direct contacts, using language the Senate had unanimously approved back in 2005. The HPSCI bill is rather bit more convoluted.

First, Section 2 of the bill completely excludes “call detail records” from the scope of §215—and only from §215. The bill defines “call detail records” as “communications routing information,” which sounds awfully general, but both the description as “call detail records” and the series of enumerated telephony-specific data types that follow strongly suggest it’s really limited to telephonic communications routing information. There’s some wiggle room here since the general term precedes the more specific enumeration, but especially in light of the subsequent separate prohibition on acquisition of “electronic communications” records, defined to exclude telephonic communications, I’d be surprised if the FISC didn’t read this narrowly. Though the “including” that precedes the enumerated data types indicates that it’s not exhaustive, the omission of location-associated terms like “cell site and sector” is conspicuous. HPSCI staff are apparently assuring reporters that location data is implicitly included, but we do know that law enforcement routinely obtain bulk location data in the form of “tower dumps,” or records of all the phones registered with a specific cell tower at a particular time. Since phones routinely do this even when they’re not placing a call—which is to say, when no particular “communication” is being “routed”—it’s at least an open question whether this provision forbids bulk collection of tower location data.

Then Section 3, “notwithstanding any other provision of law,” prohibits the government from acquiring “records of any electronic communication without the use of specific identifiers or selection terms” under any provision of FISA. Contrast the White House proposal, which from what we’ve heard so far would not impose any limits on non-telephony collection. This section incorporates the Electronic Communications Privacy Act definition of “electronic communications,” which as noted above, means it excludes records of phone calls or other “aural transfer” (e.g. VoIP), which fall under the mutually exclusive category of “wire communications.” Later in §503, the bill explicitly refers to both “electronic” and “wire” communications records, suggesting that this is very much intentional. This provision, then, would not appear to preclude bulk collection of telephony metadata (“call detail records”) under FISA authorities other than §215. Nor, of course, does it apply to National Security Letters, which are issued by the heads of FBI field offices without judicial pre-approval, since those are not technically part of FISA, despite generally being used in the same investigations.

Also left ambiguous is precisely what “specific identifiers or selection terms” means. Intuitively it would refer to things like e-mail addresses and account logins, but documents leaked by Edward Snowden suggest that in some contexts the government has used much broader “selectors,” such as ranges of Internet Protocol addresses. If something that broad can count as a “specific identifier,” then at the outer limits the distinction between “targeted” and “bulk” collection becomes somewhat semantic.

Finally, note that the prohibition here only applies to the “acquisition” of a “record.” Crucially, collection of information live from the wire pursuant to 50 USC §1842, the provision that authorized NSA’s now-defunct bulk Internet metadata program, probably does not count as the “acquisition of a record,” even though, intuitively, it is a process by which the government ends up with records of communications. A former intelligence official I informally bounced this language off agreed that the use of this pen register/trap-and-trace provision would not fall under this prohibition, because the information obtained isn’t acquired in the form of a record maintained by a communications provider: Rather, the government is acquiring data in transit and creating its own record rather than “acquiring” one.

The last prohibition, similarly covering all FISA authorities, bars acquisition without specific identifiers of several other categories of sensitive records, specifically:

library circulation records, library patron lists, book sales records, book customer lists, firearm sales records, tax return records, educational records, or medical records containing information that would identify a person

This is the same list of sensitive records that currently requires explicit approval by the Attorney General before they can be acquired under §215. The final qualifier—”containing information that would identify a person”—is likely to be read as applying to all the preceding types of information. In addition to the other loopholes and ambiguities, this might be read to allow bulk acquisition of “anonymized” records for various data mining purposes. Anonymization, however, should not obviate privacy concerns: As Paul Ohm has documented, any sufficiently rich and informative “anonymous” data set can be re-identified given enough other data sets—which the NSA has in abundance. And of course, many types of records not specifically named—credit card records, for instance—are not included in any of these prohibitions (or pseudo-prohibitions) on bulk collection.

(B) The New Authority

In order to preserve the capabilities of the current NSA telephony program, the HPSCI bill creates a new and distinct authority, §503, that authorizes rapid collection of both telephony and electronic communications metadata under a process superficially somewhat similar to §702 of the FISA Amendments Act. The Attorney General and Director of National Intelligence jointly issue broad “authorizations” for the collection of records pertaining to suspected agents of foreign powers and their direct contacts or associates. (This effectively gives you two “hops” from a “seed” number: The direct contact is the first hop and their records contain identifiers for the second hop.) Records must not include communications content or other personally identifying information, and procedures must be developed to protect privacy and civil liberties. The FISC signs off on general procedures for establishing “reasonable articulable suspicion” of the appropriate foreign power link in the selectors that providers are directed to provide records on. The government then issues directives to telecom providers requiring both historical and prospective, ongoing production of records pertaining to specific identifiers. The FISC does not pre-approve these directives and selectors, but must be “promptly” provided with each directive and a record of the basis for thinking it meets the criteria—at which point the court can terminate acquisition if it believes the criteria are not met, though no further affirmative approval is required.

While it may not be obvious, probably the critical thing here is actually the provision requiring the providers to produce “records, whether existing or created in the future, in the format specified by the Government” coupled with one providing for the providers to be compensated and receive any necessary technical assistance from the government. For domestic phone numbers, after all, FISA pen register authority already covers this type of collection, and many providers should be able to do a historical search of their records for foreign numbers. But the CALEA J-standard spelling out the surveillance capabilities that telecoms are required to have seems to assume that “pen registers” are always and only applied to a specific “facility” corresponding to a customer phone line. The trick, in other words, was rapidly getting the carriers to produce records of calls to or from specific foreign numbers—and to produce them in a format that made it easy to cross-reference records across carriers. In short, this provision lets the government demand that the carriers create records in the form it needs, even if the company doesn’t maintain records of that type for its own business purposes, with government money and tech support to help them do it.

The HPSCI authority differs from the §215 statute, the current telephony program, and the president’s proposal in several salient ways.

The very first words of §503 capture one difference: “notwithstanding any other law.” While ultimately the FISC has apparently not been much deterred by the absence of a “notwithstanding” provision in §215, it does at least in principle mean that §215 does not automatically trump other statutory protections—and as a rule one wants these “notwithstanding” provisions used sparingly in broad collection authorities. Without access to the FISC’s other §215 opinions, it is hard to say what effect—if any—this addition will have.

Unlike the current telephony program (and apparently the president’s proposal), this authority is not restricted to identifiers tied to any particular terrorist group. Rather, a link (based on reasonable suspicion) to any foreign power or agent of a foreign power will suffice. The “reasonable suspicion” nexus is, obviously, narrower than the requirement of “relevance” required by §215 as currently interpreted by the FISC—and indeed, narrower even than the common pre-Snowden understanding of §215.

What is entirely eliminated is the required link to “an investigation to obtain foreign intelligence information not concerning a United States person or to protect against international terrorism or clandestine intelligence activities.” Given the breadth of FBI “enterprise investigations,” frequently invoked by defenders of the FISC’s strained “relevance” ruling, one would not think that requirement would prove unduly burdensome in practice. Removing it, however, does a couple of things. First, it eliminates whatever check might have been provided by the predication requirements for opening an investigation, and unmoors the acquisition authority from any particular investigative target. In at least some cases, this specific investigative link has tipped off the FISC that a record request might run afoul of the proscription on targeting Americans (presumably journalists) based solely on First Amendment protected conduct. If the FISC is only evaluating the foreign power link, that warning flag might not go up. Second, and perhaps more importantly, it would appear to eliminate the requirement that records pertaining to U.S. persons be acquired only for counterterror or counterespionage investigations, rather than for “foreign intelligence purposes” generally, which might include almost any effort to understand the actions and intentions of foreign entities. In practice, of course, these have not been effective limits on the acquisition of records, but the FISC has at least tried to embody them in back-end querying and usage limitations.

The most obvious difference from what the president has proposed, beyond the application to non-telephony communications records, is of course the combination of ex-ante FISC approval of programmatic procedures with ex-post review of specific directives, instead of the pre-approval of specific selector queries that the president has endorsed. I’m not quite as persuaded as some of my colleagues in the civil liberties community that this should be an absolute deal-breaker in this specific instance, provided that the FISC also reviews some basic information about the initial fruits of a query, which the HPSCI bill does not require or provide for.

I say that because in this case, each directive will yield the records of dozens or hundreds of contacts for every selector explicitly specified. Moreover, the FISC will rarely have much ex-ante basis for second-guessing the government’s “reasonable suspicion” determinations. Suppose instead the FISC were to review directives relatively quickly after issuance, along with a very rough statistical précis of the information obtained: How many unique contacts are identified at the first and second hop? How many of these belong to United States persons, to the extent this can easily be determined? While permitting ex-post approval does increase the risk that some requests will “slip through the cracks,” or that some information will be obtained on an inadequate basis, a more robust review provision than the HPSCI bill provides might at least give the FISC some basis for catching dubious determinations of suspicion. If a particular seed selector is pulling in an unusually large number of first-hop contacts, or if the purported cell phone of a Pashtun goatherd is primarily calling numbers in the 202 area code, the FISC might at least be motivated to ask for some supporting documentation. That’s not to say the trade is necessarily worth making, and again, what I’ve described is emphatically not provided for in the HPSCI bill, but it’s at least worth considering.
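The kind of statistical précis described above is cheap to compute. As a purely illustrative sketch (the records, the seed selector, and the treatment of calls as undirected contacts are all hypothetical assumptions, not details of any actual program), counting unique first- and second-hop contacts for a seed might look like:

```python
from collections import defaultdict

# Hypothetical call-detail records: (caller, callee) pairs.
# These values are invented for illustration only.
records = [
    ("seed", "A"), ("seed", "B"), ("A", "C"),
    ("B", "C"), ("B", "D"), ("E", "seed"),
]

def contacts_by_hop(records, seed):
    """Return the sets of unique first- and second-hop contacts of a seed."""
    graph = defaultdict(set)
    for a, b in records:            # treat each call as an undirected contact
        graph[a].add(b)
        graph[b].add(a)
    first = graph[seed] - {seed}
    second = set()
    for contact in first:
        second |= graph[contact]
    second -= first | {seed}        # second hop excludes seed and first hop
    return first, second

first, second = contacts_by_hop(records, "seed")
print(len(first), len(second))      # the hop counts a reviewing court might inspect
```

A reviewing court would of course look at aggregate counts across many directives rather than a toy graph like this, but the point is that the numbers a judge would need are a byproduct of queries the government is already running.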

(C) The Bottom Line

Let’s sum up. First, the HPSCI bill’s seemingly broad prohibition on bulk collection turns out to be riddled with ambiguities and potential loopholes. The fuzzy definition of “specific identifiers” leaves the door open to collection that’s extremely broad even if not completely indiscriminate. Because the provision dealing with “call detail records” applies only to §215 and the provision dealing with “electronic communications records” excludes telephony records, the law does not bar the bulk collection of telephony records under FISA provisions other than §215. The prohibition on non-specific acquisition of other communications “records” probably does not preclude bulk collection under the FISA pen register provision that was previously used for the NSA Internet metadata dragnet. And, of course, none of these prohibitions apply to National Security Letters. If the government wanted to keep collecting metadata in bulk, it would have plenty of ways to do so within the parameters of this statute given a modicum of creative lawyering—at least if the FISC were to continue being as accommodating as it has been in the past.

Second, something like the novel authority created here may well be necessary to enable fast and flexible acquisition of targeted records without dragnet collection. However, once we get down to details, and even leaving aside the question of ex-post versus ex-ante judicial approval, this authority is in some respects broader than the current §215 telephony program, the president’s proposal, and the pre-Snowden understanding of the FISA business records authority. Critically, it eliminates the required link to a predicated investigation, which, in the case of U.S. persons, must be for counterterror or counterespionage purposes.

While this would presumably put an end to the current dragnet collection of telephony metadata, it is not at all clear how seriously it would constrain the government’s bulk collection of records on the whole. Indeed, there is at least a colorable argument that the new authority could expand the scope of government collection in some respects. Given the government’s track record on this front, it is probably not excessively paranoid to suspect that any such loopholes and ambiguities are likely to be exploited.

Categories: Policy Institutes

Chairman Ryan’s Supposed Budget Slashing

Cato Op-Eds - Tue, 04/01/2014 - 15:28

Nicole Kaeding

Chairman Paul Ryan’s budget released today “cuts spending by $5.1 trillion over the next ten years,” the document claims. Similarly, the headline from the Washington Post says that Ryan’s budget “would slash $5 trillion over next decade.”

Yet looking at the details of Ryan’s proposal, the federal government will spend $1.5 trillion more in 2024 than it is expected to spend in 2014.

How can spending both be “slashed” and increased by $1.5 trillion? It’s because of the bizarre way that Washington discusses spending, which is known as baseline budgeting.

Here is a graph of Ryan’s proposed federal outlays.


The graph shows that under Ryan’s budget, federal spending increases every year.

But here is another graph showing Ryan’s spending compared to the Congressional Budget Office (CBO) baseline projection of spending made in February.

Notice the gap? That’s the $5 trillion that is “slashed” from the federal budget.

In Washington, all spending proposals are compared to the CBO’s baseline projections, which the agency releases a couple of times a year based on its estimates of current federal law. Inside-Washington discussions of spending cuts or increases are thus relative to CBO’s figures.

But this is a very different way of thinking about budgeting than used by families, who don’t assume that their income will go up automatically every year. Families prioritize, and they cut back when they need to make the books balance. Sadly, few proposals in Congress make tough trade-offs and cut actual levels of spending.

Chairman Ryan’s budget would spend $42.6 trillion over the next ten years. Opponents will say that Ryan’s budget slashes federal spending, while supporters will say that it includes large budgetary savings. The reality is that Ryan’s budget would increase spending at an average annual rate of 3.5 percent, from $3.54 trillion in 2014 to $5.0 trillion in 2024. Only in Washington would that be considered substantial restraint, let alone slashing.
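For readers who want to check the arithmetic, the 3.5 percent figure is just the compound annual growth rate implied by the two endpoint figures cited above. A quick sketch (using only the numbers from this post):

```python
# Growth arithmetic behind the "slashing" vs. "increasing" dispute.
# Figures are the ones cited in the post, in trillions of dollars.
start, end, years = 3.54, 5.0, 10   # 2014 -> 2024 outlays under the Ryan plan

# Compound annual growth rate implied by the endpoints.
annual_growth = (end / start) ** (1 / years) - 1
print(f"average annual growth: {annual_growth:.1%}")

# Spending still rises in absolute terms; the "$5 trillion slashed" is the
# cumulative gap between this path and the higher CBO baseline path.
increase = end - start
print(f"spending still rises by ${increase:.2f} trillion")
```

The same comparison against the CBO baseline, year by year, is what produces the “$5 trillion cut” headline: it is a gap between two rising paths, not a reduction from current spending levels.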

Categories: Policy Institutes

Does Strong IP Protection Create Innovation and Jobs?

Cato Op-Eds - Tue, 04/01/2014 - 12:12

Simon Lester

The story that many people tell is that intellectual property protection creates innovation, and here in the U.S. our IP-based industries lead to many high-paying, export-oriented jobs. So, we need strong IP protection, and so does the rest of the world, and thus we push other countries to sign on to strong IP protection through trade negotiations.

But is this story true? Does strong IP protection really create innovation and jobs?

First up, jobs.  Here’s the U.S. Chamber of Commerce:

IP is a clear indicator of the ingenuity of a country’s economy and the U.S. depends heavily on our IP for economic success; and IP accounts for 40 million U.S. jobs, 2/3 of all exports and $5.06 trillion in value.

The implication of this statement seems to be that, without our current level of IP protection, the U.S. would have 40 million fewer jobs, and only 1/3 as many exports.  But that can’t be right, can it?  If we had some lower level of IP protection, say, shorter copyright terms, perhaps fewer people would work in copyright-related industries.  But they wouldn’t just starve.  They would find something else to do.  If government policy didn’t skew the incentives in a way that moves people into copyright-related jobs, they would find something just as productive, if not more so, to do.  So, it may be true that 40 million people work in IP-related jobs.  But IP doesn’t create the jobs; government policy pushes people towards these jobs, away from other jobs.

But what about innovation?  Would there be any innovation at all without our existing levels of IP protection?  Here’s Rebecca Strauss of CFR:

no one has a good grasp of whether the U.S. patent system is doing what it was intended to do—promote innovation.

… in practice, economists have not yet come to any empirically robust conclusions about whether this theory pans out or how well the U.S. patent system is performing.

Even the most elemental components of patents have no clear economic analysis backing them up, including how long patent rights should last or whether the patent right should be granted to whoever files the application first versus whoever invents it first. 

In the absence of definitive economic analysis, the trend in U.S. patent reform over the past thirty years has been to strengthen the status quo system, with powerful corporate lobbies driving the policy discussion.

This means that the deal has generally gotten sweeter for patent holders. Congress has extended the lifeterm of patents, and most drastically for copyright, which lasts nearly four times longer today than in 1800. Curiously, whenever Disney’s Mickey Mouse copyright is due to expire, the official copyright lifeterm is lengthened. 

Does strong IP protection lead to more innovation?  The answer seems to be: We don’t know!

And I don’t know either.  My point is not that we should adopt policy X or policy Y with regard to IP protection.  Rather, what I think needs to happen is that we have an internal debate over appropriate levels of IP protection.  U.S. government folks say they want a debate.  But I don’t see it happening.  What I see is the U.S. pushing our existing policies on others through trade agreements, even though it’s not at all clear that these policies are good ones.

Categories: Policy Institutes

Chamber of Commerce and Business Roundtable: Borg Enablers

Cato Op-Eds - Tue, 04/01/2014 - 12:05

Neal McCluskey

Remember the Borg? You know, the Star Trek cyborgs who would encounter a ship, tell its occupants “resistance is futile,” then turn them all into Borg? Of course the Enterprise always resisted, and always survived. But what if Captain Picard had instead ordered, “Surrender. Then they’ll leave us alone.”

The crew response to that would certainly have been, “ol’ Jean-Luc is losing it!” At least, it would have been for the few seconds before everyone was converted into mindless drones. Yet that is just the sort of order a group calling itself the “Higher State Standards Partnership” is trying to issue to conservatives and libertarians when it comes to the Common Core. Yesterday, the Partnership – a front for the U.S. Chamber of Commerce and Business Roundtable – wrote in the Daily Caller that opponents of the Core should stop resisting if they want to keep schools from being assimilated by the federal government.

You read that right: After blaming the Obama administration for using the Race to the Top to meddle “in a clearly state-led, locally controlled education initiative,” the Partnership counseled Core opponents to end their resistance. Defeating the Core, they wrote, “would only bolster the hand of the Administration and invite federal control into our schools.”

That is absurd. But perhaps it’s easier to write if you utterly ignore basic facts about the effect of federal force, and the coercive role Core supporters intended for the feds to have all along. The Partnership blames the Obama administration for overstepping while neglecting to mention that in the 2008 report Benchmarking for Success the Core’s creators said Washington’s job was to incentivize standards adoption. The creators later repeated the call on the Common Core State Standards Initiative website. And Core supporters quite likely lobbied the administration to make adopting the Core a de facto RTTT requirement.

Sadly, the Partnership chose not only to ignore that Core supporters absolutely called for federal coercion, but it offered a laughable fiction that what federal influence there was basically meant nothing:

Despite the Administration’s attempt to capitalize on a state and local effort, it does not change the facts; a diverse group of local stakeholders with an interest in seeing children succeed – parents, teachers, education experts, policymakers and business people – came together in each of the states to debate and discuss how the standards would make sense for their classrooms. They decided locally whether higher standards made sense for their students. The federal government did not play a role – and had no place – in making that decision.

To continue the Trek theme, what planet do these people come from? First off, none of the decision to follow the Core is local: Even if you believe there was no federal coercion, it’s still states – not districts – that select standards. And there absolutely was federal force, both through Race to the Top and the offer of waivers out of the hated No Child Left Behind Act if states, among other things, adopted federally approved “college- and career-ready standards.” And don’t pretend there was much “democratic” debate about Core implementation. Indeed, if states wanted to compete for RTTT money they had to promise to adopt the Core before the final version had even been published!

If the Partnership really wants states and districts to avoid federal control, why deny the truth about federal power? Why act like states and districts are still in control of the ship when the Borg controls the engine room, the communications system, life support, and more? Are states and districts really in charge just because they’re on the bridge pressing inert buttons and barking meaningless orders?  Of course not. Which makes it hard not to conclude: The Partnership’s concern isn’t staving off federal control. It’s ending not-so-futile resistance to national standardization.

Categories: Policy Institutes

A Shocking Police Encounter

Cato Op-Eds - Tue, 04/01/2014 - 10:32

Caleb O. Brown

Our friends at Reason have acquired the video of an unbelievable roadside police stop near the U.S. border. Words don’t do it justice.

You Won’t Believe This Border Patrol Checkpoint Refusal Video

Please take a moment to look at Cato’s National Police Misconduct Reporting Project and Cato’s other work on criminal justice.

Categories: Policy Institutes

The Whistleblower Versus Robert Mugabe and the United Nations

Cato Op-Eds - Tue, 04/01/2014 - 10:18

Doug Bandow

Zimbabwe’s Robert Mugabe is a corrupt authoritarian.  The United Nations is a wasteful, inefficient organization that tolerates corrupt authoritarians.  Unfortunately, the two make beautiful music together.

Not everyone at the UN is corrupt.  One hero is Georges Tadonki, a Cameroonian who for a time headed the UN Office for the Coordination of Humanitarian Affairs (OCHA) in Zimbabwe.  The others are three judges in a United Nations Dispute Tribunal who last year ruled for Tadonki in a suit against the international organization.

Soon we will find out whether members of a UN appeals panel possess equal courage.  Their ruling is expected shortly, amid rumors that these judges might reverse course and absolve the organization of misconduct.

In 2008 President Robert Mugabe, who took power in 1980, and ZANU-PF, the ruling party, used violent intimidation to preserve their control.  At the time Tadonki had been on station for six years and predicted epidemics of both cholera and violence. 

Unfortunately, UN country chief Agostinho Zacarias dismissed Tadonki’s warnings.  By the end of the year 100,000 people had been infected with cholera and thousands had died.  During the election campaigns hundreds also had been killed by government thugs, who succeeded in derailing democracy. 

Naturally, no good deed went unpunished.  After extended discord between the two UN officials, Tadonki was fired in January 2009.  There was little doubt that the action was retaliation for being right and embarrassing Zacarias—who now serves the UN in South Africa. 

The controversy demonstrates that something is very wrong with the UN system.  Tadonki decided to fight, though he had to ask the international law firm Amsterdam & Peroff to handle the litigation on a pro bono basis.  Last year the UN Dispute Tribunal based in Kenya heard his case and Judges Vinod Boolell, Nkemdilim Izuako, and Goolam Merran issued their 104-page judgment. 

They concluded “that the Applicant was not, at all material times, treated fairly and in accordance with due process, equity and the core values of the Charter of the Organization” and that OCHA management ignored the UN’s “humanitarian values.”  The tribunal ordered the UN to apologize for its misbehavior, investigate the mistreatment of Tadonki, hold his superiors accountable for their misconduct, cover Tadonki’s litigation costs, pay past salary through the judgment date, and provide $50,000 in “moral damages for the extreme emotional distress and physical harm suffered by the Applicant.”

Explained the judges:  “This case has brought to light not only managerial ineptitude and highhanded conduct but also bad faith from the top management of OCHA.  This mismanagement and bad faith were compounded by a sheer sense of injustice against the applicant who was hounded right from the beginning.” 

Perhaps even worse was the larger environment in which this misconduct occurred.  Observed the tribunal:  “There was a humanitarian drama unfolding and people were dying.  Part of the population had been abandoned and subjected to repression.  The issue between Tadonki and Zacarias was to what extent these humanitarian concerns should be exposed and addressed and the risk that there was of infuriating the Mugabe government.”

The tribunal’s conclusion is devastating:  “the political agenda that RC/HC Zacarias was engaged in with the Government of Zimbabwe far outweighed any humanitarian concerns that OCHA may have had.”  Of course, “The UN and Zacarias’s chief responsibility should have been to Zimbabwe’s embattled civilian population.  Instead, both failed to live up to their obligations—even as they were conspiring against someone who had exceeded them.”

But as I note in my new American Spectator online article:  “the final resolution depends on the appellate process, which is approaching its decision.  Hopefully Georges Tadonki and the three tribunal judges are not the only UN officials willing to do what’s right, irrespective of cost.”

Categories: Policy Institutes