Daniel J. Mitchell
I feel a bit like Goldilocks.
Think about when you were a kid and your parents told you the story of Goldilocks and the Three Bears.
She tasted porridge that was too hot and then porridge that was too cold. Then she found a bed that was too hard, and another that was too soft, before finding one that was just right.
Well, the reason I feel like Goldilocks is because I’ve shared some “Rahn Curve” research suggesting that growth is maximized when total government spending consumes no more than 20 percent of gross domestic product. I think this sounds reasonable, but Canadians apparently have a different perspective.
Back in 2010, a Canadian libertarian put together a video that explicitly argues that I want a government that is too big.
Now we have another video from Canada. It was put together by the Fraser Institute, and it suggests that the public sector should consume 30 percent of GDP, which means that I want a government that is too small.

Measuring the Size of Government in the 21st Century by Livio Di Matteo
My knee-jerk reaction is to be critical of the Fraser video. After all, there are examples - both current and historical - of nations that prosper with much lower burdens of government spending.
Singapore and Hong Kong, for instance, have public sectors today that consume less than 20 percent of economic output. Would those fast-growing jurisdictions be more prosperous if the burden of government spending were increased by more than 50 percent?
Or look at Canadian history. As recently as 1920, government outlays were 16.7 percent of economic output. Would Canada have grown faster if lawmakers at the time had almost doubled the size of government?
And what about nations such as the United States, Germany, France, Japan, Sweden, and the United Kingdom, all of which had government budgets in 1870 that consumed only about 10 percent of GDP? Would those nations have been better off if the burden of government spending had been tripled?
I think the answer to all three questions is no. So why, then, did the Fraser Institute conclude that government should be bigger?
There are three very reasonable responses to that question. First, the 30 percent number is actually a measurement of where you theoretically maximize “social progress” or “societal outcomes.” If you peruse the excellent study that accompanies the video, you’ll find that economic growth is most rapid when government consumes 26 percent of GDP.
Second, the Fraser research - practically speaking - is arguing for smaller government, at least when looking at the current size of the public sector in Canada, the United States, and Western Europe. According to International Monetary Fund data, government spending consumes 41 percent of GDP in Canada, 39 percent of GDP in the United States, and 55 percent of GDP in France.
The Fraser Institute research even suggests that there should be significantly less government spending in both Switzerland and Australia, where outlays total “only” 34 percent of GDP.
Third, you’ll see if you read the underlying study that the author is simply following the data. But he also acknowledges “a limitation of the data,” which is that the numbers needed for his statistical analysis are only available for OECD nations, and only beginning in 1960.
This is a very reasonable point, and one that I also acknowledged when writing about some research on this topic from Finland’s Central Bank.
…those numbers…are the result of data constraints. Researchers looking at the post-World War II data generally find that Hong Kong and Singapore have the maximum growth rates, and the public sector in those jurisdictions consumes about 20 percent of economic output. Nations with medium-sized governments, such as Australia and the United States, tend to grow a bit slower. And the bloated welfare states of Europe suffer from stagnation. So it’s understandable that academics would conclude that growth is at its maximum point when government grabs 20 percent of GDP. But what would the research tell us if there were governments in the data set that consumed 15 percent of economic output? Or 10 percent, or even 5 percent? Such nations don’t exist today.
For what it’s worth, I assume the author of the Fraser study, given the specifications of his model, didn’t have the necessary post-1960 data to include small-state, high-growth, non-OECD jurisdictions such as Hong Kong and Singapore. If that data had been available, I suspect he also would have concluded that government should be closer to 20 percent of economic output.
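The estimation logic behind such studies can be made concrete with a small sketch. The numbers below are invented for illustration (they are not the Fraser study's actual data set): fit a quadratic of growth on the government spending share and solve for the vertex. The data-constraint point falls out immediately: the fitted peak can only be located within, or very near, the range of spending shares actually observed, so a sample containing no governments below 20 percent of GDP cannot reveal a growth-maximizing point at 15 or 10 percent.

```python
import numpy as np

# Hypothetical (invented) observations: government spending as a share of
# GDP and average real growth. The data are constructed to peak at a 25
# percent spending share. Note the sample starts at 20 percent -- as in
# the real post-1960 OECD data, there are no smaller governments to observe.
share = np.array([20, 25, 30, 35, 40, 45, 50, 55], dtype=float)
growth = 5.0 - 0.004 * (share - 25.0) ** 2  # stand-in for measured growth

# Fit growth = c2*share^2 + c1*share + c0 and find the vertex,
# i.e., the growth-maximizing spending share.
c2, c1, c0 = np.polyfit(share, growth, 2)
peak_share = -c1 / (2 * c2)
print(round(peak_share, 1))  # ≈ 25.0 for this constructed data
```

A regression of this form can only place the peak where the data allow; add observations of small-state, high-growth jurisdictions like Hong Kong and Singapore and the estimated vertex would shift toward 20 percent.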
I explore all these issues in my video on this topic.

The Rahn Curve and the Growth-Maximizing Level of Government
The moral of the story is that government is far too large in every developed nation.
I suspect even Hong Kong and Singapore have public sectors that are too large, causing too many resources to be diverted from the private sector.
But since I’m a practical and moderate guy, I’d be happy if the burden of government spending in the United States was merely reduced back down to 20 percent of economic output.
P.S. Though I would want the majority of that spending at the state and local level.
P.P.S. Since I’m sharing videos today, here’s an amusing video from American Commitment about the joy of being “liberated” from employment.

Trapped
Some people say innovation is dead in America, but NASA is always looking for innovative ways to extract more money from the taxpayers. The Wall Street Journal reports on some of their innovations in using our tax dollars to persuade us to give them even more of those tax dollars:
In William Forstchen’s new science fiction novel, “Pillar to the Sky,” there are no evil cyborgs, alien invasions or time travel calamities. The threat to humanity is far more pedestrian: tightfisted bureaucrats who have slashed NASA’s budget.
The novel is the first in a new series of “NASA-Inspired Works of Fiction,” which grew out of a collaboration between the National Aeronautics and Space Administration and science fiction publisher Tor. The partnership pairs up novelists with NASA scientists and engineers, who help writers develop scientifically plausible story lines and spot-check manuscripts for technical errors.
The plot of Mr. Forstchen’s novel hinges on a multibillion-dollar effort to build a 23,000-mile-high space elevator—a quest threatened by budget cuts and stingy congressmen….
It isn’t the first time NASA has ventured into pop culture. NASA has commissioned art work celebrating its accomplishments from luminaries like Norman Rockwell and Andy Warhol. …
Some see NASA’s involvement in movies, music and books as an attempt to subtly shape public opinion about its programs.
“Getting a message across embedded in a narrative rather than as an overt ad or press release is a subtle way of trying to influence people’s minds,” says Charles Seife, author of “Decoding the Universe,” who has written about NASA’s efforts to rebrand itself. “It makes me worry about propaganda.”
Lobbying with taxpayers’ money isn’t new. But as Thomas Jefferson wrote in the Virginia Statute of Religious Liberty: “To compel a man to furnish contributions of money for the propagation of opinions which he disbelieves is sinful and tyrannical.” To compel him to furnish contributions of money to petition his elected officials to demand more contributions from him just adds insult to injury.
Bryan Caplan has an interesting post on the recent Swiss referendum to restrict immigration from the European Union. Tyler Cowen also blogged on the same issue twice. Caplan’s point is that the Swiss imposed restrictions because there was insufficient immigration rather than too much. Areas of Switzerland that had fewer immigrants voted to restrict immigration while areas with many immigrants voted to keep the doors open.
A similar theory could explain why immigration quotas were first imposed in the United States after World War I. That war substantially reduced immigration from Europe. From 1904 through 1914, almost 1 million immigrants arrived annually in the United States – a total of 10.9 million. This large population, combined with their children, opposed numerous legislative efforts to restrict immigration from Europe.

Year    1st Gen %   2nd Gen %   1st+2nd Gen %
1870    14.4        14.0        28.4
1880    13.3        18.3        31.6
1890*   14.8        ?           ?
1900    13.7        20.9        34.6
1910    14.8        21.0        35.8
1920    13.4        21.9        35.3
1930    11.8        21.4        33.2
1940    11.8        18.2        30.0
1950    9.6         16.6        26.2
1960    6.0         13.7        19.7
1970    5.9         11.8        17.7
1980*   6.2         ?           ?
1990^   8.7         8.8         17.5
2000    12.2        10.3        22.5
2010    13.7        11.3        25.0

*Data unavailable
^1990 figures are from 1993
Source: IPUMS
World War I erupted in August 1914, slowing immigration and causing the first-generation share of the population to fall by more than the second-generation share rose. During the four years of the war, slightly more than one million immigrants arrived. That decline, concentrated in the first generation, might be part of the reason why anti-immigration politicians succeeded in passing the first immigration quotas in 1921. At that time many non-citizens could vote, and it was much easier to naturalize than it is today.
The post-war U.S. recession, the continuing blockade of Germany, and chaos in Europe prevented immigration from rebounding until 1921, when 805,228 people immigrated – the same year that numerical quotas restricted immigration for the first time. If the pre-war pace of immigration had been uninterrupted by World War I, 4.6 million additional immigrants would have landed in America by that time – boosting the immigrant share to somewhat less than 17.7 percent of the total population, with the second generation rising by a smaller amount as well. Combined, the first and second generations would have equaled around 40 percent of the American population. Supporters of immigration restrictions might have understood this and known that immigration from Europe was about to accelerate rapidly, meaning that they had only a narrow window to approve restrictions before the changing nativity of the population made that more politically difficult.
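The counterfactual share can be checked with back-of-the-envelope arithmetic. The inputs below are assumptions for illustration: a 1920 census population of roughly 106 million, the 13.4 percent foreign-born share recorded for 1920, and the 4.6 million immigrants the pre-war pace implies were “missing.”

```python
# Back-of-the-envelope counterfactual (assumed figures for illustration):
# suppose World War I had not happened and 4.6 million additional
# immigrants had arrived by 1921.
pop_1920 = 106.0e6          # approximate 1920 census population
fb_share_actual = 0.134     # actual foreign-born share, 1920
extra_immigrants = 4.6e6    # missing immigrants implied by the pre-war pace

foreign_born = fb_share_actual * pop_1920 + extra_immigrants
total = pop_1920 + extra_immigrants
counterfactual_share = 100 * foreign_born / total
print(round(counterfactual_share, 1))  # ≈ 17.0 percent
```

The result, roughly 17 percent, is indeed “somewhat less than 17.7 percent,” consistent with the claim in the text.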
Several factors would have made the 1921 vote to restrict immigration more difficult to achieve if there had been that many more immigrants.
First, 66.9 percent of the House of Representatives voted for the bill. President Harding supported the law, but a similar earlier attempt to restrict immigration had been vetoed, making the two-thirds veto-override threshold important. With 4.6 million more immigrants, clearing that threshold would have been more difficult.
Second, redistricting in 1920 gerrymandered Congressional districts to reduce the political power of immigrants – which was aided by the slight decrease in the percentage of foreign born. Representatives who voted against the bill came from states that had, on average, 20.4 percent of their population as immigrants according to the 1920 census. Representatives who voted for the restriction came from states that had, on average, 10.7 percent of their population as foreign born. The 104 Representatives who did not vote came from states that had, on average, 15.2 percent of their populations who were foreign born.
Massachusetts, however, offers a puzzle. In 1917, Congress voted on an immigration-restrictionist bill called the Literacy Act. That year, only four of Massachusetts’ 15 Congressmen voted “yea” on the Literacy Act, with 11 voting “nay.” By the 1921 Act, Massachusetts had only 13 seats. Of those 13, five voted “yea,” three voted “nay,” and five did not vote. Between 1910 and 1920, the immigrant population of the state increased by 2.5 percent, yet its delegation shifted toward restriction. Gerrymandering could explain this shift, although I do not have the data to show that, or something else might have changed.
Other factors contributed to the end of the first era of immigration in the United States. Southern politicians opposed immigration because it gave more electoral weight to the Northeast. Labor unions opposed it, and some business interests began to turn against it for fear that immigrants would bring socialism with them. Growing state-based welfare programs, a wave of terrorism by some Italian immigrants, the eugenics movement, and numerous other factors likely also contributed to ending native support for immigration.
If this hypothesis is true, U.S. voters will support a more liberalized immigration policy as the percentage of the population that is foreign born and the second generation continues to increase.
Different states have also produced more strict and more lenient immigration-related laws over the years despite large differences in the immigrant percentages across states. Here are partially complete lists of state laws:
Pro Immigration Laws
Sources: American Community Survey, U.S. Census, Immigration Policy Center, and dreamact.org.
Anti Immigration Laws
Sources: American Community Survey, U.S. Census, and Immigration Policy Center.
These lists do not include states that didn’t pass laws or that tried to pass them and failed, but they provide a starting point for analysis. States that pass pro-immigration laws typically have more immigrants as a percentage of their populations. Interestingly, the difference is not huge, although there is a great deal of variance. The average immigrant share for anti-immigration states is 10.7 percent – pretty high. As in the Swiss example, anti-immigration American states have some immigrants – just enough to make some natives dislike them – and certainly aren’t devoid of them.
California and Arizona are odd cases because so much of their populations were foreign born when they created their anti-immigration laws; but there is a good argument for excluding them from the anti-immigration list because they were the trend setters. California pioneered the anti-immigration state law with Proposition 187 in 1994 (it was certainly viewed that way by the public). There was a higher fixed cost for Californians in developing the concept of an anti-immigration state law, so it likely took greater public outrage to spur that bit of legislative innovation. The case is similar for Arizona and its anti-immigration laws of 2008 and 2010. After Californians and Arizonans developed the framework for opposing immigration at the state level, the marginal cost for other states of copying anti-immigration laws was smaller, making such laws easier to adopt – which is exactly what happened. Arizona also developed the idea of denying driver’s licenses to DACA recipients.
Excluding California and Arizona from that already short list lowers the average immigrant population of anti-immigrant states to 7.4 percent – almost half of the pro-immigration states – and the standard deviation to just 2.9 percent. Just eyeballing it, there might be a Kuznets curve for immigration restriction with the percent of a state’s population that is immigrant on the X-axis and support for immigration restrictions on the Y-axis. A state needs some immigrants to pass anti-immigrant laws, but after the immigrant population grows past a point, pro-immigration laws are instead passed. Getting to the far side of that curve makes further immigration restrictions very difficult to impossible.
This brings us back to Switzerland.
Although 27.3 percent of Switzerland’s population is foreign born, far fewer than that can vote. Switzerland also doesn’t have birthright citizenship, so the number of second-generation Swiss who could vote against restrictions was likely small. This could explain why, despite having a relatively high foreign-born share of its population, Switzerland was able to pass anti-immigration laws much as American states with low immigrant populations did.
This is not enough data to confirm my theory for the United States, but it is a starting point. To avoid some of the worst anti-immigration laws, it seems the immigrant population simply has to outgrow them.
Canada released a new federal budget yesterday. The ruling Conservatives are centrists and far too supportive of the welfare state. Nonetheless, the government is expected to balance the budget next year while steadily reducing spending and debt as a share of GDP.
The contrast with the huge and unreformed federal budget in Washington is stark.
In Canada, federal spending fell to just 15.1 percent of GDP in 2013 and the government projects that the ratio will decline steadily to 14.0 percent by 2019 (p. 268). Federal debt as a share of GDP fell to just 33 percent this year.
In the United States, federal spending was 20.8 percent of GDP in 2013, and the CBO projects that the ratio will gradually rise to 21.4 percent by 2019. Federal debt held by the public as a share of GDP is 74 percent this year—more than twice the Canadian level.
On federal fiscal policy, Canada has had pragmatic centrist leadership for the last two decades, with voters keeping the loony left out of power. In the United States, we’ve had power divided between centrist Republicans and loony left Democrats in recent years.
Actually, the federal leadership of both U.S. parties is loony. The debt crisis in Europe illustrated that endlessly running large deficits when government debt is already high is dangerous. It is playing with fire. Yet congressional majorities have recently signed off on a big-spending appropriations deal, a big-spending farm bill, and a debt limit bill that does nothing to combat runaway red ink.
Pundits often claim that the Republicans are controlled by radical Tea Party elements. I wish that were true, but in terms of policy results there is no evidence of it. Republican and Democratic leaders are apparently satisfied with federal spending, deficits, and debt far larger than acceptable to the centrists in Canada.
The chart shows the remarkable gap in federal spending between the two countries in recent years.
Andrew M. Grossman
Faulting the IRS for attempting to “unilaterally expand its authority,” the D.C. Circuit today affirmed a district court decision tossing out the agency’s tax-preparer licensing program. Under the program, all paid tax-return preparers, hitherto unregulated, were required to pass a certification exam, pay annual fees to the agency, and complete 15 hours of continuing education each year.
The program, of course, had been backed by the major national tax-return preparers, chiefly as a way of driving up compliance costs for smaller rivals and pushing home-based “kitchen table” preparers out of business. Dan Alban of the Institute for Justice, lead counsel to the tax preparers challenging the program, called the decision “a major victory for tax preparers—and taxpayers—nationwide.”
The licensing program was not only a classic example of corporate cronyism, but also of agency overreach. IRS relied on an 1884 statute empowering it to “regulate the practice of representatives or persons before [it].” Prior to 2011, IRS had never claimed that the statute gave it authority to regulate preparers. Indeed, in 2005, an IRS official testified that preparers fell outside of the law’s reach.
But IRS reversed course in 2011. The problem, Judge Kavanaugh’s opinion for the court explains, is not that the agency changed its mind but that its action had no basis in the text of the statute. Preparers are not “representatives” because they have no authority to act on behalf of the taxpayer, who is still responsible for signing his or her own return. Preparers also aren’t engaged in “practice…before” IRS because they do not present any sort of case to the agency, as in an investigation or hearing. And finally, the court observed that IRS’s broad view of the statute would render superfluous other statutes that do allow the agency to impose penalties on preparers for certain conduct.
A victory for liberty in itself, the decision may have broader legal import in two respects. First, it embraces the concept that, while Congress may delegate broad authority to agencies, “courts should not lightly presume congressional intent to implicitly delegate decisions of major economic or political significance to agencies.” This principle, applied most forcefully in the Supreme Court’s 2000 FDA v. Brown & Williamson decision, is one that the D.C. Circuit has lately declined to apply in big-ticket challenges to agency action, such as EPA’s greenhouse gas regulatory scheme. It may well come in handy as the Obama Administration carries out an aggressive second-term agenda through executive action, often at odds with its statutory authority.
The second value of the decision is in illustrating the duty of courts to take statutory text seriously even while deferring to agencies on policy decisions. This was the issue that confronted the Supreme Court last term in City of Arlington v. FCC—which I wrote about here—and Justice Scalia’s majority opinion drew substantial criticism for holding that agencies’ interpretations of the scope of their own jurisdiction are due the same deference as anything else. But Scalia’s point was not that agencies are free to do as they please, with no real judicial check, only that courts should not place a thumb on the scale one way or the other concerning statutory authority. Courts’ heavy lifting, Justice Scalia explained, is statutory interpretation (for legal geeks, Chevron step one); the policy questions are left to the political branches.
Judge Kavanaugh’s opinion does just that, and should be a model to courts (particularly his own) in how to balance respect for the other branches with the rule of law.
Volume 15 of the Collected Works of F. A. Hayek has just been published by the University of Chicago Press. This volume, edited by series editor and Hayek biographer Bruce Caldwell, is The Market and Other Orders. It contains many of Hayek’s most important papers:
- The Use of Knowledge in Society
- The Meaning of Competition
- The Results of Human Action but Not of Human Design
- Competition as a Discovery Procedure
- The Pretence of Knowledge, his Nobel Prize lecture
- and The Political Ideal of the Rule of Law, lectures delivered in Egypt in 1954-55 that served as early drafts of chapters 11, 12, 13, 14, and 16 of The Constitution of Liberty
That’s only the beginning in this impressive volume, which should be of interest to any Hayek scholar, and indeed any student of economics or complex social orders.
Lawrence Summers, former secretary of the Treasury and president of Harvard, said in an interview for The Commanding Heights, Daniel Yergin and Joseph Stanislaw’s 1998 study of the resurgence of economic liberalism,
What’s the single most important thing to learn from an economics course today? What I tried to leave my students with is the view that the invisible hand is more powerful than the hidden hand. Things will happen in well-organized efforts without direction, controls, plans. That’s the consensus among economists. That’s the Hayek legacy.
This volume is a great introduction to those key ideas.
Daniel J. Ikenson

Media have been reporting lately about the public’s burgeoning opposition to Congress granting President Obama fast track trade negotiating authority. Among the evidence of this alleged opposition is a frequently cited survey, which finds that 62 percent of Americans oppose granting fast track to President Obama. Considering that the survey producing that figure was commissioned by a triumvirate of anti-trade activist groups – the Communication Workers of America, the Sierra Club, and the U.S. Business and Industry Council – I had my doubts about the accuracy of that claim. After all, would lobbyists who devote so much of their efforts to derailing the trade agenda risk funding a survey that might produce results contrary to their objectives? My skepticism – it turns out – was warranted. The 62 percent who allegedly “oppose giving the president fast-track authority for TPP [the Trans-Pacific Partnership agreement]” actually oppose a description of fast track that is woefully inaccurate. The graphic below shows the question and response tally, as presented in the report showing the survey’s results, which is here. Read the question that begins with “As you may know…”
Convinced? Are you with the 62 percent? I would be, if fast track were really as the question implies. But the question includes an incomplete and misleading description of fast track. The question is being asked, presumably, of a random sample of Americans, which means that the average respondent has no idea about the purpose of fast track, and knows even less about its language and details. Thus, the phrasing of the question is highly determinative of the answer.

The thrust of fast track as implied by the question above is that Congress has no role in the process whatsoever and sits by passively while the president negotiates deals to his liking, submits them to Congress, and says take it or leave it. Most thinking people who cherish our republican form of government should and would oppose legislation that sanctions the abdication of responsibility from one branch of government to another. But that’s not what fast track does.

Under the Constitution, Congress is authorized to regulate foreign commerce and the Executive is authorized to make treaties. Negotiating, finalizing, and ratifying trade treaties involve both sets of authorities. The survey question, however, leaves out entirely the balancing term of this traditional sharing of authority. It is true that under the terms of fast track, Congress agrees to a timely, up-or-down vote without amendments. But that happens only after the Congress has conveyed its negotiating objectives and parameters to the president. Under the recently introduced legislation to restore fast track, Congress is demanding that 147 negotiating objectives be met before allowing fast track consideration of any trade deals that come before it. If those objectives are not met, consideration of the trade deal is taken off the fast track and subject to normal procedures.
By presenting a severely misleading (and more menacing) definition of fast track to its survey respondents, and then representing and publicizing the results as the attitudes of Americans toward fast track, the survey designers (Hart Research Associates and Chesapeake Beach Consulting) and sponsors have done the public a major disservice. As a result, we are further from an informed debate than we were before the survey was conducted.
K. William Watson
There’s plenty of criticism flying around about the new farm bill. It spends unprecedented amounts of money to prop up one of the most successful industries in the country. It uses Soviet-style central planning to maintain food prices and make rich farmers richer. Its commodity programs distort trade in violation of global trade rules.
But this year’s farm bill had the potential to mitigate some of these sins by repealing a number of high-profile protectionist regulations. Despite a few close calls, however, the final version of the bill kept these programs in place, exposing the United States to possible retaliation.
One of those programs is the mandatory country-of-origin labeling (COOL) law, first imposed by the 2002 farm bill. Ostensibly designed to increase consumer awareness, the program’s true impact is to push foreign-born cattle out of the market. The law requires meat packers to track, and process separately, cattle that were born and/or raised for some time in Canada. The added expense benefits a portion of U.S. cattle ranchers at the expense of the meat industry as a whole.
The negative impact on the Canadian and Mexican cattle industries was enough to prompt a complaint at the WTO. After the United States lost that case, the administration amended the regulation. But the new regulation, rather than bringing the United States into compliance, actually makes the law even more protectionist. Canada has made clear its intention to impose barriers on a wide range of U.S. products in retaliation.
Repealing this disastrous regulation through the farm bill was discussed during numerous stages of the legislative process, but no language on COOL was ever added to the bill.
Another program that could have been fixed by the farm bill was a bizarrely redundant and wholly unnecessary catfish inspection regime. The new system would cost an estimated $14 million per year to administer and (by the USDA’s own admission) would do nothing to improve the safety of catfish. However, the institutional requirements imposed on catfish farmers to comply with the new regime would all but eliminate Vietnamese competitors from the market. The U.S. catfish industry and its allies in Congress are all for it.
Even though both houses of Congress had at one point or another passed bills repealing the new catfish regime, the final bill that came out of conference kept the redundant system in place.
The inspection issue has complicated negotiation of the Trans-Pacific Partnership, of which Vietnam will be a member, and could become the basis of a complaint at the World Trade Organization.
In the words of Sen. Mike Lee, the farm bill is “a monument to Washington dysfunction, and an insult to taxpayers, consumers, and citizens.” It is also the most popular vehicle for imposing protectionist regulations that serve a small set of businesses at the expense of the national economy.
There was hope that this bill could roll back some of the damage done in the past, at least for a handful of odious regulations. That hope was sorely misplaced.
John McGinnis has some kind words for work I oversee here at Cato in a recent blog post of his entitled: “The Internet–A Technology for Encompassing Interests and Liberty.”
As he points out, the information environment helps determine outcomes in political systems because it controls who is in a position to exercise power.
The history of liberty has been in no small measure the struggle between diffuse and encompassing interests, on the one hand, and special interests, on the other. Through their concentrated power, special interests seek to use the state to their benefit, while diffuse interests concern the ordinary citizen or taxpayer, or in William Graham Sumner’s arresting phrase, The Forgotten Man. When the printing press was invented, the most important special interests were primarily the rulers themselves and the aristocrats who supported them. The printing press allowed the middle class to discover and organize around their common interests to sustain a democratic system that limited the exactions of the oligarchs.
But the struggle between diffuse and special interests does not disappear with the rise of democracy. Trade associations, farmers’ associations and unions have leverage with politicians to obtain benefits that the rest of us pay for. As a successor to the printing press, however, the internet advances liberty by continuing to reduce the cost of acquiring information. Such advances help diffuse groups more than special interests.
The Internet is the new printing press, and we’re generating data here at Cato that should allow it to have its natural, salutary effects for liberty.
My favorite current example is the “Appropriate Appropriations?” page published by the Washington Examiner. It allows you to easily see what representatives have introduced bills proposing to spend taxpayer money, information that—believe it or not—was hard to come by until now.
In John McGinnis, we have a legal scholar who recognizes the potential ramifications for governance of our entry into the information age. Read his whole post and, for more in this area, his book, Accelerating Democracy: Transforming Governance Through Technology.
Last week, the Supreme Court of Michigan rejected a legal challenge to the Michigan Medical Marihuana Act (MMMA). Although limited to the state of Michigan, this precedent helps to build momentum for other states to move in the direction of marijuana legalization.
By way of background, in 2008 Michigan voters approved a state initiative allowing medical marijuana for certain qualifying patients. In 2010, the City of Wyoming enacted an ordinance that essentially prohibited marijuana (no medical exceptions). John Ter Beek, a resident of the City of Wyoming, claimed that he was a qualifying patient under the state law and argued that the state law preempted the city ordinance. Lawyers for the City of Wyoming responded that the state law was itself invalid because it violated the supremacy clause of the federal Constitution. That is, since federal law (the Controlled Substances Act (CSA)) prohibits the possession of marijuana, no state can change its law to allow marijuana sales, or even possession.
The Supreme Court of Michigan unanimously sided with John Ter Beek. Writing for the court, Justice McCormack said, “[The MMMA] provides that, under state law, certain individuals may engage in certain medical marijuana use without risk of penalty…while such use is prohibited under federal law, [MMMA] does not deny the federal government the ability to enforce that prohibition, nor does it purport to require, authorize, or excuse its violation.” Thus, there is no violation of the federal supremacy doctrine.
Recall that after Colorado and Washington approved initiatives to legalize marijuana, some former DEA administrators argued that those initiatives were invalid under the federal supremacy clause. (One even said it was a ‘no-brainer.’) The Obama administration declined to bring such a challenge, and we will be hearing that argument less and less as these precedents pile up.
Michael F. Cannon
Over at DarwinsFool.com, I summarize a lengthy report issued by two congressional committees on how the Treasury Department, the Internal Revenue Service, and the Department of Health and Human Services conspired to create a new entitlement program that is authorized nowhere in federal law. Here’s an excerpt in which I summarize the summary:
Here is what seven key Treasury and IRS officials told investigators.
In early 2011, Treasury and IRS officials realized they had a problem. They unanimously believed Congress had intended to authorize certain taxes and subsidies in all states, whether or not a state opted to establish a health insurance “exchange” under the Patient Protection and Affordable Care Act. At the same time, agency officials recognized: (1) the PPACA plainly does not allow those taxes and subsidies in non-establishing states; (2) the law’s legislative history offers no support for their theory that Congress intended to allow them in non-establishing states; and (3) Congress had not given the agencies authority to treat non-establishing states the same as establishing states.
Nevertheless, agency officials agreed, again with apparent unanimity, to impose those taxes and dispense those subsidies in states with federal Exchanges, the undisputed plain meaning of the PPACA notwithstanding. Treasury, IRS, and HHS officials simply rewrote the law to create a new, unauthorized entitlement program whose cost “may exceed $500 billion dollars over 10 years.” (My own estimate puts the 10-year cost closer to $700 billion.)
The full post details some pretty stunning examples of how agency officials were derelict in their duty to faithfully execute the laws Congress enacts.
Manhattan U.S. attorney Preet Bharara claimed another victory in his crusade against “insider trading,” a practice he once called “pervasive.” Last week he won a conviction against Mathew Martoma, formerly at SAC Capital.
Another big scalp was hedge fund billionaire Raj Rajaratnam, convicted in 2011 and sentenced to 11 years in prison. A decade ago Martha Stewart was convicted of obstruction of justice in an insider trading case.
Objectively, the insider trading ban makes no sense. It creates an arcane distinction between “non-public” and “public” information. It presumes that investors should possess equal information and never know more than anyone else.
It punishes traders for seeking to gain information known to some people but not to everyone. It inhibits people from acting on and markets from reacting to the latest information.
Martoma was alleged to have gotten advance notice of the test results for an experimental drug. Martoma then was accused of recommending that SAC dump its stock in the firms that were developing the pharmaceutical.
If true, SAC gained an advantage over other shareholders. But why should that be illegal? The doctor who talked deserved to be punished for his disclosure. However, Martoma’s actions hurt no one.
SAC avoided losses suffered by other shareholders, but those shareholders would have lost money regardless of SAC’s sale. Even the buyers of SAC’s shares had no complaint: they wanted to purchase based on the information available to them and would have bought the shares from someone else had SAC not sold.
Of course, some forms of insider trading are properly criminalized—typically when accompanied by other illegal actions, such as fraudulently misrepresenting information to buyers or sellers. However, because stock market participants usually are anonymous, in most cases it would be impossible to offer fraudulent assurances even if one wanted to.
The government has regularly expanded the legal definition of insider trading. For instance, in 1985 the government indicted a Wall Street Journal reporter for leaking his “Heard on the Street” columns to a stockbroker before publication.
Doing so might have violated newspaper policy, but that was a problem for the Journal, not the U.S. attorney. The information was gathered legally; the journalist had no fiduciary responsibility concerning the material; there was nothing proprietary about the scheduled columns.
Other cases also have expanded Uncle Sam’s reach. Information is currency on Wall Street and is widely and constantly traded. Punishing previously legitimate behavior after the fact unfairly penalizes individual defendants and disrupts national markets.
As applied, the insider trading laws push in only one direction: they punish action. It is virtually impossible to penalize someone for not acting, even if he or she did so in reliance on inside information. This government bias against action, whether buying or selling, is unlikely to improve investment decisions or market efficiency.
Indeed, it is impossible to equalize information. Does anyone believe that financial markets ever will be a level playing field?
Wall Street professionals are immersed in the business and financial worlds. A part-time day trader knows more than the average person who invests haphazardly. Even equal information is not enough: it must be interpreted, and people vary widely in their experience and ability to do so, as well as in their access to those better able to interpret it.
A better objective for regulators would be to encourage markets to adjust swiftly to all the available information. Speeding the process most helps those with the least information, since they typically have the least ability to play the system.
Regulators speak of the need to protect investor confidence. But is there really any small investor who believes that imprisoning Martoma makes him or her equal on Wall Street? How many people put more money in their mutual fund because of the war on insider trading?
Enforcing insider trading laws does more to advance prosecutors’ careers than protect investors’ portfolios. Information will never be perfect or equal. However, adjustments to information can be more or less smooth and speedy. Washington should stop criminalizing actions which ultimately yield more benefits than costs to the rest of us.
The Tyranny of Good Intentions: How Politicians Waste Money, and Sometimes Kill People, With Kindness
If logic decided policy in Washington, federal spending would be low, the budget would be balanced, the benefits of regulations would exceed the costs, and policymakers would guard against unintended consequences. Unfortunately, the nation’s capital is largely impervious to logic, and the tragic results are obvious for all to see.
Emotion and intention seem to have become the principal determinants of government policy. People are poor. Increase the minimum wage. Not everyone can afford a home. Create a dozen housing subsidy programs.
Never mind the consequences as long as the officials involved mean well and their ideas sound good. No need to detain our leaders on white horses, who have other crusades to lead.
This widespread inability to compare consequences to intentions is a basic problem of humanity. In fact, it’s one of the reasons the Founders desired to limit government power and constrain politicians.
For instance, the newly created federal government possessed only limited, enumerated powers. Even if you had weird ideas for transforming the American people, it wouldn’t do you much good to get elected president or to Congress. The federal government wasn’t authorized by the Constitution to engage in soul-molding.
Moreover, there would be strong resistance to any attempt to expand federal power. The constitutional system preserved abundant state authority. Three federal branches offered “checks and balances” to abusive officials or majorities.
Most important, the majority of Americans shared the Founders’ suspicions. At the end of the 19th century a Democratic president still was willing to veto unemployment relief because he believed Congress had no authority to approve such a bill.
However, over the following century and more, virtually every limitation on Washington was swept away. Equally important, as faith in religion ebbed, faith in politics exploded. Today those who think with their hearts rather than their minds have largely taken control of the nation’s policy agenda.
Nowhere has this been more destructive than in the area of poverty. How to deal with the poor who, Christ told us, would always be with us?
As Charles Murray demonstrated so devastatingly three decades ago in his famous book, Losing Ground, ever expanding federal anti-poverty initiatives ended up turning poor people into permanent wards of Washington. Worse, unconditional welfare benefits turned out to discourage education, punish work, inhibit marriage, preclude family formation, and, ultimately, destroy community. It took the 1996 reforms to reverse much of the culture of dependency.
Similar is the minimum wage, which may become a top election issue this fall. Unless businesses are charities, raising the price of labor will force them to adjust their hiring. How many low-skilled workers will be hired if employers are told to pay more than the labor is worth? There isn’t much benefit in having a theoretical right to a higher paying job if you are not experienced or trained enough to perform it.
There are similar examples in the regulatory field. No one wants to take unsafe, ineffective medicines. So the Food and Drug Administration was tasked with assessing the safety and efficacy of new compounds before they can be released. The intention is good, but ignores the inescapable trade-off between certainty and speed.
The rise of AIDS brought the problem into stark relief, as people faced an ugly death while the bureaucratic, rules-bound FDA denied them the one effective medicine, AZT, in order to make sure it didn’t have harmful side effects. Years before, the agency had held up approval of beta-blockers, killing people lest they suffer some lesser harm from taking the drug.
Few people in politics fail to claim to be acting for the public good. In many cases they really believe it. But good intentions are never enough. Consequences are critical. What you intend often doesn’t matter nearly as much as what you actually accomplish.
President Obama has been expressing inordinate alarm about differences between income groups, and about mobility between such groups over time. “The combined trends of increased inequality and decreasing mobility,” he says, “pose a fundamental threat to the American Dream, our way of life, and what we stand for.”
A fundamental limitation of annual income distribution figures is that income in any given year may not be at all typical of a family’s normal or lifetime income. Job loss or illness can push one year’s income well below normal, for example, and asset sales can produce one-time windfalls. People are commonly much poorer when young than they are by middle age, after accumulating experience and savings. For such reasons, the President’s strong opinions about “decreasing mobility” could be important, if true.
We need to separate two concepts of mobility. One is intergenerational mobility – whether “a child born into poverty … may never be able to escape that poverty,” as the President put it. Another involves intertemporal mobility – whether starting with a low wage at your first job supposedly impedes moving up the ladder of opportunity.
The President’s opinion that intergenerational mobility has declined was rigorously debunked by Raj Chetty, Emmanuel Saez and others. As for inequality and mobility being related, they also found that, “the top 1 percent share is uncorrelated with upward mobility [p. 40].” Moreover, “The fraction of children living in single-parent households is the strongest correlate of upward income mobility among all the variables we explored [p.45].” Since other countries have fewer single-parent households, this is just one reason for being wary of facile international comparisons.
Intertemporal mobility is not about links between parents and children, but about the ease with which individuals move from a lower to a higher income group, and vice-versa. Are we stuck with the same paycheck we had just after leaving school, or can we move up with effort, experience, learning and saving? Did having a big gain in the stock market in 2007 ensure that would happen again in 2008-2009?
The Federal Reserve Board’s Survey of Consumer Finances (SCF) tracks income mobility of the same families over time. It turns out that mobility is surprisingly hectic even over short periods.
Table A, adapted from the latest SCF report, shows changes in income by quintile (fifth) between 2007 and 2009. For example, only 45.1 percent of families with incomes in the middle fifth of the distribution in 2009 were also in the middle in 2007 (indicated by the bold font along the diagonal). Among the rest, slightly more moved up from a lower income group (28.9 percent) than slipped down from a higher group (25.9 percent).
With 50-55 percent of middle-income families changing places in just two years, there is obviously no shortage of “mobility” during recessions. This highlights one of two common fallacies in studies purporting to show that mobility was “higher” in some other time or place.
Studies about changes in mobility over the years often make no distinction between moving up and moving down. Is a quicker game of musical chairs really a “fairer” game? If so, deep recessions are the fairest years of all.
Periods with booms and busts such as 1969 to 1982 appear far more “mobile” in terms of movements between income groups than periods of stability and prosperity such as 1983 to 2000. As I wrote in Income and Wealth (p. 174), “The pace at which families moved between income quintiles over seven to ten years may tell us something about how volatile the economy was, but it provides no information about anyone’s ease or difficulty of earning a higher income.”
A second fallacy among mobility studies is to express concern that the pace at which families move between quintiles appears slower among top and bottom quintiles than it does in the middle. As the SCF report explains, however, “The movements of families across income groups in two years was more substantial for the three central percentile groups than for families with incomes in the two extreme groups, in part because families in one of the extreme groups could move in only one direction [emphasis added].”
The middle three quintiles are defined by both a floor and a ceiling; movement in or out of the top or bottom income groups, by contrast, can be in only one direction. Anyone in a top income group in any particular year must have been in either the same or a lower group in previous years, because there is no higher group to move down from. Anyone in a bottom income group must likewise have been in either the same or a higher group in previous years, because there is no lower group to move up from.
This simple mathematical distinction has led many careless observers to deplore the illusory fact that there appears to be less “mobility” among rich and poor than there is in the middle. Rather than indicating that the poor are stuck at the bottom and the rich secure at the top, this is simply the unavoidable consequence of the fact that families at the top and bottom can move in only one direction. Like so much overheated rhetoric about inequality and mobility, this is just another example of people forming extremely strong opinions on the basis of extremely weak logic and evidence.
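That boundary effect is easy to demonstrate with a toy simulation. The sketch below is an illustration of the general point, not the SCF’s data or method: it assumes every family receives the same symmetric random income shock, yet the top and bottom quintiles still look “stickier” than the middle, purely because they can be exited in only one direction.

```python
import bisect
import random

# Toy simulation (illustrative assumptions, not the SCF's data): every
# family gets an identically distributed symmetric income shock, yet the
# extreme quintiles show more "persistence" than the middle ones.
random.seed(0)
N = 100_000
year1 = [random.gauss(0, 1) for _ in range(N)]
year2 = [x + random.gauss(0, 1) for x in year1]  # same shock process for all

def quintile(value, sorted_values):
    """Rank-based quintile: 0 = bottom fifth, ..., 4 = top fifth."""
    rank = bisect.bisect_left(sorted_values, value)
    return min(4, rank * 5 // len(sorted_values))

s1, s2 = sorted(year1), sorted(year2)
stay = [0] * 5   # families remaining in their starting quintile
count = [0] * 5  # families starting in each quintile
for a, b in zip(year1, year2):
    q = quintile(a, s1)
    count[q] += 1
    stay[q] += (q == quintile(b, s2))

persistence = [stay[q] / count[q] for q in range(5)]
# The extremes persist more than the middle, despite identical shocks.
print(persistence[0] > persistence[2] and persistence[4] > persistence[2])
```

This matches the SCF’s caveat: even when everyone faces the same odds of a raise or a pay cut, a middle-fifth family can exit its bracket in two directions, while families at the top or bottom can move only one way.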
In case you missed it, in his Bloomberg column last week, law professor and former Obama administration OIRA head Cass Sunstein offered tips on “How to Spot a Paranoid Libertarian.” They’re people who “have a wildly exaggerated sense of risks to liberty, who adopt a presumption of bad faith on the part of government, who have a sense of victimization, who ignore the problem of tradeoffs, and who love slippery-slope arguments.” I probably know some folks who resemble that remark.
In the column and a follow-up blogpost, Sunstein distinguishes between “Paranoid Libertarians” and libertarians in general, who are “speaking on behalf of an important strand in America’s political culture.” And he’s right that virtually all ideologies, libertarianism included, attract some swivel-eyed, conspiratorial adherents who use too much ALLCAPS in their emails.
What Sunstein doesn’t have is anything resembling a case that “libertarian paranoia” is worth worrying about. In fact, beyond a few anodyne statements like “paranoia isn’t a good foundation for public policy,” he barely tries to make one.
Sunstein does approvingly cite Adrian Vermeule’s paper on “libertarian panics.” I remember that paper very well, having blogged a fairly lengthy critique of it when it came out. It hasn’t improved with age.
The basic argument is plausible enough: Vermeule holds that the same biases and cognitive flaws that can make Americans hysterical about the risk of terror can also make us hysterical about the risks of government abuse. Thus, the salience of past examples of government overreaction to security threats—like the internment of Japanese Americans during World War II—could lead us to overreact to liberty threats from government in the same way we might overreact to terrorist threats to security.
But when Vermeule gets to specific examples of destructive “libertarian panics,” there’s very little there there. The paper offers two: the American Revolution and the PATRIOT Act.
True, the Founders could be somewhat overeager to sniff out “design[s] to reduce them under absolute Despotism” (as the Declaration puts it) in every abuse perpetrated by the Crown. But when your lead example of irrational “tyrannophobia” is the country’s very Founding, you may have an uphill slog convincing Americans to panic about libertarian panics.
As for his second key example, in the light of subsequent developments, Vermeule’s discussion of the PATRIOT Act—opposition to which he characterized as “ignorant,” “irrational,” and “even hysterical”—now looks tragicomically off-base:
[C]onsider Section 215 of the Act, which allows courts to issue subpoenas for business records in national security investigations. Many have denounced the provision as a mechanism of governmental oppression. Yet the provision codifies a power that grand juries (typically dominated by prosecutors) have long exercised without judicial oversight.
Back then, the paranoids panicked about the government using 215 to get library records; hardly anyone thought the federal government would secretly invoke it for bulk collection of every American’s phone records and construction of what Sen. Ron Wyden (D-OR) has called “a federal human-relations database.”
I’m reminded of what Johns Hopkins cryptography professor Matthew Green recently wrote about the Snowden revelations: “I’m no longer the crank. I wasn’t even close to cranky enough.” (See “Crypto prof asked to remove NSA-related blogpost.”)
But you don’t have to be a “Paranoid Libertarian” to worry about potential abuse of the NSA’s expanded powers, or to question how useful those powers are to Americans’ security. I mean, as sober and reasonable a fellow as Cass Sunstein has recently done just that as a member of the president’s post-Snowden NSA Review Group. So it’s strange that he apparently finds Vermeule’s paper convincing.
Then again, the two share some strange views on policy. Sunstein and Vermeule are occasional coauthors, most notably on a 2008 examination of “Conspiracy Theories.” Some of these theories are dangerous, they write: they can “create or fuel violence,” and “if government can dispel such theories, it should do so.”
How? “Our main policy idea is that government should engage in cognitive infiltration of the groups that produce conspiracy theories” [emphasis in original]. Government agents, possibly operating “anonymously or even with false identities,” could “enter chat rooms, online social networks, or even real-space groups and attempt to undermine percolating conspiracy theories by raising doubts about their factual premises, causal logic or implications for political action.”
Now, it seems to me that if you wanted to breathe new life into conspiracy theories, a great way to do that would be to encourage the impression that people making rational arguments against them are government agents. But it’s a great illustration of the point Jesse Walker makes in his 2013 book The United States of Paranoia: A Conspiracy Theory: that elite fear of alleged “cranks” is a potent political force in American life. As he cautions, “the most significant sorts of political paranoia are the kinds that catch on with people inside the halls of power, not the folks on the outside looking in.”
Daniel J. Mitchell
My main goal for fiscal policy is shrinking the size and scope of the federal government and lowering the burden of government spending. But I’m also motivated by a desire for better tax policy, which means lower tax rates, less double taxation, and fewer corrupting loopholes and other distortions.
One of the big obstacles to good tax policy is that many statists think that higher tax rates on the rich are a simple and easy way of financing bigger government. I’ve tried to explain that soak-the-rich tax policies won’t work because upper-income taxpayers have considerable ability to change the timing, level, and composition of their income.
Simply stated, when the tax rate goes up, their taxable income goes down. And that means it’s not clear whether higher tax rates lead to more revenue or less revenue. This is the underlying principle of the Laffer Curve. For more information, here’s a video from Prager University, narrated by UCLA economics professor Tim Groseclose: “Lower Taxes, Higher Revenue.”
Groseclose does an excellent job, and I particularly like the data showing that the rich paid more to the IRS following Reagan’s tax cuts.
But I do have one minor complaint: The video would have been even better if it emphasized that the tax rate shouldn’t be at the top of the “hump.” Why? Because as tax rates get closer and closer to the revenue-maximizing point, the economic damage becomes very significant. Here’s some of what I wrote about that topic back in 2012.
[L]abor taxes could be approximately doubled before getting to the downward-sloping portion of the curve. But notice that this means that tax revenues only increase by about 10 percent. …[T]his study implies that the government would reduce private-sector taxable income by about $20 for every $1 of new tax revenue. Does that seem like good public policy? Ask yourself what sort of politicians are willing to destroy so much private sector output to get their greedy paws on a bit more revenue.
The key point to remember is that we want to be at the growth-maximizing point of the Laffer Curve, not the revenue-maximizing point.
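The arithmetic behind this point can be sketched with a toy model. This is a stylized illustration, not the study quoted above: the elasticity of taxable income used below is an assumed value chosen only to exhibit the shape of the argument.

```python
# Stylized Laffer-curve sketch. The elasticity here is a hypothetical
# illustrative value, not an estimate from any study.
def taxable_income(rate, base=100.0, elasticity=2.0):
    """Taxable income shrinks as the tax rate rises."""
    return base * (1.0 - rate) ** elasticity

def revenue(rate):
    return rate * taxable_income(rate)

# Revenue rises, peaks, then falls: the familiar "hump."
rates = [i / 100 for i in range(1, 100)]
peak = max(rates, key=revenue)  # revenue-maximizing rate

def income_lost_per_dollar(rate, step=0.01):
    """Private taxable income destroyed per extra dollar of revenue raised."""
    d_rev = revenue(rate + step) - revenue(rate)
    d_inc = taxable_income(rate) - taxable_income(rate + step)
    return d_inc / d_rev

print(round(peak, 2))
print(round(income_lost_per_dollar(0.10), 1))         # far from the peak
print(round(income_lost_per_dollar(peak - 0.02), 1))  # just below the peak
```

With these assumed numbers, a rate hike far from the peak destroys a few dollars of taxable income per dollar of revenue, while a hike just below the peak destroys dozens of dollars per dollar raised, which is exactly why the growth-maximizing point sits well to the left of the hump’s top.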
P.S.: Here’s my own video on the Laffer Curve: “The Laffer Curve, Part I: Understanding the Theory.”
Since it was basically a do-it-yourself production, the graphics aren’t as fancy as the ones you find in the Prager University video, but I’m pleased that I emphasized on more than one occasion that it’s bad to be at the revenue-maximizing point on the Laffer Curve.
Not as bad as raising rates even higher, as some envy-motivated leftists would prefer, but still an example of bad tax policy.
From the NY Times:
Flying doesn’t come cheaply these days, particularly on long-haul flights across the Atlantic. But Norwegian Air Shuttle, which specializes in low-cost flights within Europe, plans to bring its pared-down model to the United States and Asia.
Its strategy, however, comes with a few twists: Norwegian is moving its long-haul operations from Norway to Ireland, basing some of its pilots and crew in Bangkok, hiring flight attendants in the United States, and flying the most advanced jetliner in service—the Boeing 787 Dreamliner. In the process, it has infuriated established carriers and pilots.
… Norwegian started flying between Oslo and Kennedy Airport in New York in May and has round-trip fares starting at $509. The second-lowest price found recently was $895 on United Airlines flying out of Newark Liberty International Airport. Norwegian plans to add more than a dozen new routes this year, including direct service from London to New York and Copenhagen to Fort Lauderdale, Fla., once regulators approve its new registration in Dublin.
Not surprisingly, there is resistance:
But Norwegian’s novel model has raised stiff opposition from American labor groups, airlines and pilots who see it as a backhanded attempt to outsource cheaper labor and undercut competition. Norwegian, these critics argue, is unfairly taking advantage of an open-skies agreement between the United States and Europe even though Norway is not a member of the European Union.
I’m always shocked by the price of flights to Europe, so best of luck to Norwegian as it tries to navigate this regulatory process and bring lower fares to consumers.
To encourage the purchase of health insurance, the Affordable Care Act added a number of deductions, exemptions, and penalties to the federal tax code. As might be expected from a 2,700-page law, these new tax laws have the potential to interact in unforeseen and counterintuitive ways. As first discovered by Michael Cannon and Jonathan Adler, one of the new tax provisions, when combined with state decisionmaking and Internal Revenue Service rulemaking, has given Obamacare yet another legal problem.
Here’s the deal: The legislation’s §1311 provides a generous tax credit for anyone who buys insurance from an insurance exchange “established by the State.” The provision was supposed to be an incentive for states to create their own exchanges, but only 16 states have opted to do so. In the other states, the federal government established its own exchange, as another section of the ACA specifies. But where §1311 only explicitly authorized a tax credit for people who buy insurance from a state exchange, the IRS issued a rule interpreting §1311 as also applying to purchases from federal exchanges.
This creative interpretation most obviously hurts employers, who are fined for every employee who receives such a tax credit/subsidy to buy an exchange plan when their employer fails to comply with the mandate to provide health insurance. But it also hurts some individuals, such as David Klemencic, a lead plaintiff in one of the lawsuits challenging the IRS’s tax-credit rule. Klemencic lives in a state, West Virginia, that never established an exchange, and for various reasons he doesn’t want to buy any of the insurance options available to him. Because buying insurance would cost him more than 8% of his income, he should be immune from Obamacare’s tax on the decision not to buy insurance. After the IRS expanded §1311 to subsidize people in states with federal exchanges, however, Klemencic could’ve bought health insurance for an amount low enough to again subject him to the tax for not buying insurance.
Klemencic and his fellow plaintiffs argue that they face these costs only because the IRS exceeded the scope of its powers by extending a tax credit not authorized by Congress. The district court rejected that argument, ruling that, under the highly deferential test courts apply to actions by administrative agencies, the IRS only had to show that its interpretation of §1311 was reasonable—which the court was satisfied it had.
Cato and the Pacific Research Institute have now filed an amicus brief supporting the plaintiffs on their appeal to the U.S. Court of Appeals for the D.C. Circuit. While it is manifestly the province of the judiciary to say “what the law is,” where the law’s text leaves no question as to its meaning—as is the case here with the phrase “established by the State”—it is neither right nor proper for a court to replace the laws passed by Congress with those of its own invention or the invention of civil servants. If Congress wants to extend the tax credit beyond the terms of the Affordable Care Act, it can do so by passing new legislation. The only reason for executive-branch officials not to go back to Congress for clarification, and instead legislate by fiat, is to bypass the democratic process, thereby undermining constitutional separation of powers.
This case ultimately isn’t about money, the wisdom of individual health care decisionmaking, or even political opposition to Obamacare. It’s about who gets to create the laws we live by: the democratically elected members of Congress or the bureaucrats charged with no more than executing the laws that Congress passes and the president signs.
Halbig v. Sebelius will be heard by the D.C. Circuit on March 25 (the same day that the Supreme Court hears the Hobby Lobby contraceptive-mandate cases).
Christopher A. Preble
I could not write that headline without chuckling to myself, but this is no laughing matter for some members of Congress. They are asking the Pentagon to describe what it would take to eliminate all risk in the world—or at least all the risks to the United States.
POLITICO’s “Morning Defense” reports that Rep. Duncan Hunter (R-CA) is calling on Secretary of Defense Chuck Hagel to return to the practice of submitting to Congress the list of “unfunded requirements” (i.e., all those things that the military services would want if they were unconstrained by budgets—and reality). Then-SecDef Bob Gates eliminated the practice in 2009.
“By not providing an unfunded requirements list,” Hunter wrote in a letter to Hagel, “the department and all of the service chiefs would be suggesting that the budget provides zero risk.”
Hunter’s letter reminded Morning D of a memorable exchange Hunter had with Gates in 2011. Basically, Hunter asked Gates how much money he’d need to reduce U.S. national security risk to zero.
“If I had a trillion dollar budget, I’d still have unfunded requirements. The services would still be able to come up with a list of things they really need,” Gates replied.
You can see a clip of the exchange here.
This exchange, and others like it, accurately describes the process of developing military requirements, notwithstanding Gates’ obvious skepticism. The legislation that calls on the Pentagon to produce a Quadrennial Defense Review every four years stipulates that the document not take budget numbers into account.
The end result is not a strategy document at all. It is a laundry list of horribles (without any sense of their likelihood) and an associated wish list of desired capabilities (without any sense that they will ever be used).
In the reality-based world, budgets compel prioritization—a differentiation between must-haves and nice-to-haves. Inevitably, some things are left off the list entirely. The defenders of the current model don’t want that to happen (and are still busy adding to the list), so they don’t want economic considerations to be taken into account.
Dwight David Eisenhower, one of Gates’ heroes (and Hagel’s too), disagreed:
Our problem is to achieve adequate military strength within the limits of endurable strain upon our economy. To amass military power without regard to our economic capacity would be to defend ourselves against one kind of disaster by inviting another. (State of the Union Address, February 2, 1953)
What would it cost to eliminate all the risk in the world? The fact that the question is even asked tells us a lot about the dreadful state of this country’s strategic dialogue.
One of President Obama’s favorite rhetorical tactics is to claim that there is no serious evidence pointing in any direction other than his preferred policy. The president had occasion to deploy this tactic in an interview earlier this week, when Bill O’Reilly asked him why he opposed school vouchers:
O’REILLY - The secret to getting a … good job is education. … Now, school vouchers is a way to level the playing field. Why do you oppose school vouchers when it would give poor people a chance to go to better schools?
PRESIDENT OBAMA - Actually — every study that’s been done on school vouchers, Bill, says that it has very limited impact if any —
O’REILLY - Try it.
PRESIDENT OBAMA - On — it has been tried, it’s been tried in Milwaukee, it’s been tried right here in DC —
O’REILLY [OVERLAP] - And it worked here.
PRESIDENT OBAMA - No, actually it didn’t. When you end up taking a look at it, it didn’t actually make that much of a difference. ... As a general proposition, vouchers has not significantly improved the performance of kids that are in these poorest communities —
The most charitable interpretation of the president’s blatantly false remarks is that he’s simply unaware that 11 of 12 gold-standard studies of school choice programs found a positive impact while only one found no statistically significant difference and none found a negative outcome. Jason Riley summarized the findings of a few recent studies:
A 2013 study by Matthew Chingos of the Brookings Institution and Paul E. Peterson of Harvard found that school vouchers boost college enrollment for blacks by 24%. A 2006 evaluation of a school choice program in Dayton, Ohio, found that “after two years, black voucher students had combined reading and math scores 6.5 percentile points higher than the control group.” A 2010 study in the Journal of Educational and Behavioral Statistics found that voucher recipients had math scores 5 points higher than the control group after just one year. A 2008 study of vouchers in Charlotte, N.C., found that “after one year, voucher students had reading scores 8 percentile points higher than the control group and math scores 7 points higher.”
What about the voucher programs in Milwaukee and Washington that Mr. Obama dismissed as ineffective? A 1998 Brookings Institution study found that “After four years, voucher students had reading scores 6 Normal Curve Equivalent (NCE) points higher than the control group, and math scores 11 points higher. NCE points are similar to percentile points.” And the Obama administration itself released a report on the D.C. voucher program in 2010. “The students offered vouchers graduated from high school at a rate 12 percentage points higher (82 percent) than students in the control group (70 percent), an impact that was statistically significant at the highest level,” according to a summary. “Students in three of six subgroups tested showed significant reading gains because of the voucher offer after four or more years.”
But even Obama’s own faulty reading of the evidence does not warrant opposing school choice. Putting aside the fact that voucher students are more likely to perform better academically and to graduate from high school, even if their academic outcomes were roughly the same as those of government school students, as Obama claims, he should still support school choice because it expands freedom, increases parental satisfaction, makes schools safer, and costs so much less.
According to the U.S. Census Bureau, Washington D.C. spends nearly $30,000 per pupil annually at its government-run schools, more than double the national average. For this “investment” it gets practically the worst schools in the nation. By contrast, low-income D.C. voucher students receive up to $8,256 in grades K-8 and up to $12,385 in grades 9-12.
Perversely, whereas the Obama administration ignores mountains of evidence when opposing school choice programs, the president once again promised universal pre-school in his State of the Union address despite the overwhelming evidence that federal preschool programs do not work. And that’s according to the federal government’s own research!
It takes a special kind of chutzpah for the Obama administration to repeatedly tout its “evidence-based approach” to policy when it so consistently adopts policies that run counter to the evidence.