In today’s issue of Nature, scientists from the National Ignition Facility (NIF) in California are trumpeting their advance in achieving fusion ignition. However, the National Ignition Facility is just like so many other projects from the Department of Energy. It’s behind schedule and over budget.
Approved by Congress in 1993, the lab did not officially open until 2009 after numerous delays. According to a 2000 report by the General Accounting Office (now the Government Accountability Office), “NIF’s cost increases and schedule delays were caused by poor Lawrence Livermore management and inadequate DOE oversight.”
The completed lab has cost taxpayers $5 billion, up from the initial estimates of $2.1 billion. It costs an additional $330 million to operate annually.
In 2009, scientists proclaimed that the NIF would achieve fusion within three years. Unsurprisingly for a government-funded project, NIF announced in 2012 that it failed to meet its goal. The New York Times said “the output of the experiments consistently fell short of what was predicted, suggesting that the scientists’ understanding of fusion was incomplete.”
NIF announced that it would spend the next three years trying to evaluate why it hadn’t achieved ignition. But in a moment of honesty in its report to Congress, NIF conceded “it is too early to assess whether or not ignition can be achieved at the National Ignition Facility.”
So while some news reports herald the advance at NIF as bringing it closer to achieving a sun-like power source, the project is still years away from its goal and billions over budget.
How do you know the Common Core is in trouble? You could religiously follow the news in New York, Indiana, Florida, and many other states. Or you could read just two new op-eds by leading Core supporters who fear their side is getting bludgeoned. Not bludgeoned in the way they describe – an education hero assaulted by kooks and charlatans – but clobbered nonetheless. As Delaware governor Jack Markell (D) and former Georgia governor Sonny Perdue (R) put it:
This is a pivotal moment for the Common Core State Standards.
Although 45 states quickly adopted the higher standards created by governors and state education officials, the effort has begun to lose momentum. Some are now wavering in the face of misinformation campaigns from people who misrepresent the initiative as a federal program and from those who support the status quo. Legislation has been introduced in at least 12 states to prohibit implementation and states have dropped out of the two major Common Core assessment consortia.
Sadly, Markell and Perdue’s piece, and one from major Core bankroller Bill Gates, illustrate why the Core may well be losing: Defenders offer cheap characterizations of their opponents while ignoring basic, crucial facts. Meanwhile, the public is learning the truth.
Both pieces employ the most hulking pro-Core deception, completely ignoring the massive hand of Washington behind state Core adoption. For all intents and purposes, adoption was compulsory to compete in the $4.35-billion Race to the Top program, a part of the “stimulus” at the nadir of the Great Recession. While some states may have eventually adopted the Core on their own, Race to the Top was precisely why so many “quickly adopted the higher standards.” Indeed, many governors and state school chiefs promised to adopt the Core before it was even finished. Why? They had to for Race to the Top! And let’s not pretend federal coercion wasn’t intended all along: In 2008 the Core-creating Council of Chief State School Officers and National Governors Association published a report calling for just such federal pressure.
The coercion didn’t stop with the Race, though. If states wanted waivers from the despised No Child Left Behind Act, their only choices were to take the Core or to have their largest state college system declare their standards “college- and career-ready.” Oh, and the Feds selected consortia to write national tests to go with the Core.
These aren’t minor details. They are absolutely central facts about what happened, and failing to even mention them screams “dodge, dodge, dodge!” Making matters worse, Core defenders seem to revel in calling their adversaries “misinformed,” or purveyors of “myths,” while they pretend basic reality never happened.
The problems go on. Gates, for instance, suggests we need national standards because “Americans move more than 10 times over the course of a lifetime.” But Gates’ source indicates the average American younger than 18 will only move 2.6 times, and deeper mobility data show the large majority of moves are in-state. Since all states have their own standards, even among movers very few lose standards “consistency.”
Then there is the question of whether the Core is “standards” or “curriculum.” Both pieces insist it is the former, as if the whole idea of the Core weren’t to direct what schools teach – curricula. But suppose one were to ignore that. Is it true, as Gates writes, that the Core is just “a blueprint of what students need to know, but they have nothing to say about how teachers teach that information”?
While the degree of specificity varies between the language arts and math standards, at least in math the Core doesn’t just say what students should be able to do. It prescribes how. Look at the 3rd grade standards, which don’t just say students should be able to multiply and divide, but that they should do so using “arrays, and area models.” Moreover, it is ultimately what gets tested that gets taught, and the federally funded Core tests are coming next school year.
Strangely, Markell and Perdue escalate the curriculum debate to a whole new level, insisting not just that the Core leaves local districts in charge of curriculum, but crediting it with encouraging innovative lessons. Among their anecdotes: Elementary schools in Delaware are teaching physics by having kids build “toy sail cars.” The only problem? Physics for elementary kids isn’t even in the Core!
Core supporters are waking up to the fact that their project is in trouble. Unfortunately, they seem happy to let crucial facts, and civil debate, continue to slumber.
From the 1939 case of United States v. Miller until 2008’s District of Columbia v. Heller, the Supreme Court left unclear what right the Second Amendment protects. For nearly 70 years, the lower courts were forced to make do with Miller’s vague guidance, which in many jurisdictions resulted in a cramped and limited right to keep and bear arms, erroneously restricted to militia service. While Heller did eventually clarify that the Second Amendment secures an individual right to keep and bear arms for self-defense, the ruling left many questions about the scope of that right unanswered (and 2010’s McDonald v. City of Chicago merely extended the right to people living in the states, without further defining it).
Since then, several courts have made clear that they plan to take only as much from Heller as they explicitly have to. One of these is the U.S. Court of Appeals for the Third Circuit, which last year in Drake v. Filko upheld New Jersey’s “may-issue” handgun law, which says that an individual may be granted a carry license—read: may be permitted to exercise her Second Amendment rights—only if she proves an urgent need to do so to the satisfaction of a law enforcement officer. In order to show this need, one must prove, with documentation, that there are specific, immediate threats to one’s safety that cannot be avoided in any way other than through possession of a handgun. If an individual can actually persuade the local official—who has total discretion to accept or deny the claim—then she gets a license for two years, at which time the gun owner must repeat the entire discretionary process (proving an imminent threat, etc.) to renew the permit.
The effect of this regulatory regime is that virtually nobody in New Jersey can use a handgun to defend themselves outside their home. The state law inverts how fundamental rights are supposed to work—that the government must justify restrictions, not the right-holder the exercise—and the Third Circuit saw no problem with that. The court applied a deferential review far from the heightened scrutiny normally due an individual right enshrined in the Bill of Rights. It also assumed the legislature’s good faith without requiring the state to show any evidence that a restrictive-carry regime lowers the rate of gun crime, and excused what constitutional infringements the law may cause because legislators were acting before Heller clarified that the Second Amendment protected an individual right.
The Third Circuit’s opinion makes clear that it, like some other lower courts, is “willfully confused” about the scope of the right to keep and bear arms as recognized in Heller and the proper judicial methodology to apply when evaluating Second Amendment cases. We think it’s time that circuit and state courts got some guidance from the Supreme Court, so we filed a brief, joined by the Madison Society Foundation, supporting the challengers’ petition for review.
This is an excellent case for the Court to take up to begin clarifying many of the unanswered questions involving the Second Amendment—such as to what extent it extends beyond the home and whether it can be conditioned on a showing of need. The Court has been hesitant to flesh out the contours of the Second Amendment. This hesitance has caused errant rulings that leave the right to bear arms hollow. Unless the Court intends the Second Amendment to lapse back into the second-class status it had before Heller, it needs to set the wayward courts straight.
New Jersey will now have an opportunity to respond to the cert petition, and then the Supreme Court is expected to decide later this winter whether to take Drake v. Jerejian and hear it in the fall.
This blogpost was co-authored by Cato legal associate Julio Colomba.
Andrew J. Coulson
Advocates and critics of universal government Pre-K seem to strongly disagree about what the research shows. Upjohn Institute economist and government Pre-K advocate Tim Bartik, for instance, claims to have a very different view of that research from Russ Whitehurst, an early education expert at the Brookings Institution who is critical of the case for universal government Pre-K.
At least some of that disagreement is illusory, because Bartik and Whitehurst are asking different questions. Bartik seeks to prove that at least a few high-quality early education programs have shown lasting success. Whitehurst wants to know about the long term effects of large scale Pre-K programs, particularly government programs.
Bartik is right that there are two early education programs in particular, High Scope/Perry and Abecedarian, that showed substantial long term benefits. But these were tiny programs operated by the people who had designed them and serving only a few dozen or a few score children. Since it is difficult to massively replicate any service without compromising its quality, the results of these programs cannot be confidently generalized to large scale government Pre-K programs.
In other words, Bartik is providing evidence that is largely irrelevant to the merits of universal government Pre-K, the policy he seems to be championing. Whitehurst and others focus on the results of large scale federal and state programs, because these are relevant to the present policy debate.
So far, there have been four randomized controlled trials of large-scale government Pre-K programs. The first two examined the same group of Head Start students, one observing them at the end of first grade and the other at the end of third grade. Both studies show the initial gains made during the Head Start program essentially vanishing by the early elementary grades. The next examined Early Head Start and found much the same thing. The fourth looked at Tennessee’s Pre-K program and found it to have a statistically significant negative effect (and the other, statistically insignificant point estimates were mostly negative as well).
When Bartik deals with the evidence on Head Start he juxtaposes the negative results of the gold standard randomized controlled experiments with several non-experimental studies of the same program. But Bartik selectively neglects to discuss the many other non-experimental studies of large scale government Pre-K programs that haven’t shown lasting benefits. Indeed when the federal government reviewed a generation of non-experimental research in the 1980s, its own meta-analysis concluded that the consensus showed Head Start’s effects fading out during the K-12 years.
To sum up, there is at best no favorable consensus among non-experimental studies of large scale government Pre-K programs, but there is a consensus among the more reliable experimental studies: program effects fade out by the elementary school years, sometimes by the end of kindergarten. That is the evidence that matters when discussing proposals for expanding government Pre-K.
Daniel J. Mitchell
I feel a bit like Goldilocks.
Think about when you were a kid and your parents told you the story of Goldilocks and the Three Bears.
In the story, Goldilocks found a bed that was too hard, and then another that was too soft, before finding one that was just right.
Well, the reason I feel like Goldilocks is because I’ve shared some “Rahn Curve” research suggesting that growth is maximized when total government spending consumes no more than 20 percent of gross domestic product. I think this sounds reasonable, but Canadians apparently have a different perspective.
Back in 2010, a Canadian libertarian put together a video that explicitly argues that I want a government that is too big.
Now we have another video from Canada. It was put together by the Fraser Institute, and it suggests that the public sector should consume 30 percent of GDP, which means that I want a government that is too small.

Measuring the Size of Government in the 21st Century by Livio Di Matteo
My knee-jerk reaction is to be critical of the Fraser video. After all, there are examples - both current and historical - of nations that prosper with much lower burdens of government spending.
Singapore and Hong Kong, for instance, have public sectors today that consume less than 20 percent of economic output. Would those fast-growing jurisdictions be more prosperous if the burden of government spending was increased by more than 50 percent?
Or look at Canadian history. As recently as 1920, government outlays were 16.7 percent of economic output. Would Canada have grown faster if lawmakers at the time had almost doubled the size of government?
And what about nations such as the United States, Germany, France, Japan, Sweden, and the United Kingdom, all of which had government budgets in 1870 that consumed only about 10 percent of GDP? Would those nations have been better off if the burden of government spending had been tripled?
I think the answer to all three questions is no. So why, then, did the Fraser Institute conclude that government should be bigger?
There are three very reasonable responses to that question. First, the 30 percent number is actually a measurement of where you theoretically maximize “social progress” or “societal outcomes.” If you peruse the excellent study that accompanies the video, you’ll find that economic growth is most rapid when government consumes 26 percent of GDP.
Second, the Fraser research - practically speaking - is arguing for smaller government, at least when looking at the current size of the public sector in Canada, the United States, and Western Europe. According to International Monetary Fund data, government spending consumes 41 percent of GDP in Canada, 39 percent of GDP in the United States, and 55 percent of GDP in France.
The Fraser Institute research even suggests that there should be significantly less government spending in both Switzerland and Australia, where outlays total “only” 34 percent of GDP.
Third, you’ll see if you read the underlying study that the author is simply following the data. But he also acknowledges “a limitation of the data,” which is that the numbers needed for his statistical analysis are only available for OECD nations, and only beginning in 1960.
This is a very reasonable point, and one that I also acknowledged when writing about some research on this topic from Finland’s Central Bank.
…those numbers…are the result of data constraints. Researchers looking at the post-World War II data generally find that Hong Kong and Singapore have the maximum growth rates, and the public sector in those jurisdictions consumes about 20 percent of economic output. Nations with medium-sized governments, such as Australia and the United States, tend to grow a bit slower. And the bloated welfare states of Europe suffer from stagnation. So it’s understandable that academics would conclude that growth is at its maximum point when government grabs 20 percent of GDP. But what would the research tell us if there were governments in the data set that consumed 15 percent of economic output? Or 10 percent, or even 5 percent? Such nations don’t exist today.
For what it’s worth, I assume the author of the Fraser study, given the specifications of his model, didn’t have the necessary post-1960 data to include small-state, high-growth, non-OECD jurisdictions such as Hong Kong and Singapore. If that data had been available, I suspect he also would have concluded that government should be closer to 20 percent of economic output.
I explore all these issues in my video on this topic.

The Rahn Curve and the Growth-Maximizing Level of Government
The moral of the story is that government is far too large in every developed nation.
I suspect even Hong Kong and Singapore have public sectors that are too large, causing too many resources to be diverted from the private sector.
But since I’m a practical and moderate guy, I’d be happy if the burden of government spending in the United States was merely reduced back down to 20 percent of economic output.
P.S. Though I would want the majority of that spending at the state and local level.
P.P.S. Since I’m sharing videos today, here’s an amusing video from American Commitment about the joy of being “liberated” from employment.

Trapped
Some people say innovation is dead in America, but NASA is always looking for innovative ways to extract more money from the taxpayers. The Wall Street Journal reports on some of their innovations in using our tax dollars to persuade us to give them even more of those tax dollars:
In William Forstchen’s new science fiction novel, “Pillar to the Sky,” there are no evil cyborgs, alien invasions or time travel calamities. The threat to humanity is far more pedestrian: tightfisted bureaucrats who have slashed NASA’s budget.
The novel is the first in a new series of “NASA-Inspired Works of Fiction,” which grew out of a collaboration between the National Aeronautics and Space Administration and science fiction publisher Tor. The partnership pairs up novelists with NASA scientists and engineers, who help writers develop scientifically plausible story lines and spot-check manuscripts for technical errors.
The plot of Mr. Forstchen’s novel hinges on a multibillion-dollar effort to build a 23,000-mile-high space elevator—a quest threatened by budget cuts and stingy congressmen….
It isn’t the first time NASA has ventured into pop culture. NASA has commissioned art work celebrating its accomplishments from luminaries like Norman Rockwell and Andy Warhol. …
Some see NASA’s involvement in movies, music and books as an attempt to subtly shape public opinion about its programs.
“Getting a message across embedded in a narrative rather than as an overt ad or press release is a subtle way of trying to influence people’s minds,” says Charles Seife, author of “Decoding the Universe,” who has written about NASA’s efforts to rebrand itself. “It makes me worry about propaganda.”
Lobbying with taxpayers’ money isn’t new. But as Thomas Jefferson wrote in the Virginia Statute of Religious Liberty: “To compel a man to furnish contributions of money for the propagation of opinions which he disbelieves is sinful and tyrannical.” To compel him to furnish contributions of money to petition his elected officials to demand more contributions from him just adds insult to injury.
Bryan Caplan has an interesting post on the recent Swiss referendum to restrict immigration from the European Union. Tyler Cowen also blogged on the same issue twice. Caplan’s point is that the Swiss imposed restrictions because there was insufficient immigration rather than too much. Areas of Switzerland that had fewer immigrants voted to restrict immigration while areas with many immigrants voted to keep the doors open.
A similar theory could explain why immigration quotas were first imposed in the United States after World War I. That war substantially reduced immigration from Europe. From 1904 through 1914, almost 1 million immigrants arrived annually in the United States – a total of 10.9 million. This large population, combined with their children, opposed numerous legislative efforts to restrict immigration from Europe.

Year    1st Gen %   2nd Gen %   1st+2nd Gen %
1870    14.4        14.0        28.4
1880    13.3        18.3        31.6
1890*   14.8        ?           ?
1900    13.7        20.9        34.6
1910    14.8        21.0        35.8
1920    13.4        21.9        35.3
1930    11.8        21.4        33.2
1940    11.8        18.2        30.0
1950    9.6         16.6        26.2
1960    6.0         13.7        19.7
1970    5.9         11.8        17.7
1980*   6.2         ?           ?
1990^   8.7         8.8         17.5
2000    12.2        10.3        22.5
2010    13.7        11.3        25.0

*Data unavailable   ^1990 = 1993
Source: IPUMS
World War I erupted in August 1914, slowing immigration and causing the first-generation share of the population to decline by more than the second-generation share increased. During the four years of the war, slightly more than one million immigrants arrived. That minor decline, especially in the 1st generation, might be part of the reason why anti-immigration politicians succeeded in passing the first immigration quotas in 1921. During that time, many non-citizens could vote, and it was much easier to naturalize than it is today.
The post-war U.S. recession, the continuing blockade of Germany, and chaos in Europe prevented immigration from rebounding until 1921, when 805,228 people immigrated – the same year that numerical quotas restricted immigration for the first time. Had the pre-war pace of immigration continued uninterrupted by World War I, 4.6 million additional immigrants would have landed in America by that time – boosting the immigrant share of the population to somewhat less than 17.7 percent of the total, and the second generation by a smaller amount too. Combined, the first and second generations would have been equal to around 40 percent of the American population. Supporters of immigration restrictions might have understood this and known that immigration from Europe was about to rapidly accelerate, meaning that they had only a narrow window to approve restrictions before the changing nativity of the population made that more politically difficult.
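The counterfactual share can be checked with quick arithmetic. Here is a minimal sketch: the 4.6 million missing immigrants and the 13.4 percent foreign-born figure come from the post and the table above, while the 1920 census population total of roughly 106 million is my own assumption.

```python
# Back-of-the-envelope check of the counterfactual immigrant share.
# Assumption (mine, not the post's): 1920 census population ~106 million.
pop_1920 = 106_000_000           # approximate 1920 total U.S. population
foreign_born = 0.134 * pop_1920  # 13.4 percent foreign born in 1920
missing = 4_600_000              # immigrants "lost" to WWI, per the post

# Naive share: add the missing immigrants but hold population fixed.
naive_share = (foreign_born + missing) / pop_1920

# Adjusted share: the missing immigrants also grow the denominator,
# which is why the post says "somewhat less than 17.7 percent."
adjusted_share = (foreign_born + missing) / (pop_1920 + missing)

print(f"naive: {naive_share:.1%}, adjusted: {adjusted_share:.1%}")
# prints: naive: 17.7%, adjusted: 17.0%
```

The gap between the two figures is small, which is consistent with the post hedging the 17.7 percent estimate rather than stating it outright.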
Had there been that many more immigrants, several factors would have made the 1921 vote to restrict immigration more difficult to achieve.
First, 66.9 percent of the House of Representatives voted for the bill. President Harding supported the law, but a previous attempt at a similar restriction had been vetoed, making the two-thirds threshold important. With 4.6 million more immigrants, it would have been more difficult to clear that threshold.
Second, redistricting in 1920 gerrymandered Congressional districts to reduce the political power of immigrants – which was aided by the slight decrease in the percentage of foreign born. Representatives who voted against the bill came from states that had, on average, 20.4 percent of their population as immigrants according to the 1920 census. Representatives who voted for the restriction came from states that had, on average, 10.7 percent of their population as foreign born. The 104 Representatives who did not vote came from states that had, on average, 15.2 percent of their populations who were foreign born.
Looking at Massachusetts offers a puzzle, though. In 1917, Congress voted on an immigration restrictionist bill called the Literacy Act. That year, only four of Massachusetts’ 15 Congressmen voted “yea” on the Literacy Act, with 11 voting “nay.” For the 1921 Act, however, Massachusetts had only 13 seats. Of those 13, five voted “yea,” three voted “nay,” and five did not vote. Between 1910 and 1920, the immigrant population of the state increased by 2.5 percent, and yet the state voted for more immigration restrictions. Gerrymandering could explain this shift, although I do not have the data to show that, or something else might have changed.
Other factors contributed to the end of the first era of immigration in the United States. Southern politicians opposed immigration because it gave more electoral weight to the Northeast. Labor unions opposed it, and some business interests began to turn against it for fear that immigrants would bring socialism with them. Growing state-based welfare programs, a wave of terrorism by some Italian immigrants, the eugenics movement, and numerous other factors also likely contributed to ending native support for immigration.
If this hypothesis is true, U.S. voters will support a more liberalized immigration policy as the percentage of the population that is foreign born and the second generation continues to increase.
Different states have also produced more strict and more lenient immigration-related laws over the years despite large differences in the immigrant percentages across states. Here are partially complete lists of state laws:
Pro Immigration Laws
Sources: American Community Survey, U.S. Census, Immigration Policy Center, and dreamact.org.
Anti Immigration Laws
Anti-DL for DACA
Sources: American Community Survey, U.S. Census, and Immigration Policy Center.
These lists do not include the states that didn’t pass laws or tried to pass them but failed, but they provide a starting point to analyze. States that pass pro-immigration laws typically have more immigrants as a percentage of their populations. Interestingly, the difference is not huge, although there is a great deal of variance. The average for anti-immigration states is 10.7 percent – pretty high. Compared to the Swiss example, anti-immigration American states have some immigrants – just enough to make some natives dislike them – and certainly aren’t devoid of them.
California and Arizona are odd cases because so much of their population was foreign born when they created their anti-immigration laws; but there is a good argument for excluding them from the anti-immigration list because they were the trend setters. California pioneered the anti-immigration state law with Proposition 187 in 1994 (it was certainly viewed that way by the public). There was a higher fixed cost for Californians to develop the concept of an anti-immigration state law, so it likely took more public outrage to spur that bit of legislative innovation. The case is similar for Arizona and its anti-immigration laws in 2008 and 2010. After Californians and Arizonans developed the framework for opposing immigration at the state level, the marginal cost of copying anti-immigration laws was smaller, making it easier for other states to adopt them – which is exactly what happened. Arizona also developed the anti-driver’s-license idea for DACA recipients.
Excluding California and Arizona from that already short list lowers the average immigrant population share of anti-immigrant states to 7.4 percent – almost half that of the pro-immigration states – and the standard deviation to just 2.9 percent. Just eyeballing it, there might be a Kuznets curve for immigration restriction, with the percent of a state’s population that is immigrant on the X-axis and support for immigration restrictions on the Y-axis. A state needs some immigrants to pass anti-immigrant laws, but after the immigrant population grows past a certain point, pro-immigration laws are passed instead. Getting to the far side of that curve makes further immigration restrictions very difficult, if not impossible.
This brings us back to Switzerland.
Although 27.3 percent of Switzerland’s population is foreign born, far fewer than that can vote. Switzerland also doesn’t have birthright citizenship, so the number of second-generation Swiss who could vote against restrictions was likely small. This could explain why, despite having a relatively high percentage of its population foreign born, Switzerland was able to pass anti-immigration laws the way American states with low immigrant populations did.
This is not enough data to support my theory in the United States, but it is a starting point. To avoid some of the worst anti-immigration laws, it seems that the immigrant population only has to literally outgrow them.
Canada released a new federal budget yesterday. The ruling Conservatives are centrists and far too supportive of the welfare state. Nonetheless, the government is expected to balance the budget next year while steadily reducing spending and debt as a share of GDP.
The contrast with the huge and unreformed federal budget in Washington is stark.
In Canada, federal spending fell to just 15.1 percent of GDP in 2013 and the government projects that the ratio will decline steadily to 14.0 percent by 2019 (p. 268). Federal debt as a share of GDP fell to just 33 percent this year.
In the United States, federal spending was 20.8 percent of GDP in 2013, and the CBO projects that the ratio will gradually rise to 21.4 percent by 2019. Federal debt held by the public as a share of GDP is 74 percent this year—more than twice the Canadian level.
On federal fiscal policy, Canada has had pragmatic centrist leadership for the last two decades, with voters keeping the loony left out of power. In the United States, we’ve had power divided between centrist Republicans and loony left Democrats in recent years.
Actually, the federal leadership of both U.S. parties is loony. The debt crisis in Europe illustrated that endlessly running large deficits when government debt is already high is dangerous. It is playing with fire. Yet congressional majorities have recently signed off on a big-spending appropriations deal, a big-spending farm bill, and a debt limit bill that does nothing to combat runaway red ink.
Pundits often claim that the Republicans are controlled by radical Tea Party elements. I wish that were true, but in terms of policy results there is no evidence of it. Republican and Democratic leaders are apparently satisfied with federal spending, deficits, and debt far larger than acceptable to the centrists in Canada.
The chart shows the remarkable gap in federal spending between the two countries in recent years.
Andrew M. Grossman
Faulting the IRS for attempting to “unilaterally expand its authority,” the D.C. Circuit today affirmed a district court decision tossing out the agency’s tax-preparer licensing program. Under the program, all paid tax-return preparers, hitherto unregulated, were required to pass a certification exam, pay annual fees to the agency, and complete 15 hours of continuing education each year.
The program, of course, had been backed by the major national tax-return preparers, chiefly as a way of driving up compliance costs for smaller rivals and pushing home-based “kitchen table” preparers out of business. Dan Alban of the Institute for Justice, lead counsel to the tax preparers challenging the program, called the decision “a major victory for tax preparers—and taxpayers—nationwide.”
The licensing program was not only a classic example of corporate cronyism, but also of agency overreach. IRS relied on an 1884 statute empowering it to “regulate the practice of representatives or persons before [it].” Prior to 2011, IRS had never claimed that the statute gave it authority to regulate preparers. Indeed, in 2005, an IRS official testified that preparers fell outside of the law’s reach.
But IRS reversed course in 2011. The problem, Judge Kavanaugh’s opinion for the court explains, is not that the agency changed its mind but that its action had no basis in the text of the statute. Preparers are not “representatives” because they have no authority to act on behalf of the taxpayer, who is still responsible for signing his or her own return. Preparers also aren’t engaged in “practice…before” IRS because they do not present any sort of case to the agency, such as in an investigation or hearing. And finally, the court observed that IRS’s broad view of the statute would render superfluous other statutes that do allow the agency to impose penalties on preparers for certain conduct.
A victory for liberty in itself, the decision may have broader legal import, in two respects. First, it embraces the concept that, while Congress may delegate broad authority to agencies, “courts should not lightly presume congressional intent to implicitly delegate decisions of major economic or political significance to agencies.” This principle, applied most forcefully in the Supreme Court’s 2000 Brown and Williamson decision, is one that the D.C. Circuit has lately declined to apply in big-ticket challenges to agency action, such as EPA’s greenhouse gas regulatory scheme. It may well come in handy as the Obama Administration carries out an aggressive second-term agenda through executive action, often at odds with its statutory authority.
The second value of the decision is in illustrating the duty of courts to take statutory text seriously even while deferring to agencies on their policy decisions. This was the issue that confronted the Supreme Court last term in City of Arlington v. FCC—which I wrote about here—and Justice Scalia’s majority opinion drew substantial criticism for its holding that agencies’ interpretations regarding the scope of their own jurisdiction are due the same deference as any other interpretation. But Scalia’s point was not that agencies are free to do as they please, with no real judicial check, only that courts should not place a thumb on the scale one way or the other concerning statutory authority. Courts’ heavy lifting, Justice Scalia explained, is statutory interpretation (for legal geeks, Chevron step one), leaving the policy questions to the political branches.
Judge Kavanaugh’s opinion does just that, and should be a model to courts (particularly his own) in how to balance respect for the other branches with the rule of law.
Volume 15 of the Collected Works of F. A. Hayek has just been published by the University of Chicago Press. This volume, edited by series editor and Hayek biographer Bruce Caldwell, is The Market and Other Orders. It contains many of Hayek’s most important papers:
- The Use of Knowledge in Society
- The Meaning of Competition
- The Results of Human Action but Not of Human Design
- Competition as a Discovery Procedure
- The Pretence of Knowledge, his Nobel Prize lecture
- and The Political Ideal of the Rule of Law, lectures delivered in Egypt in 1954-55 that served as early drafts of chapters 11, 12, 13, 14, and 16 of The Constitution of Liberty
That’s only the beginning in this impressive volume, which should be of interest to any Hayek scholar, and indeed any student of economics or complex social orders.
Lawrence Summers, former secretary of the Treasury and president of Harvard, said in an interview for The Commanding Heights, Daniel Yergin and Joseph Stanislaw’s 1998 study of the resurgence of economic liberalism,
What’s the single most important thing to learn from an economics course today? What I tried to leave my students with is the view that the invisible hand is more powerful than the hidden hand. Things will happen in well-organized efforts without direction, controls, plans. That’s the consensus among economists. That’s the Hayek legacy.
This volume is a great introduction to those key ideas.
Daniel J. Ikenson
Media have been reporting lately about the public’s burgeoning opposition to Congress granting President Obama fast track trade negotiating authority. Among the evidence of this alleged opposition is a frequently cited survey, which finds that 62 percent of Americans oppose granting fast track to President Obama. Considering that the survey producing that figure was commissioned by a triumvirate of anti-trade activist groups – the Communication Workers of America, the Sierra Club, and the U.S. Business and Industry Council – I had my doubts about the accuracy of that claim. After all, would lobbyists who devote so much of their efforts to derailing the trade agenda risk funding a survey that might produce results contrary to their objectives?
My skepticism – it turns out – was warranted. The 62 percent who allegedly “oppose giving the president fast-track authority for TPP [the Trans-Pacific Partnership agreement]” actually oppose a description of fast track that is woefully inaccurate. The graphic below shows the question and response tally, as presented in the report showing the survey’s results, which is here. Read the question that begins with “As you may know…”
Convinced? Are you with the 62 percent? I would be, if fast track were really as the question implies. But the question includes an incomplete and misleading description of fast track. The question is being asked, presumably, of a random sample of Americans, which means that the average respondent has no idea about the purpose of fast track, and knows even less about its language and details. Thus, the phrasing of the question is highly determinative of the answer.
The thrust of fast track as implied by the question above is that Congress has no role in the process whatsoever and sits by passively while the president negotiates deals to his liking, submits them to Congress, and says take it or leave it. Most thinking people who cherish our republican form of government should and would oppose legislation that sanctions the abdication of responsibility from one branch of government to another.
But that’s not what fast track does. Under the Constitution, Congress is authorized to regulate foreign commerce and the Executive is authorized to make treaties. Negotiating, finalizing, and ratifying trade treaties involve both sets of authorities. The survey question, however, leaves out entirely the balancing term of this traditional sharing of authority. It is true that under the terms of fast track, Congress agrees to a timely, up-or-down vote without amendments. But that happens only after the Congress has conveyed its negotiating objectives and parameters to the president. Under the recently introduced legislation to restore fast track, Congress is demanding that 147 negotiating objectives be met before allowing fast track consideration of any trade deals that come before it. If those objectives are not met, consideration of the trade deal is taken off the fast track and subject to normal procedures.
By presenting a severely misleading (and more menacing) definition of fast track to its survey respondents, and then representing and publicizing the results as the attitudes of Americans toward fast track, the survey designers (Hart Research Associates and Chesapeake Beach Consulting) and sponsors have done the public a major disservice. As a result, we are further from an informed debate than we were before the survey was conducted.
K. William Watson
There’s plenty of criticism flying around about the new farm bill. It spends unprecedented amounts of money to prop up one of the most successful industries in the country. It uses Soviet-style central planning to maintain food prices and make rich farmers richer. Its commodity programs distort trade in violation of global trade rules.
But this year’s farm bill had the potential to mitigate some of these sins by repealing a number of high-profile protectionist regulations. Despite a few close calls, however, the final version of the bill kept these programs in place, exposing the United States to possible retaliation.
One of those programs is the mandatory country-of-origin labeling (COOL) law. This requirement was first imposed by the 2002 farm bill. Ostensibly designed to increase consumer awareness, the true impact of the program is to push foreign-born cattle out of the market. The law requires meat packers to keep track of, and process separately, cattle that were born and/or raised for some time in Canada. The added expense benefits a portion of U.S. cattle ranchers at the expense of the meat industry as a whole.
The negative impact on the Canadian and Mexican cattle industries was enough to prompt a complaint at the WTO. After the United States lost that case, the administration amended the regulation. But the new regulation, rather than bringing the United States into compliance, actually makes the law even more protectionist. Canada has made clear its intention to impose barriers on a wide range of U.S. products in retaliation.
Repealing this disastrous regulation through the farm bill was discussed during numerous stages of the legislative process, but no language on COOL was ever added to the bill.
Another program that could have been fixed by the farm bill was a bizarrely redundant and wholly unnecessary catfish inspection regime. The new system would cost an estimated $14 million per year to administer and (by the USDA’s own admission) do nothing to improve the safety of catfish. However, the new institutional requirements imposed on catfish farmers to comply with the regime would all but eliminate Vietnamese competitors from the market. The U.S. catfish industry and its allies in Congress are all for it.
Even though both houses of Congress had at one point or another passed bills that repealed the new catfish regime, the final bill that came out of conference kept the redundant system in place.
The inspection issue has complicated negotiation of the Trans-Pacific Partnership, of which Vietnam will be a member, and could become the basis of a complaint at the World Trade Organization.
In the words of Sen. Mike Lee, the farm bill is “a monument to Washington dysfunction, and an insult to taxpayers, consumers, and citizens.” It is also the most popular vehicle for imposing protectionist regulations that serve a small set of businesses at the expense of the national economy.
There was hope that this bill could roll back some of the damage done in the past, at least for a handful of odious regulations. That hope was sorely misplaced.
John McGinnis has some kind words for work I oversee here at Cato in a recent blog post of his entitled: “The Internet–A Technology for Encompassing Interests and Liberty.”
As he points out, the information environment helps determine outcomes in political systems because it controls who is in a position to exercise power.
The history of liberty has been in no small measure the struggle between diffuse and encompassing interests, on the one hand, and special interests, on the other. Through their concentrated power, special interests seek to use the state to their benefit, while diffuse interests concern the ordinary citizen or taxpayer, or in William Graham Sumner’s arresting phrase, The Forgotten Man. When the printing press was invented, the most important special interests were primarily the rulers themselves and the aristocrats who supported them. The printing press allowed the middle class to discover and organize around their common interests to sustain a democratic system that limited the exactions of the oligarchs.
But the struggle between diffuse and special interests does not disappear with the rise of democracy. Trade associations, farmers’ associations and unions have leverage with politicians to obtain benefits that the rest of us pay for. As a successor to the printing press, however, the internet advances liberty by continuing to reduce the cost of acquiring information. Such advances help diffuse groups more than special interests.
The Internet is the new printing press, and we’re generating data here at Cato that should allow it to have its natural, salutary effects for liberty.
My favorite current example is the “Appropriate Appropriations?” page published by the Washington Examiner. It allows you to easily see which representatives have introduced bills proposing to spend taxpayer money, information that—believe it or not—was hard to come by until now.
In John McGinnis, we have a legal scholar who recognizes the potential ramifications for governance of our entry into the information age. Read his whole post and, for more in this area, his book, Accelerating Democracy: Transforming Governance Through Technology.
Last week, the Supreme Court of Michigan rejected a legal challenge to the Michigan Medical Marihuana Act (MMMA). Although limited to the state of Michigan, this precedent helps to build momentum for other states to move in the direction of marijuana legalization.
By way of background, in 2008 Michigan voters approved a state initiative allowing medical marijuana for certain qualifying patients. In 2010, the City of Wyoming enacted an ordinance that essentially prohibited marijuana (no medical exceptions). John Ter Beek, a resident of the City of Wyoming, claimed that he was a qualified patient under the state law and argued that the state law preempted the city ordinance. Lawyers for the City of Wyoming responded that the state law was itself invalid because it violated the supremacy clause of the federal Constitution. That is, since federal law (the Controlled Substances Act (CSA)) prohibits the possession of marijuana, no state can change its law to allow marijuana sales, or even possession.
The Supreme Court of Michigan unanimously sided with John Ter Beek. Writing for the court, Justice McCormack said, “[The MMMA] provides that, under state law, certain individuals may engage in certain medical marijuana use without risk of penalty…while such use is prohibited under federal law, [MMMA] does not deny the federal government the ability to enforce that prohibition, nor does it purport to require, authorize, or excuse its violation.” Thus, there is no violation of the federal supremacy doctrine.
Recall that after Colorado and Washington approved initiatives to legalize marijuana, some former DEA administrators argued that those initiatives were invalid under the federal supremacy clause. (One even said it was a ‘no-brainer.’) The Obama administration declined to bring such a challenge, and we will be hearing that argument less and less as these precedents pile up.
Michael F. Cannon
Over at DarwinsFool.com, I summarize a lengthy report issued by two congressional committees on how the Treasury Department, the Internal Revenue Service, and the Department of Health and Human Services conspired to create a new entitlement program that is authorized nowhere in federal law. Here’s an excerpt in which I summarize the summary:
Here is what seven key Treasury and IRS officials told investigators.
In early 2011, Treasury and IRS officials realized they had a problem. They unanimously believed Congress had intended to authorize certain taxes and subsidies in all states, whether or not a state opted to establish a health insurance “exchange” under the Patient Protection and Affordable Care Act. At the same time, agency officials recognized: (1) the PPACA plainly does not allow those taxes and subsidies in non-establishing states; (2) the law’s legislative history offers no support for their theory that Congress intended to allow them in non-establishing states; and (3) Congress had not given the agencies authority to treat non-establishing states the same as establishing states.
Nevertheless, agency officials agreed, again with apparent unanimity, to impose those taxes and dispense those subsidies in states with federal Exchanges, the undisputed plain meaning of the PPACA notwithstanding. Treasury, IRS, and HHS officials simply rewrote the law to create a new, unauthorized entitlement program whose cost “may exceed $500 billion dollars over 10 years.” (My own estimate puts the 10-year cost closer to $700 billion.)
The full post details some pretty stunning examples of how agency officials were derelict in their duty to execute faithfully the laws Congress enacts.
Manhattan U.S. attorney Preet Bharara claimed another victory in his crusade against “insider trading,” a practice he once called “pervasive.” Last week he won a conviction against Mathew Martoma, formerly at SAC Capital.
Another big scalp was hedge fund billionaire Raj Rajaratnam, convicted in 2011 and sentenced to 11 years in prison. A decade ago Martha Stewart was convicted of obstruction of justice in an insider trading case.
Objectively, the insider trading ban makes no sense. It creates an arcane distinction between “non-public” and “public” information. It presumes that investors should possess equal information and never know more than anyone else.
It punishes traders for seeking to gain information known to some people but not to everyone. It inhibits people from acting on and markets from reacting to the latest information.
Martoma was alleged to have gotten advance notice of the test results for an experimental drug. Martoma then was accused of recommending that SAC dump its stock in the firms that were developing the pharmaceutical.
If true, SAC gained an advantage over other shareholders. But why should that be illegal? The doctor who talked deserved to be punished for his disclosure. However, Martoma’s actions hurt no one.
SAC avoided losses suffered by other shareholders, but they would have lost nonetheless. Even the buyers of SAC’s shares had no complaint: They wanted to purchase based on the information available to them and would have bought the shares from someone else had SAC not sold.
Of course, some forms of insider trading are properly criminalized—typically when accompanied by other illegal actions. For instance, fraudulently misrepresenting information to buyers/sellers. However, because of the usual anonymity of stock market participants, in most cases it would be impossible to offer fraudulent assurances even if one wanted to.
The government has regularly expanded the legal definition of insider trading. For instance, in 1985 the government indicted a Wall Street Journal reporter for leaking his “Heard on the Street” columns to a stockbroker before publication.
Doing so might have violated newspaper policy, but that was a problem for the Journal, not the U.S. attorney. The information was gathered legally; the journalist had no fiduciary responsibility concerning the material; there was nothing proprietary about the scheduled columns.
Other cases also have expanded Uncle Sam’s reach. Information is currency on Wall Street and is widely and constantly traded. Punishing previously legitimate behavior after the fact unfairly penalizes individual defendants and disrupts national markets.
As applied, the insider trading laws push in only one direction: they punish action. It is virtually impossible to penalize someone for not acting, even if he or she did so in reliance on inside information. This government bias against action, whether buying or selling, is unlikely to improve investment decisions or market efficiency.
Indeed, it is impossible to equalize information. Does anyone believe that such markets ever will be a level playing field?
Wall Street professionals are immersed in the business and financial worlds. A part-time day trader knows more than the average person who invests haphazardly. Even equal information is not enough: it must be interpreted, and people vary widely in their experience and abilities, as well as in their access to those better able to interpret it.
A better objective for regulators would be to encourage markets to adjust swiftly to all the available information. Speeding the process most helps those with the least information, since they typically have the least ability to play the system.
Regulators speak of the need to protect investor confidence. But is there really any small investor who believes that imprisoning Martoma makes him or her equal on Wall Street? How many people put more money in their mutual fund because of the war on insider trading?
Enforcing insider trading laws does more to advance prosecutors’ careers than protect investors’ portfolios. Information will never be perfect or equal. However, adjustments to information can be more or less smooth and speedy. Washington should stop criminalizing actions which ultimately yield more benefits than costs to the rest of us.
The Tyranny of Good Intentions: How Politicians Waste Money, and Sometimes Kill People, With Kindness
If logic decided policy in Washington, federal spending would be low, the budget would be balanced, the benefits of regulations would exceed the costs, and policymakers would guard against unintended consequences. Unfortunately, the nation’s capital is largely impervious to logic, and the tragic results are obvious for all to see.
Emotion and intention seem to have become the principal determinants of government policy. People are poor. Increase the minimum wage. Not everyone can afford a home. Create a dozen housing subsidy programs.
Never mind the consequences as long as the officials involved mean well and their ideas sound good. No need to detain our leaders on white horses, who have other crusades to lead.
This widespread inability to compare consequences to intentions is a basic problem of humanity. In fact, it’s one of the reasons the Founders desired to limit government power and constrain politicians.
For instance, the newly created federal government possessed only limited, enumerated powers. Even if you had weird ideas for transforming the American people, it wouldn’t do you much good to get elected president or to Congress. The federal government wasn’t authorized by the Constitution to engage in soul-molding.
Moreover, there would be strong resistance to any attempt to expand federal power. The constitutional system preserved abundant state authority. Three federal branches offered “checks and balances” to abusive officials or majorities.
Most important, the majority of Americans shared the Founders’ suspicions. At the end of the 19th century a Democratic president still was willing to veto unemployment relief because he believed Congress had no authority to approve such a bill.
However, over the following century and more virtually every limitation on Washington was swept away. Equally important, as faith in religion ebbed faith in politics exploded. Today those who think with their hearts rather than their minds have largely taken control of the nation’s policy agenda.
Nowhere has this been more destructive than in the area of poverty. How to deal with the poor who, Christ told us, would always be with us?
As Charles Murray demonstrated so devastatingly three decades ago in his famous book, Losing Ground, ever expanding federal anti-poverty initiatives ended up turning poor people into permanent wards of Washington. Worse, unconditional welfare benefits turned out to discourage education, punish work, inhibit marriage, preclude family formation, and, ultimately, destroy community. It took the 1996 reforms to reverse much of the culture of dependency.
Similar is the minimum wage, which may become a top election issue this fall. Unless businesses are charities, raising the price of labor will force them to adjust their hiring. How many low-skilled workers will be hired if employers are told to pay more than the labor is worth? There isn’t much benefit in having a theoretical right to a higher paying job if you are not experienced or trained enough to perform it.
There are similar examples in the regulatory field. No one wants to take unsafe, ineffective medicines. So the Food and Drug Administration was tasked with assessing the safety and efficacy of new compounds before they can be released. The intention is good, but ignores the inescapable trade-off between certainty and speed.
The rise of AIDS brought the problem into stark relief, as people faced an ugly death while the bureaucratic, rules-bound FDA denied them the one effective medicine, AZT, in order to make sure it didn’t have harmful side-effects. Years before, the agency had held up approval of beta-blockers, killing people lest they suffer some lesser harm from taking the drug.
Few people in politics fail to claim to be acting for the public good. In many cases they really believe it. But good intentions are never enough. Consequences are critical. What you intend often doesn’t matter nearly as much as what you actually accomplish.
President Obama has been expressing inordinate alarm about differences between income groups, and about mobility between such groups over time. “The combined trends of increased inequality and decreasing mobility,” he says, “pose a fundamental threat to the American Dream, our way of life, and what we stand for.”
A fundamental limitation of annual income distribution figures is that income in any given year may not be at all typical of a family’s normal or lifetime income. Job loss or illness can push one year’s income well below normal, for example, and asset sales can produce one-time windfalls. People are commonly much poorer when young than they are by middle age, after accumulating experience and savings. For such reasons, the President’s strong opinions about “decreasing mobility” could be important, if true.
We need to separate two concepts of mobility. One is intergenerational mobility – whether “a child born into poverty … may never be able to escape that poverty,” as the President put it. Another involves intertemporal mobility – whether starting with a low wage at your first job supposedly impedes moving up the ladder of opportunity.
The President’s opinion that intergenerational mobility has declined was rigorously debunked by Raj Chetty, Emmanuel Saez and others. As for inequality and mobility being related, they also found that, “the top 1 percent share is uncorrelated with upward mobility [p. 40].” Moreover, “The fraction of children living in single-parent households is the strongest correlate of upward income mobility among all the variables we explored [p.45].” Since other countries have fewer single-parent households, this is just one reason for being wary of facile international comparisons.
Intertemporal mobility is not about links between parents and children, but about the ease with which individuals move from a lower to a higher income group, and vice-versa. Are we stuck with the same paycheck we had just after leaving school, or can we move up with effort, experience, learning and saving? Did having a big gain in the stock market in 2007 ensure that would happen again in 2008-2009?
The Federal Reserve Board’s Survey of Consumer Finances (SCF) tracks income mobility of the same families over time. It turns out that mobility is surprisingly hectic even over short periods.
Table A, adapted from the latest SCF report, shows changes in income by quintile (fifth) between 2007 and 2009. For example, only 45.1 percent of families with incomes in the middle fifth of the distribution in 2009 were also in the middle in 2007 (indicated by the bold font along the diagonal). Among the rest, slightly more moved up from a lower income group (28.9 percent) than slipped down from a higher group (25.9 percent).
With 50-55 percent of middle-income families changing places in just two years, there is obviously no shortage of “mobility” during recessions. This highlights one of two common fallacies in studies purporting to show that mobility was “higher” in some other time or place.
Studies about changes in mobility over the years often make no distinction between moving up and moving down. Is a quicker game of musical chairs really a “fairer” game? If so, deep recessions are the fairest years of all.
Periods with booms and busts such as 1969 to 1982 appear far more “mobile” in terms of movements between income groups than periods of stability and prosperity such as 1983 to 2000. As I wrote in Income and Wealth (p. 174), “The pace at which families moved between income quintiles over seven to ten years may tell us something about how volatile the economy was, but it provides no information about anyone’s ease or difficulty of earning a higher income.”
A second fallacy among mobility studies is to express concern that the pace at which families move between quintiles appears slower among top and bottom quintiles than it does in the middle. As the SCF report explains, however, “The movements of families across income groups in two years was more substantial for the three central percentile groups than for families with incomes in the two extreme groups, in part because families in one of the extreme groups could move in only one direction [emphasis added].”
The middle three quintiles are defined by both a floor and a ceiling. Movement out of the top or bottom income groups, by contrast, can be in only one direction. Anyone in a top income group in any particular year must have been in the same or a lower group in previous years, because there is no higher group to move down from. Anyone in a bottom income group must likewise have been in the same or a higher group in previous years, because there is no lower group to move up from.
This simple mathematical distinction has led many careless observers to deplore the illusory fact that there appears to be less “mobility” among rich and poor than there is in the middle. Rather than indicating that the poor are stuck at the bottom and the rich secure at the top, this is simply the unavoidable consequence of the fact that families at the top and bottom can move in only one direction. Like so much overheated rhetoric about inequality and mobility, this is just another example of people forming extremely strong opinions on the basis of extremely weak logic and evidence.
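The boundary effect the SCF report describes is easy to demonstrate with a toy simulation (a sketch under stated assumptions: the income model here, a persistent family component plus independent yearly noise, is purely illustrative and has nothing to do with the actual SCF data). Every simulated family follows the identical income process, yet the top and bottom fifths will look "stickier" than the middle fifths simply because they can exit in only one direction.

```python
import random

random.seed(0)
N = 100_000  # simulated families

# Illustrative assumption: each family has a persistent "ability"
# component plus independent year-to-year noise. Every family is
# statistically identical; there is no built-in class structure.
ability = [random.gauss(0, 1) for _ in range(N)]
year1 = [a + random.gauss(0, 1) for a in ability]
year2 = [a + random.gauss(0, 1) for a in ability]

def quintiles(values):
    """Assign each value a quintile index 0 (bottom) .. 4 (top)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    q = [0] * len(values)
    for rank, i in enumerate(order):
        q[i] = rank * 5 // len(values)
    return q

q1, q2 = quintiles(year1), quintiles(year2)

# For each starting quintile, count how many families stayed put.
stay = [0] * 5
count = [0] * 5
for a, b in zip(q1, q2):
    count[a] += 1
    if a == b:
        stay[a] += 1

for k in range(5):
    print(f"quintile {k}: stayed {stay[k] / count[k]:.1%}")
```

Running this, the extreme quintiles (0 and 4) report noticeably higher stay rates than the middle quintile, even though no family's income process differs from any other's. The apparent extra immobility at the top and bottom is exactly the one-direction artifact the report emphasizes, not evidence that anyone is "stuck."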
In case you missed it, in his Bloomberg column last week, law professor and former Obama administration OIRA head Cass Sunstein offered tips on “How to Spot a Paranoid Libertarian.” They’re people who “have a wildly exaggerated sense of risks to liberty, who adopt a presumption of bad faith on the part of government, who have a sense of victimization, who ignore the problem of tradeoffs, and who love slippery-slope arguments.” I probably know some folks who resemble that remark.
In the column and a follow-up blogpost, Sunstein distinguishes between “Paranoid Libertarians” and libertarians in general, who are “speaking on behalf of an important strand in America’s political culture.” And he’s right that virtually all ideologies, libertarianism included, attract some swivel-eyed, conspiratorial adherents who use too much ALLCAPS in their emails.
What Sunstein doesn’t have is anything resembling a case that “libertarian paranoia” is worth worrying about. In fact, beyond a few anodyne statements like “paranoia isn’t a good foundation for public policy,” he barely tries to make one.
Sunstein has, however, approvingly cited Adrian Vermeule’s paper on “libertarian panics.” I remember that paper very well, having blogged a fairly lengthy critique of it when it came out. It hasn’t improved with age.
The basic argument is plausible enough: Vermeule holds that the same biases and cognitive flaws that can make Americans hysterical about the risk of terror can also make us hysterical about the risks of government abuse. Thus, the salience of past examples of government overreaction to security threats—like the WWII internment of Japanese Americans—could lead us to overreact to liberty threats from government in the same way we might overreact to terrorist threats to security.
But when Vermeule gets to specific examples of destructive “libertarian panics,” there’s very little there there. The paper offers two: the American Revolution and the PATRIOT Act.
True, the Founders could be somewhat overeager to sniff out “design[s] to reduce them under absolute Despotism” (as the Declaration puts it) in every abuse perpetrated by the Crown. But when your lead example of irrational “tyrannophobia” is the country’s very Founding, you may have an uphill slog convincing Americans to panic about libertarian panics.
As for his second key example, in light of subsequent developments, Vermeule's discussion of the PATRIOT Act—opposition to which he characterized as "ignorant," "irrational," and "even hysterical"—now looks tragicomically off-base:
[C]onsider Section 215 of the Act, which allows courts to issue subpoenas for business records in national security investigations. Many have denounced the provision as a mechanism of governmental oppression. Yet the provision codifies a power that grand juries (typically dominated by prosecutors) have long exercised without judicial oversight.
Back then, the paranoids panicked about the government using 215 to get library records; hardly anyone thought the federal government would secretly invoke it for bulk collection of every American’s phone records and construction of what Sen. Ron Wyden (D-OR) has called “a federal human-relations database.”
I'm reminded of what Johns Hopkins cryptography professor Matthew Green recently wrote about the Snowden revelations: "I'm no longer the crank. I wasn't even close to cranky enough." (See "Crypto prof asked to remove NSA-related blogpost.")
But you don't have to be a "Paranoid Libertarian" to worry about potential abuse of the NSA's expanded powers, or to question how useful those powers are to Americans' security. I mean, as sober and reasonable a fellow as Cass Sunstein has recently done just that as a member of the president's post-Snowden NSA Review Group. So it's strange that he apparently finds Vermeule's paper convincing.
Then again, the two share some strange views on policy. Sunstein and Vermeule are occasional coauthors, most notably on a 2008 examination of "Conspiracy Theories." Some of these theories are dangerous, they write: they can "create or fuel violence," and "if government can dispel such theories, it should do so."
How? “Our main policy idea is that government should engage in cognitive infiltration of the groups that produce conspiracy theories” [emphasis in original]. Government agents, possibly operating “anonymously or even with false identities,” could “enter chat rooms, online social networks, or even real-space groups and attempt to undermine percolating conspiracy theories by raising doubts about their factual premises, causal logic or implications for political action.”
Now, it seems to me that if you wanted to breathe new life into conspiracy theories, a great way to do that would be to encourage the impression that people making rational arguments against them are government agents. But it’s a great illustration of the point Jesse Walker makes in his 2013 book The United States of Paranoia: A Conspiracy Theory: that elite fear of alleged “cranks” is a potent political force in American life. As he cautions, “the most significant sorts of political paranoia are the kinds that catch on with people inside the halls of power, not the folks on the outside looking in.”
Daniel J. Mitchell
My main goal for fiscal policy is shrinking the size and scope of the federal government and lowering the burden of government spending. But I’m also motivated by a desire for better tax policy, which means lower tax rates, less double taxation, and fewer corrupting loopholes and other distortions.
One of the big obstacles to good tax policy is that many statists think that higher tax rates on the rich are a simple and easy way of financing bigger government. I’ve tried to explain that soak-the-rich tax policies won’t work because upper-income taxpayers have considerable ability to change the timing, level, and composition of their income.
Simply stated, when the tax rate goes up, their taxable income goes down. And that means it's not clear whether higher tax rates lead to more revenue or less revenue. This is the underlying principle of the Laffer Curve.

For more information, here's a video from Prager University, narrated by UCLA Professor of Economics Tim Groseclose: "Lower Taxes, Higher Revenue."
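The mechanics can be sketched with a toy model. In this sketch, the base income, the elasticity, and the functional form are all assumptions for illustration, not figures from the video or from any real tax data; the only point is that revenue equals the rate times a tax base that shrinks as the rate rises, which produces the familiar hump:

```python
# Toy Laffer-curve sketch. BASE and ELASTICITY are hypothetical
# numbers chosen for illustration, not real tax data.
BASE = 1000.0       # assumed taxable income at a 0% rate
ELASTICITY = 1.0    # assumed behavioral response to the rate

def revenue(t):
    """Revenue collected at tax rate t (0 <= t <= 1)."""
    taxable_income = BASE * (1 - t) ** ELASTICITY  # income falls as t rises
    return t * taxable_income

# Scan rates to find the revenue-maximizing point (top of the "hump").
rates = [i / 100 for i in range(101)]
best = max(rates, key=revenue)
print(f"revenue-maximizing rate: {best:.0%}")   # 50% under these assumptions
print(f"revenue at 50%: {revenue(0.50):.0f}")
print(f"revenue at 70%: {revenue(0.70):.0f}")   # higher rate, less revenue
```

With these assumed numbers, pushing the rate from 50 percent to 70 percent shrinks the tax base enough that revenue actually falls, which is the behavioral response the column describes.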
Groseclose does an excellent job, and I particularly like the data showing that the rich paid more to the IRS following Reagan’s tax cuts.
But I do have one minor complaint: The video would have been even better if it emphasized that the tax rate shouldn’t be at the top of the “hump.” Why? Because as tax rates get closer and closer to the revenue-maximizing point, the economic damage becomes very significant. Here’s some of what I wrote about that topic back in 2012.
[L]abor taxes could be approximately doubled before getting to the downward-sloping portion of the curve. But notice that this means that tax revenues only increase by about 10 percent. …[T]his study implies that the government would reduce private-sector taxable income by about $20 for every $1 of new tax revenue. Does that seem like good public policy? Ask yourself what sort of politicians are willing to destroy so much private sector output to get their greedy paws on a bit more revenue.
The key point to remember is that we want to be at the growth-maximizing point of the Laffer Curve, not the revenue-maximizing point.
P.S.: Here's my own video on the Laffer Curve: "The Laffer Curve, Part I: Understanding the Theory."
Since it was basically a do-it-yourself production, the graphics aren’t as fancy as the ones you find in the Prager University video, but I’m pleased that I emphasized on more than one occasion that it’s bad to be at the revenue-maximizing point on the Laffer Curve.
Not as bad as raising rates even higher, as some envy-motivated leftists would prefer, but still an example of bad tax policy.