Patrick J. Michaels and Paul C. "Chip" Knappenberger
You Ought to Have a Look is a new feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best articles and essays in recent days, along with our color commentary.
We have a couple of new introductions to make to our You Ought to Have a Look line-up.
We’re big fans of Daniel Botkin. He is an environmental biologist with a panoramic view of nature. He started his career as a forest modeler (that’s someone who predicts the future composition and structure of forests) and was a Government-Issue global warmer. Since then, he has written 16 books on the environment and has become a champion lukewarmer—a person who, like us, synthesizes the climate data and comes to the hypothesis that warming will be modest and readily adapted to. On May 29, he testified before the House Committee on Science, Space, and Technology, on systematic problems with the United Nations’ Intergovernmental Panel on Climate Change. On June 18, he was before a subcommittee of the Senate Environment and Public Works Committee.
Botkin has a thought-provoking piece this week in the National Parks Traveler—a website dedicated to all things National Parks. In his article, he critiques a report issued by the Union of Concerned Scientists (UCS) with the predictably alarming title, “National Landmarks at Risk: How Rising Seas, Floods, and Wildfires Are Threatening the United States’ Most Cherished Historic Sites.” The paleolithic media were all over the UCS report when it came out six months ago, and it headlined several news shows on the dinosaur networks. For “balance,” we managed a few soundbites.
Botkin’s article is more in-depth than the UCS report, concluding that human-caused global warming gets far more attention than it deserves in the universe of environmental issues, which precludes appropriate attention to real issues.
However, global warming has become the sole focus of so much environmental discussion that it risks eclipsing much more pressing and demonstrable environmental problems. The major damage that we as a species are doing here and now to the environment is not getting the attention it deserves.
You ought to have a look at Botkin’s complete article!
Next we bring your attention to Watts Up With That, “the world’s most viewed site of global warming and climate change,” the result of (now retired) broadcast meteorologist Anthony Watts’ blood, sweat, and tears over the past several years. WUWT, as it is known, features a large array of climate-related articles, about four or five per day. A recent story that caught our eye was one featuring a collection of newsbites highlighting record agricultural output from around the world during the past year. The article “World Food Production at Record Levels” reinforces a point that we like to repeat as often as we can: the world is thriving in the face of, or even because of, climate changes.
Again, we recommend a click on Judith Curry’s Climate Etc. blog. Recently, she featured a guest post by Matt Skaggs, who presents an entirely new (to climate people, anyway) way to isolate (or, perhaps, not find) the signature of carbon dioxide-induced warming. “Root Cause Analysis of the Modern Warming” says that documenting a human fingerprint on global warming is much more uncertain than it is typically made out to be, as it is based upon a faulty line of reasoning. Curry offers this teaser:
The main point of relevance here is that there are different ways to frame and approach the climate change attribution problem, and the one used by the IPCC and mainstream climate scientists isn’t a very good one.
The post is lengthy and technical in spots, but it does make you wonder whether everyone has been trying to solve the attribution issue the wrong way.
The interwebs also provide a rapid-response platform when one’s scientific work is challenged in the refereed literature, as shown in a recent post at www.drroyspencer.com. Roy Spencer is writing about a recent publication in the journal Climate Dynamics claiming that his satellite-sensed temperatures—which show much less warming than do surface thermometers—are confounded by cloudiness. According to author Fuzhong Weng and colleagues, when the effects of cloudcover are accounted for, the warming trend in the satellite record increases by about 30%, putting it more in line with the surface records.
(Actually, that wouldn’t cut it for at least one obvious reason—Spencer’s satellite record does not have as much of the “pause” in warming since 1997 that appears in the surface temperature history that scientists prefer over others.)
Spencer’s rejoinder is pretty powerful, noting that he and his co-worker John Christy have visited and revisited the cloud issue for decades and find it to be nugatory. Further, Spencer questions why Weng et al. looked at only one of the very many sensing units that have been launched since 1978, and one that covers only 13 of the 35 years of the operational data set. No particular reason is cited for this, which tends to confirm that the Weng et al. paper is consistent with the paradigm theory of science first put forth by Thomas Kuhn in his 1962 (and many times reprinted) classic, The Structure of Scientific Revolutions. Kuhn’s thesis is that most scientists spend their careers trying to defend the established order, and when that order is challenged, they resort to some pretty bizarre attempts to demonstrate that everything is hunky-dory with the established paradigm. In this case, that would be that the surface temperature trends in the University of East Anglia record are more reliable than the satellite data—providing more (but declining) support for the notion that the human influence on climate is large and dangerous.
Observant readers of our ramblings will notice that we specifically ignore the surface temperature history from NASA, initially developed by Sergei Lebedeff and Jim Hansen. Those data have been processed (perhaps “Jimmied” is more appropriate) in ways that flunk Physics 101, and, la-dee-da, Jimmying produces more warming, more consistent with high-end fantasies about climate change that keep Hansen flying in the front of the plane. For a bit on that, see this story from the WUWT archives.
The big higher education news this week is that the Obama administration released its “gainful employment” rules aimed squarely at beleaguered for-profit colleges, which are the schools most likely to offer programs that are explicitly about supplying job skills. This attack does not seem to come because for-profits are objectively worse performers than the rest of the decrepit Ivory Tower, but because it is easy to demonize institutions that—unlike much of higher ed—are honest about trying to make a profit. Oh, and because going after the real culprit—an aid system that gives almost any person almost any amount of money to go to college—would require federal politicians to take on a system they created, and that makes them look ever-so-caring.
Perhaps the only unexpected thing about the regulations is that they do not include cohort default rates—the percentage of an institution’s borrowers defaulting on their loans within two or three years of entering repayment—among the assessments of aid worthiness. Instead, they just use debt-to-earnings ratios. The American Association of Private Sector Colleges and Universities—proprietary colleges’ advocacy arm—suspects this was done because including the default rate was projected to ensnare some community colleges, and the administration wanted this to be all about for-profit institutions.
There is reason to believe this may be true. The administration has lauded community colleges as the Little Schools That Could for a long time, and, indeed, directly compared them to for-profit schools in its press release for the new regulations. “The situation for students at for-profit institutions is particularly troubling,” they wrote. “On average, attending a two-year for-profit institution costs a student four times as much as attending a community college.” What didn’t they mention? According to federal data, completion rates at community colleges are around 20 percent, versus 63 percent at two-year for-profits. The data aren’t perfect—they capture only first-time, full-time students who finish at the institution where they started—but it is a yawning gap that illustrates a crucial point not just about gainful employment, but overall higher education policy: emotions and political concerns, not objective analysis, seem to drive it.
And speaking of objective analysis: We will be hosting what should be a great, diverse panel discussion on Wednesday, November 5, that will look at the changing face of higher education—including, no doubt, gainful employment—as well as offer predictions about what the previous night’s election results might mean for higher education. Hope to see you there!
Patrick J. Michaels
Increasingly, federal monies have been disbursed to the various departments and agencies in support of the Obama administration’s politically strange perseveration on global warming. Specifically, many millions go out each month for “public outreach,” more properly labeled propaganda, on the horrors of climate change.
To show how well-spent this money is, we draw attention to today’s posting from the Department of Energy’s communication director Marissa Newhall, featuring pumpkins with windmills (the correct name for “wind turbine”) and solar panels carved on them. A quote:
Last week, we shared some energy-themed pumpkin carving stencils to help you “energize” your neighborhood—and teach trick-or-treaters about energy—this Halloween. On our own time after work, we put the patterns to the test and carved some energy pumpkins of our own.
We’re wondering: were they also “on their own time after work” when they came up with the “energy-themed pumpkin carving stencils”?
Good news: after nearly three months of airstrikes in Iraq and Syria, the branding’s finally caught up to the bombing. Our latest war in the Middle East finally has a name: “Operation Inherent Resolve” is what we’re calling it, the Pentagon recently announced. DoD planners had initially rejected that name as uninspiring and “just kind of bleh,” but after several weeks of fruitless searching, they’ve decided it’s the best we can do.
Here’s Defense.gov’s banner graphic for “Operation Inherent Resolve”: simple, spare, sort of Sisyphean.
Actually, with its air of uninspired resignation, “Inherent Resolve” suits well enough, even if something like “Operation Eternal Recurrence” might have fit better. But it surely says something that, as with hurricanes, we’re running out of cool names for the wars presidents launch.
Now that we know what to call it, what should we make of Obama’s latest military intervention and how it fits into the president’s emerging legacy on constitutional war powers? Jack Goldsmith and Matthew Waxman have an important piece on that subject in the New Republic, arguing that “it is Obama, not Bush, who has proven the master of unilateral war.” “The war powers precedents Obama has established,” they explain, “will constitute a remarkable legacy of expanded presidential power to use military force.”
It’s a remarkable legacy, all right, though I might put somewhat less emphasis on “precedent” as such. Taken individually, as Goldsmith and Waxman acknowledge, very few of Obama’s actions are wholly unprecedented. But taken as a whole, the president’s approach to war powers begins to look like something new under the sun. As I argued recently at The Federalist, Obama will “go down in history as a ‘transformational’ president, having completed America’s transformation into a country where continual warfare is the post-constitutional norm.”
Presidents and Precedents
Goldsmith and Waxman identify “new precedents in three areas.” First, they argue that the 2011 Libya intervention (“Operation Odyssey Dawn”) marked an expansion of presidential power to launch airstrikes without congressional authorization, pointing to the Obama Office of Legal Counsel (OLC) argument that “such large-scale, non-consensual ‘airstrikes and associated support missions’ did not amount to ‘War’ that required congressional consent.”
The “police action” Truman ordered in Korea was war on a much larger scale (and with a less awful euphemism than the Obama team’s preferred coinage, “kinetic military action”—what’s the alternative, “static action”?) But perhaps Korea is something of an anti-precedent, as no president since has dared launch a ground invasion of that magnitude without seeking congressional cover. As Goldsmith and Waxman note, Bill Clinton’s 1999 air war over Kosovo is probably the closest parallel to Obama’s Libyan adventure 12 years later. The Obama OLC opinion on Libya “brought the Kosovo rationale out of the legal shadows and probably extended it. It will stand as the major precedent for unilateral presidential war from the air.”
It’s worth noting that both presidents waged war in the face of congressional votes refusing to authorize military action. In the Kosovo case, on March 24, 1999, the Senate passed a resolution supporting the bombing, but a month later the House voted down a declaration of war (427 to 2) and authorization for the airstrikes (213 to 213). “There’s broad support for this campaign among the American people, so we sort of just blew by” the House votes, the National Security Council spokesman said at the time. The House muddied the waters considerably by also voting down a resolution requiring the president to terminate the airstrikes immediately. I’d originally thought that Obama conducted the Libyan intervention amid congressional silence, but actually, in June 2011 (three months after the Tomahawks started flying), the House got around to voting on authorization and overwhelmingly rejected it (while rejecting, by a similar margin, a funding bill that would have ended direct combat operations. Sigh.)
The second precedent Goldsmith and Waxman identify is “the hole [the Obama administration] blew in the 60-day limit on unauthorized presidential uses of force imposed by the 1973 War Powers Resolution (WPR).” But here again, Clinton went first with Kosovo, with a 79-day bombing campaign that made him the first president to conduct a war beyond the WPR’s 60-day barrier. Still, Goldsmith and Waxman are right to identify the legal rationale the Obama administration advanced as especially troubling.
In Kosovo, the Clinton OLC argued that Congress had implicitly authorized a continuation of the Kosovo operation past the 60-day limit by appropriating funds for the mission. That argument was unavailable to the Obama team as the WPR clock ran out. Its solution, accomplished with an end-run around an objecting OLC, was to rely on then-State Department legal adviser Harold Koh’s risible argument that if you bomb a country, but they probably can’t hit you back, you’re not engaged in “hostilities” within the meaning of the War Powers Resolution. As James Mann has pointed out, “by that logic, a nuclear attack would not be a war.” It’s a far broader rationale than the one advanced by the Clinton team, and one that opens the door for any future president to make war at will, for extended periods, so long as he does it from a great height.
The final precedent Goldsmith and Waxman identify is the most important: the president’s staggeringly broad interpretation of the authorization for the use of military force (AUMF) Congress passed three days after the September 11 attacks, empowering the president to wage war against the perpetrators of 9/11 and those who “harbored” them. The Obama administration “extended the AUMF’s mandate dramatically, and gave its most expansive interpretation, when it pronounced last month that the statute applied to the Islamic State,” despite the fact that ISIS is an even worse fit with the plain language of the AUMF than the AQ “associated forces” targeted under the AUMF, having neither “planned, authorized, committed, or aided” 9/11 attacks, nor “harbored” a group that’s excommunicated them.
Going Permanently ‘Kinetic’
Recently, a report from PolitiFact evaluated the claim that Obama had bombed more countries than Bush. They rated it “True”—he’s bombed at least seven countries, possibly eight. PolitiFact couldn’t settle on a precise number. Their report included this intriguing sentence: “both presidents may have bombed the Philippines.” Here again, secret warfare isn’t unprecedented. But given the frequency and pace of operations under the 2001 AUMF, at some point a quantitative difference becomes a qualitative one.
Throughout the 20th century, “presidential wars” were geographically limited, often short and sharp departures from the peacetime norm. In the 21st century, however, we’ve gone permanently “kinetic.” Presidential wars are no longer temporary departures from a baseline of peace. As a “war president,” Barack Obama has institutionalized—and accelerated—a trend that began in the Bush administration: war without temporal or spatial boundaries. The “operational tempo” can range from steady to frantic, but the beat goes on, unceasingly. Perpetual presidential war is becoming “the new normal.”
Under the AUMF, Obama has launched eight times as many drone strikes as Bush. At a Senate Foreign Relations Committee hearing last year, a top DoD official affirmed that the AUMF would allow the president to put “boots on the ground” in the Congo without further authorization from Congress. Indeed, the Pentagon envisions a war on terror that will go on “at least 10 or 20 years more.” Possibly the AUMF will serve as the basis for President Chelsea Clinton’s (or George P. Bush’s) “kill list” in 2033.
As a candidate for the Democratic nomination in 2008, Barack Obama stood out as one of the few serious contenders who hadn’t voted for the Iraq War; as a state senator in 2002, he’d decried it as a “dumb war.” Now his administration cites the 2002 authorization for that “dumb war” as a possible source of authority for another one, 12 years later. Meanwhile, as Goldsmith and Waxman write, “the man who hoped to end the war under the 2001 AUMF, and who pledged to fight expansions of its mandate, [has] unilaterally interpreted it to broaden its substantive reach geographically and its temporal reach far into the future.”
It’s said that Obama privately worries that expanded executive powers will lie around like a “loaded weapon” for future presidents to abuse. If so, he’s apparently decided it’s a worry he can live with, through all the “dumb wars” to come.
Steve H. Hanke
Every country aims to lower inflation, unemployment, and lending rates, while increasing gross domestic product (GDP) per capita. Through a simple sum of the former three rates, minus year-on-year per capita GDP growth, I constructed a misery index that comprehensively ranks 109 countries based on “misery.” Below the jump are the index scores for 2013. Countries not included in the table did not report satisfactory data for 2013.
Steve H. Hanke
For a clear snapshot of a country’s economic performance, a look at my misery index is particularly edifying. The misery index is simply the sum of the inflation rate, unemployment rate and bank lending rate, minus per capita GDP growth.
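The arithmetic behind the index can be sketched in a few lines. This is a minimal illustration of the formula as stated above; the input figures are hypothetical, not data for any actual country:

```python
def misery_index(inflation, unemployment, lending_rate, gdp_growth_per_capita):
    """Hanke's misery index: the sum of the inflation, unemployment,
    and bank lending rates, minus year-on-year per capita GDP growth.
    All arguments are in percent."""
    return inflation + unemployment + lending_rate - gdp_growth_per_capita

# Hypothetical figures for illustration only:
# 8% inflation, 10.5% unemployment, 12% lending rate, 2.5% per capita growth
print(misery_index(8.0, 10.5, 12.0, 2.5))  # 28.0
```

A higher score means more misery; strong per capita growth pulls the score down, which is why fast-growing economies can rank well even with elevated rates elsewhere.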
The epicenter of the Ebola crisis is Liberia. My October 15, 2014 blog reported on the level of misery in and prospects for Liberia.
This blog contains the 2012 misery indexes for Guinea and Sierra Leone, two other countries in the grip of Ebola. Yes, 2012; that was the last year in which all the data required to calculate a misery index were available. This inability to collect and report basic economic data in a timely manner is bad news. It simply reflects the governments’ lack of capacity to produce. If governments can’t produce economic data, we can only imagine their capacity to produce public health services.
With Ebola wreaking havoc on Guinea and Sierra Leone, the level of misery is, unfortunately, very elevated and set to soar.
Daniel R. Pearson
The U.S. Department of Commerce (DOC) announced Oct. 27 that it had reached draft agreements with Mexican sugar exporters and the Mexican government to suspend antidumping and countervailing duty (AD/CVD) investigations on imports of sugar from that country. Commerce has requested comments from interested parties by Nov. 10, with Nov. 26 indicated as the earliest date on which the final agreements could be signed. Given the obvious level of consultation by governments and industries on both sides of the border leading up to this announcement, it’s reasonable to presume that the agreements will enter into effect within a few weeks.
Suspension agreements that set aside the AD/CVD process in favor of a managed-trade arrangement are relatively rare. They sometimes are negotiated when the U.S. market requires some quantity of imports, and when the implementation of high AD/CVD duties would be expected to curtail trade severely. This would have been the case, assuming the duties actually had entered into effect. However, as this recent blog post indicates, it’s not at all clear that the U.S. International Trade Commission (ITC) would have determined that imports from Mexico were injuring the U.S. industry. A negative vote (a vote finding no injury) by the ITC would have ended these cases and left the U.S. market open to imports of Mexican sugar.
What are the key provisions of the agreements? There are restrictions on both the price and quantity of imports from Mexico. Sugar will only be allowed to be imported into the United States if it is priced above certain levels: 20.75 cents per pound (at the plant in Mexico) for raw sugar, and 23.75 cents per pound for refined sugar. (For comparison, U.S. and world prices for raw sugar currently are about 26 cents and 16 cents, respectively; for refined sugar about 37 cents and 19 cents.) Additional price controls on individual Mexican exporters based on their alleged prior dumping (selling at a price the DOC determines to be less than fair value) will further raise the prices at which they will be allowed to sell.
Quantity restrictions on imports will be imposed through a formula related to supply and demand conditions in the U.S. market. A knowledgeable sugar industry analyst has calculated that Mexican exporters would be allowed to sell a minimum of approximately 1.3 million metric tons raw value (MMTRV) during the 2014-15 marketing year (Oct. 1, 2014 to Sept. 30, 2015). Depending on market conditions next spring and summer, that figure may rise to around 1.45 MMTRV. (Over the past seven years, imports from Mexico ranged between 0.629 MMTRV and 1.927 MMTRV.) No more than 60 percent of Mexico’s exports may be in the form of refined sugar. The timing of import arrivals will be controlled. Mexico will utilize export licenses to prevent more than 30 percent of its allowed sales from arriving in the United States during the October-December quarter, and no more than an additional 25 percent during January-March.
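The shipment-timing restrictions described above can be sketched numerically. The figures (1.3 MMTRV minimum, 30 percent by December, an additional 25 percent by March) come from the article; the allocation function itself is our simplified illustration, not the agreement’s legal formula:

```python
# Sketch of the quarterly shipment limits described in the article.
# Annual figure is the approximate minimum Mexican allowance for the
# 2014-15 marketing year, in million metric tons raw value (MMTRV).
ANNUAL_MMTRV = 1.3

def quarterly_caps(annual):
    """Return (Oct-Dec cap, Jan-Mar cap, remainder) under the timing rules:
    no more than 30% of allowed sales may arrive in the October-December
    quarter, and no more than an additional 25% during January-March."""
    oct_dec = 0.30 * annual
    jan_mar = 0.25 * annual
    remainder = annual - oct_dec - jan_mar  # shippable April-September
    return oct_dec, jan_mar, remainder

caps = quarterly_caps(ANNUAL_MMTRV)
print([round(c, 3) for c in caps])  # [0.39, 0.325, 0.585]
```

In other words, under the minimum allowance, no more than roughly 0.39 MMTRV could arrive before January, leaving the bulk of Mexico’s exports for the back half of the marketing year.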
Since 2008 when NAFTA’s sugar provisions were fully implemented, there has been an open border for bilateral trade in sweeteners. Now that trade will be subject to a tightly controlled regime in which both governments will play important roles in making sure that market forces are not allowed to operate.
Who are the likely winners and losers from this new arrangement? As might be expected, the U.S. sugar industry got pretty much everything it wanted. Both price and quantity will be constrained in ways that keep the U.S. market isolated from the world. U.S. growers can be expected to continue to enjoy artificially inflated earnings.
Mexican growers got perhaps half of what they wanted. True, they have at least temporarily given up the open access to the U.S. sugar market that was negotiated under NAFTA. However, they have staved off what may have been the complete loss of their most important export market, in the event the ITC had ruled against them. They have obtained guaranteed access for slightly more than the quantity of sugar that had been exported to the United States on average in the seven years since NAFTA’s full implementation. And, as an additional benefit, the price restrictions imposed by the DOC will mean that they are likely to sell at higher prices in this managed market than would otherwise have been the case.
Officials in the U.S. Department of Agriculture (USDA) who run the sugar program also likely see themselves as benefitting from the suspension agreements. They have the rather unenviable task of trying to manage sugar supplies from all sources. This not only includes imports from the 41 countries that have rights to export sugar to the United States under the tariff-rate quota (TRQ) system. It also encompasses commercial deliveries of sugar produced by U.S. growers, who accepted marketing limits years ago in order to retain their high level of government price support. Up until now, the only unregulated source of supply to the U.S. sugar market has been imports from Mexico. Managers of the U.S. market may find it easier to maintain a tight enough balance between supply and demand to prevent the price from falling to the support level. Low domestic prices lead to costs for USDA, which no doubt generates flak for the people running the program. (Note: Making it easier for officials to supplant the invisible hand of the marketplace likely wouldn’t be seen as a good thing by the late free trader, Adam Smith.)
It’s not hard to identify losers from a tightly managed U.S. marketplace. Anyone who uses sugar is paying more for it than it is worth in the outside world. A press release by the Sweetener Users Association indicates its concern that additional import restrictions will lead to greater market uncertainty and higher prices. Consumers can expect to pay hundreds of millions of extra dollars per year for sugar-containing products. This cost increase will act like a regressive tax. Low-income people will forfeit a higher percentage of their incomes to pay for this new consumption “tax” than will people with relatively higher incomes. (Will the White House criticize the deal because it leads to greater inequality?)
Another likely group of losers are U.S. producers and exporters of high-fructose corn syrup (HFCS). The United States generally is believed to be the world’s lowest-cost producer of HFCS, which has become the preferred sweetener for soft drinks and other liquid applications in North America. U.S. exports of HFCS to Mexico have risen more than three-fold since 2007 and recently have amounted to a million metric tons per year. (Liberalization under NAFTA has led to active sweetener trade in both directions.) The suspension agreement generates uncertainty for HFCS producers because sugar that otherwise would have been exported from Mexico to the United States now may stay south of the border and be used instead of HFCS in soft drinks. It would not be surprising to see a notable decline in HFCS exports in the coming years.
On the other hand, Mexico had made clear its intention to retaliate in some form in the event AD/CVD duties were implemented against sugar, with HFCS being a likely target. (Note: Such retaliation likely would not be consistent with Mexico’s obligations under NAFTA and the WTO, but those commitments have not been much of a restraint in the past. The history of bilateral sweetener disputes provides ample evidence of Mexico’s ability to discriminate against imports of HFCS.) Thus, the U.S. HFCS industry may be hurt less by the suspension agreement than it would have been hurt by Mexico’s reaction to an adverse decision at the ITC. (Why is it that efficient industries often seem to suffer harm when governments try to protect inefficient industries?)
Of course, the U.S. and Mexican economies also will be losers under the settlement agreement. Both will tend to see scarce resources being allocated more poorly. GDP will be lower in each country, although minimally. Since the effects will be small, should we be concerned? The main concern is that this is one more among many policy choices in which the U.S. government has sided with special interests at the expense of the public interest. Could that be a reason that the economy has struggled to get back on its feet?
Which leads to the final loser: U.S. international trade policy. The United States currently is negotiating trade agreements including the Trans-Pacific Partnership (TPP), the Trans-Atlantic Trade and Investment Partnership (TTIP), and (at least still in theory) the World Trade Organization (WTO) Doha Round. Does making a public statement to the effect that protecting U.S. sugar growers is the central organizing principle of U.S. trade policy do anything to strengthen the hand of U.S. negotiators? Hardly. Rather, the suspension agreement with Mexico likely will make it more difficult to persuade Japan to eliminate tariffs on its sensitive agricultural products. Our Canadian neighbors are being challenged in the TPP to end their highly restrictive dairy and poultry programs. What kind of message does the sugar suspension agreement send to them? It might be best if U.S. trade policy was simply to take two aspirin and go to bed until 2017.
Daniel J. Mitchell
Having a vision of a free society doesn’t mean libertarians are incapable of common-sense political calculations.
For example, the long-run goal is to dramatically shrink the size and scope of the federal government, both because that’s how the Founding Fathers wanted our system to operate and because our economy will grow much faster if labor and capital are allocated by economic forces rather than political calculations. But in the short run, I’m advocating for incremental progress in the form of modest spending restraint.
Why? Because that’s the best that we can hope for at the moment.
Another example of common-sense libertarianism is my approach to tax reform. One of the reasons I prefer the flat tax over the national sales tax is that I don’t trust that politicians will get rid of the income tax if they decide to adopt the Fair Tax. And if the politicians suddenly have two big sources of tax revenue, you better believe they’ll want to increase the burden of government spending.
And that’s a good segue to today’s topic, which deals with a common-sense analysis of the value-added tax.
Here’s the issue: I’m getting increasingly antsy because some very sound people are expressing support for the VAT.
I don’t object to their theoretical analysis. They say they don’t want the VAT in order to finance bigger government. Instead, they argue the VAT should be used only to replace the corporate income tax, which is a far more destructive way of generating revenue.
And if that was the final–and permanent–outcome of the legislative process, I would accept that deal in a heartbeat. But notice I added the requirement about a “permanent” outcome. That’s because I have two requirements for such a deal:
1. The corporate income tax could never be reinstated.
2. The VAT could never be increased.
And this shows why theoretical analysis can be dangerous without real-world considerations. Simply stated, there is no way to guarantee those two requirements without amending the Constitution, and that obviously isn’t part of the discussion.
So my fear is that some good people will help implement a VAT, based on the theory that it will replace a worse form of taxation. But in the near future, when the dust settles, the bad people will somehow control the outcome and the VAT will be used to finance bigger government.
Here are examples to show why I am concerned.
Here’s some of what Tom Donlan wrote for Barron’s.
…the U.S. imposes the highest corporate tax rate in the developed world. Make no mistake, corporations pay no tax. That is a tax on American consumers, American workers, and American shareholders. Don’t think that the corporate income tax eases your personal tax burden. Add your share of the corporate income tax to the other taxes you pay. Better yet, create a business tax we can all understand. A value-added tax is a tax on consumption. We would pay it according to the amount of the economic resources we choose to enjoy, and we would not pay it when we choose to save and invest in making the economy bigger and more productive. We would pay it on imported goods as much as on those domestically produced. The makers of goods for export would receive a rebate on their value-added tax. Trading the corporate income tax for the value-added tax is one of the best fiscal deals the U.S. could make.
I agree in theory.
America’s corporate tax system is a nightmare.
But I think giving Washington a new source of tax revenue is an even bigger nightmare.
Professor Greg Mankiw at Harvard, writing for the New York Times, also thinks a VAT is better than the corporate income tax.
…here’s a proposal: Let’s repeal the corporate income tax entirely, and scale back the personal income tax as well. We can replace them with a broad-based tax on consumption. The consumption tax could take the form of a value-added tax, which in other countries has proved to be a remarkably efficient way to raise government revenue.
Once again, I can’t argue with the theory.
But in reality, I simply don’t trust that politicians won’t reinstate the corporate tax. And I don’t trust that they’ll keep the VAT rate reasonable.
At this point, some of you may be thinking I’m needlessly worried. After all, journalists and academic economists aren’t the ones who enact laws.
I think that’s a mistaken attitude. You don’t have to be on Capitol Hill to have an impact on the debate.
Besides, there are elected officials who already are pushing for a value-added tax! Congressman Paul Ryan, the Chairman of the House Budget Committee, actually has a “Roadmap” plan that would replace the corporate income tax with a VAT, which is exactly what Donlan and Mankiw are proposing.
…this plan does away with the corporate income tax, which discourages investment and job creation, distorts business activity, and puts American businesses at a competitive disadvantage against foreign competitors. In its place, the proposal establishes a simple and efficient business consumption tax [BCT].
At the risk of being repetitive, Paul Ryan’s plan to replace the corporate income tax with a VAT is theoretically very good. Moreover, the Roadmap not only has good tax reform, but it also includes genuine entitlement reform.
But I’m nonetheless very uneasy about the overall plan because of very practical concerns about the actions of future politicians.
In the absence of (impossible to achieve) changes to the Constitution, how do you ensure that the corporate income tax doesn’t get re-imposed and that the VAT doesn’t become a revenue machine for big government?
By the way, this susceptibility to the VAT is not limited to Tom Donlan, Greg Mankiw, and Paul Ryan. I’ve previously expressed discomfort about the pro-VAT sympathies of Kevin Williamson, Josh Barro, and Andrew Stuttaford.
This video sums up why a value-added tax is wrong for America: “The Value Added Tax: A Hidden New Tax to Finance Much Bigger Government.”
Last but not least, let me preemptively address those who will say that corporate tax reform is so important that we have to roll the dice and take a chance with the VAT.
I fully agree that the corporate income tax is a self-inflicted wound to American prosperity, but allow me to point out that incremental reform is a far simpler–and far safer–way of dealing with the biggest warts plaguing the current system.
- Lower the corporate tax rate.
- Replace depreciation with expensing.
- Replace worldwide taxation with territorial taxation.
So here’s the bottom line: If there’s enough support in Congress to get rid of the corporate income tax and impose a VAT, that means there’s also enough support to implement these incremental reforms.
There’s a risk, to be sure, that future politicians will undo these reforms. But the adverse consequences of that outcome are far lower than the catastrophic consequences of future politicians using a VAT to turn America into France.
P.S. I also very much recommend what George Will wrote about the value-added tax.
P.P.S. I’m also quite amused that the IMF accidentally provided key evidence against the VAT.
The New York Times launches a series of investigative reports on corporate lobbying of state attorneys general. But you have to read fairly far down in the story to find the “nut graf” on why this is happening now. Radley Balko summed it up in a tweet: “As prosecutors get increasingly powerful, lobbyists will increasingly spend money to try to influence them.” And the article does note that:
A robust industry of lobbyists and lawyers has blossomed as attorneys general have joined to conduct multistate investigations and pushed into areas as diverse as securities fraud and Internet crimes….
The increased focus on state attorneys general by corporate interests has a simple explanation: to guard against legal exposure, potentially in the billions of dollars, for corporations that become targets of the state investigations.
It can be traced back two decades, when more than 40 state attorneys general joined to challenge the tobacco industry, an inquiry that resulted in a historic $206 billion settlement.
Microsoft became the target of a similar multistate attack, accused of engaging in an anticompetitive scheme by bundling its Internet Explorer with the Windows operating system. Then came the pharmaceutical industry, accused of improperly marketing drugs, and, more recently, the financial services industry, in a case that resulted in a $25 billion settlement in 2012 with the nation’s five largest mortgage servicing companies.
The trend accelerated as attorneys general — particularly Democrats — began hiring outside law firms to conduct investigations and sue corporations on a contingency basis.
I wrote about this 30 years ago in the Wall Street Journal, citing Hayek’s assessment from 40 years before that:
Nobel laureate F.A. Hayek explained the process 40 years ago in his prophetic book The Road to Serfdom: “As the coercive power of the state will alone decide who is to have what, the only power worth having will be a share in the exercise of this directing power.”
As the size and power of government increase, we can expect more of society’s resources to be directed toward influencing government.
Those who work to increase the size, scope, and power of government need to recognize: This is the business you have chosen. If you want the federal government to tax (and borrow) and transfer – and reallocate through prosecution – $3.8 trillion a year, if you want it to supply Americans with housing and health care and school lunches and retirement security and local bike paths, then you have to accept that such programs come with incentive problems, politicization, corruption, and waste. And that special interests will find ways to influence such momentous decisions, no matter what lobbying restrictions and campaign finance regulations are passed.
David J. Armor
W. Steven Barnett’s attempt to rebut my review of preschool research begins with an ad hominem attack on my (and Cato’s) motives for publishing this piece, calling it an “October Surprise” with an aim “to raise a cloud of uncertainty regarding preschool’s benefits that is difficult to dispel in the time before the election.” He omits that my first review of preschool research was published in January, the same month Cato sponsored a public forum on the topic with both pro and con speakers. The current, expanded review was published now because it took me that long to finish it.
Of course, it is crucial to let the research and arguments speak for themselves, but for what it is worth, I have no formal affiliation with Cato or any other organization other than George Mason University, while Barnett is Director of The National Institute for Early Education Research (NIEER), whose mission is to “support high-quality, effective early childhood education for all young children.” Barnett is a long-time advocate of universal preschool, while I had no position on pre-k until I read reports from the national Head Start Impact Study (HSIS).
Moving on to substantive matters, Barnett says that because the successful Perry and Abecedarian programs were small and more intensive than current proposals, we should devote more resources to replicate them at scale, not discount them as of limited value in indicating how much larger, and different, programs would work. But current “high quality” pre-K programs, including Abbott pre-K, do not in fact replicate either of these programs. Moreover, Barnett ignores the national Early Head Start demonstration, a program similar to Abecedarian, which found no significant long-term effects in Grade 5 except for a few social behaviors of black parents–hardly an endorsement to make it universal. Moreover, this one area of positive effects is tempered by significant negative effects on certain cognitive skills for the most at-risk students.
The difference in outcomes between the tiny Abecedarian project and the national Early Head Start demonstration program may be simply one of scale and bureaucracy. There is an enormous difference between designing and implementing a program for a few dozen mothers and infants in a single community and doing the same for thousands of children in many different communities across the country. In a national implementation, there are many more opportunities for implementation problems in leadership, staffing, program design, and so forth.
About my criticism of Regression Discontinuity Design (RDD) studies, Barnett says the flaws I describe are “purely hypothetical and unsubstantiated.” But I’m not alone in perceiving them; my concerns are shared by Russ Whitehurst, former Director of the Institute of Education Sciences in the U.S. Department of Education. More importantly, Barnett says my criticism about attrition (or program dropouts) is pure speculation, which is simply untrue.
In my review, I reported that the Tulsa, Oklahoma, and Georgia treatment and control groups differed significantly in family background characteristics (mother’s education and limited English proficiency) that are known to be related to achievement test scores. The Boston study reported a 20 percent dropout rate from the treatment group, and those students were more disadvantaged than the stay-ins. It is true, though, that I can’t estimate the dropout problem in Barnett’s 2007 Abbott study; he does not report or describe attrition rates, nor does he provide any benchmark data that would allow a reader to compare the treatment and control group prior to testing. For the RDD studies that provide data (many do not), there is no empirical support for Tom Bartik’s suggestion that the dropouts could be children from wealthier families. Where data have been reported, the dropouts are more disadvantaged on one or more socioeconomic characteristics.
Barnett next claims that the Head Start and Tennessee evaluations are “not experimental” but quasi-experimental, like the Chicago Longitudinal Study. Regarding Head Start, Barnett (relying on Tom Bartik) misunderstands a reanalysis of the Head Start data by Peter Bernardy, which I cited. The original Head Start study found no significant long-term effects. Bernardy simply did a sensitivity analysis by excluding control group children who had some type of preschool; he found no long-term effects, same as the original Head Start study.
Regarding the Tennessee experiment, Barnett is correct that I omitted a small positive effect for a single outcome: grade retention. But he conveniently fails to mention that the Tennessee experiment found no long-term effects for the major outcome variables, including cognitive performance and social behaviors, and there was even a statistically significant negative effect for one of the math outcomes.
He then complains that my cost figures are miscalculated, saying I should subtract costs for existing pre-K programs and use “marginal” rather than average costs per child. On the first point, my review simply says that “states could be spending nearly $50 billion per year to fund universal preschool”; there is no need to subtract existing costs because it is simply an estimate of possible peak expenditures. Regarding marginal vs. average costs, my $12,000 figure is based on 2010 per pupil costs. Current average costs are most certainly higher – the federal Digest of Education Statistics actually places total costs per-pupil at roughly $13,000 – and $12,000 is not unreasonable for marginal costs since 80% of education costs are for teacher salaries and benefits, the primary marginal cost components.
Barnett then argued that I “omit[ted] much of the relevant research,” implying I would get different results had I included other reviews, especially one by the Washington State Institute for Public Policy (WSIPP) published in January. This is simply inaccurate. The WSIPP report breaks programs down by state/city, Head Start, and “Model” programs. Of the 13 state preschool evaluations, my review included 8 (WSIPP counted three different reports for the Tulsa, Oklahoma, program as separate studies). Of the remaining five programs not in my review, three are RDD studies for Arkansas, New Mexico, and North Carolina with no information on attrition or relevant statistics to compare the treatment and control group at the time of testing. I did include the New Jersey Abbott program, despite this problem of interpretation, because it is frequently mentioned and promoted as a high-quality preschool program.
Of the three model programs in the WSIPP study, my review included two, Perry Preschool and the Abecedarian project. The third is the IDS program mentioned by Barnett and also reviewed by him in other reports. I have been unable to obtain a copy of this 1974 study, and Barnett’s review provides very little information about it. He reports a standardized effect size of .4 at the end of pre-K, but with no documentation about treatment and control group equivalence at the start of preschool. Neither is there information about attrition or dropout rates during the preschool year. It is therefore hard to assess the reliability of this effect. He acknowledges that a later follow-up study to document long term effects in adulthood suffered from “severe attrition” and may not be reliable.
Most important, the average standardized effect that WSIPP found for all test scores across all state/city programs was .31, which is only somewhat higher than the average Head Start standardized effect of about .2 across all tests. One reason the WSIPP effect is higher than Head Start is the inclusion of the extraordinary standardized effects (.9 or so) for the three Tulsa studies. Furthermore, the WSIPP study also documented the fade-out effect.
I do not understand Barnett’s claim that the New Jersey Abbott program has effects “three times as large” as the Head Start study. His 2007 report says the gain in reading at age four “…represents an improvement of about 28 percent of the standard deviation for the control (No Preschool) group” and the gain in math “…represents an improvement of about 36 percent of the standard deviation for the control (No Preschool) group.” These represent standardized effects of .28 and .36, which average out to about .3 and thus are about the same as the WSIPP average for all state/city programs. This is somewhat higher than the Head Start Impact Study (.2) but certainly not three times higher. Moreover, his study presents no information about attrition for the treatment group, nor does he provide the reader with a table that compares the treatment and control group on socioeconomic characteristics prior to or at the time of testing.
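For readers who want to check the arithmetic behind this comparison, the short sketch below restates it in plain Python. The effect sizes are the ones reported above; everything else is just the standard definition of a standardized effect (gain divided by the control group’s standard deviation).

```python
# A standardized effect expresses a gain as a fraction of the
# control group's standard deviation.
def standardized_effect(gain, control_sd):
    return gain / control_sd

# Abbott pre-K gains as reported: 28% and 36% of the control SD.
abbott = [0.28, 0.36]
abbott_avg = sum(abbott) / len(abbott)   # roughly .3

# Head Start Impact Study average standardized effect.
head_start = 0.2

# The Abbott average is larger, but well under three times larger.
ratio = abbott_avg / head_start
```

As the comment notes, the ratio comes out to roughly one and a half, not three.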
Perhaps the most important point in this debate is something that Barnett does not explain, which is how any of the studies we have discussed support universal preschool. The only studies that give reliable information – meaning valid research designs with statistically significant results – on long-term benefits such as crime, educational attainment, and employment are the Abecedarian and Perry Preschool programs. Even assuming that these programs could be generalized to larger populations (holding aside the contrary implications of Early Head Start), these programs apply only to disadvantaged children who need a boost. There is little justification based on these programs to claim that middle class children will experience the same benefits.
What does public choice theory say about responding to Ebola?
That is: What are the costs and benefits of various policies – not to the public – but to self-interested politicians? Public choice theory holds that politicians’ interests don’t always coincide with the public’s, and sometimes they diverge quite sharply. When interests diverge, politicians will often side with their own self-interest, even at the expense of the public.
So what do they want? Politicians want public esteem. They want above all to be seen as heroes. If that means sacrificing civil liberties - to little or no public benefit - then they will do so.
This remains true even if the “heroic” measures at hand amount to Ebola security theater. It would appear that’s what we’re getting - a set of state-level quarantines that are actually contrary to what doctors and epidemiologists recommend. (No, the public probably won’t care what the experts say. I mean, look – the public still buys antibacterial soaps, and public health experts don’t recommend those either.)
In general, then, we can expect politicians to be eager to quarantine. This eagerness will be completely independent of the specific facts of any particular disease. Recall that lots of politicians once wanted to be able to set up an HIV quarantine, too, even long after it was well known that HIV can’t be transmitted by hugging, kissing, sharing utensils, sharing toilet seats, non-euphemistic cuddling, or what have you. (Wasn’t that a loooong time ago? No: It was just last year. And they got what they wanted.)
In short, whether or not a quarantine is right in any particular case – and it might be right in some cases, though I wouldn’t know – public choice theory says that politicians will err on the side of quarantine.
If that seems cynical, consider the flip side: Politicians also don’t want to look like the ones who let Ebola into the country. Note that one might look like the person who brought Ebola into the country even when one’s policies are responsible for exactly zero additional Ebola risk. Life is unfair sometimes. Even to politicians.
To look like a screwup, all you have to do… is nothing. The public will be left to stew in its fears, and they hate it when that happens. So they will punish you, and your party, at the next possible opportunity. (When is that again?)
The costs of doing nothing here are especially high if your constituency happens to be made up of conservatives – who, as Jonathan Chait has pointed out, have a strong emotional preference for purity and cleanliness. We should thus expect to find fear of contamination at or near the top of the to-do list for conservatives, who will try, first, to intensify these fears, and second, to promote their own policies as the only ones capable of relieving them.
Much as I may hate to say it, this model explains very well the actions of New Jersey Governor Chris Christie, who enacted an Ebola quarantine against consensus medical opinion. Nurse Kaci Hickox, herself quarantined, has since delivered a harrowing account of her chaotic re-entry experience. Hardly the hero’s welcome that she deserved.
Now, we might well expect Hickox to protest. After all, she was the one actually spending the days in isolation. We should consider, then, the opinions of disinterested experts, who understand the risks but who did not have their personal liberty at stake. This letter in the New England Journal of Medicine seems especially on point:
[Quarantine for health workers] is not scientifically based, is unfair and unwise, and will impede essential efforts to stop these awful outbreaks of Ebola disease at their source, which is the only satisfactory goal. The governors’ action is like driving a carpet tack with a sledgehammer: it gets the job done but overall is more destructive than beneficial.
When Christie appeared to abandon his quarantine policy – and let’s be honest about it, that’s basically what he did – he explained himself as follows:
“We’re trying to be careful here,” Christie said on NBC’s “Today,” referring to his state’s policy. “This is common sense, and … the American public believes it is common sense. And we’re not moving an inch. Our policy hasn’t changed, and our policy will not change.”
It’s common sense! And yet common sense isn’t necessarily what’s called for here. Common sense may win elections, but viruses are a lot more like chemistry than they are like common sense. Common sense doesn’t vary, but viruses’ properties do vary, often tremendously. The appropriate measures for containing each of them will likewise vary, and these measures will not always include quarantine. In a case like this, politicians, who must run on common sense, and on common fears, are unfortunately the last people we should be listening to. We know their biases too well.
Just in time for Halloween, a vampire lawsuit against school choice has risen from the dead.
Nearly a month ago, a Florida judge dismissed the Florida Education Association’s (FEA) lawsuit against a bill amending the state’s school choice laws, ruling that the plaintiffs lacked the standing to sue because they were not harmed. The union wanted to block the creation of the Personalized Learning Scholarship Accounts program for students with special needs, and “in particular” the so-called “expansion” of the Florida Tax Credit Scholarship (FTCS) law, which provides tax credits to corporations in return for donations to nonprofit scholarship organizations that help low-income children attend the schools of their choice. There are two additional lawsuits against school choice in Florida, including another involving the FEA.
This year, nearly 70,000 low-income students received FTCS scholarships. One former scholarship recipient, Denisha Merriweather, recently wrote an op-ed for the Wall Street Journal explaining how the FTCS allowed her to switch from her assigned district school, which failed to meet her needs, to a private school where she thrived.
Last week, the FEA filed an amended complaint with additional plaintiffs. The union argues that the new plaintiffs have standing as district school teachers and parents of district school students because they “are threatened by the implementation of […] the expansion of the Florida Tax Credit Scholarship Program,” which they claim would cause the district schools to “[lose] considerable funding” since the scholarship funds “that otherwise would go to support the public schools are instead redirected through an intermediary to provide vouchers [sic] for Florida children to attend private schools.” (The FEA’s complaint did not discuss the impact of the Personalized Learning Scholarship Accounts.)
The union’s argument suffers from at least two fatal flaws.
First, the FTCS does not “redirect” any state funds. The state of Florida allocates funds to schools, in part, on a per-pupil basis, but the fiscal impact of a student leaving her assigned district school to accept a tax-credit scholarship is no different than the fiscal impact of a student moving out of the district, attending private school without a scholarship, or homeschooling. Moreover, if the funds were actually “redirected” then the state would not realize any savings. In fact, the state’s own Office of Program Policy Analysis and Government Accountability found that the FTCS generates significant savings ($36.2 million in 2008-09) because the forgone revenue is less than the reduction in state expenditures.
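To make the fiscal logic concrete, here is a minimal sketch. The per-pupil dollar figures below are illustrative assumptions, not values from the OPPAGA report; only the enrollment figure echoes the number cited above.

```python
# Hypothetical illustration: a tax-credit scholarship yields net
# savings whenever the forgone tax revenue per scholarship is less
# than the per-pupil spending the state avoids.
students = 70_000          # approximate FTCS enrollment cited above
avoided_per_pupil = 7_000  # hypothetical state allocation per pupil
credit_per_pupil = 5_000   # hypothetical forgone revenue per scholarship

# Net savings are positive whenever credit_per_pupil < avoided_per_pupil.
net_savings = students * (avoided_per_pupil - credit_per_pupil)
```

Whatever the exact numbers, the point is that the state comes out ahead so long as each scholarship costs it less in forgone revenue than it would have spent on the student.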
Second, the union is factually incorrect in asserting that the challenged legislation, SB 850, “expanded” the FTCS. The bill loosened eligibility requirements by eliminating the requirement that recipients spend the prior academic year in a district school; allowing foster students to continue receiving scholarships if adopted; and raising the income thresholds for eligibility for full and partial scholarships. However, the bill did not expand the amount of tax credits available nor did it add any new credits against other taxes. In other words, while the bill increased the number of students who can apply for scholarships, it did not increase the actual amount of available tax credits or scholarship funds.
The FEA’s vampire lawsuit misunderstands how the FTCS law works and misstates the facts about what the legislation does. The judge should drive a stake through its heart.
Unsurprisingly, Nevada officials are cracking down on Uber. Last Friday, the San Francisco-based transport technology company announced its launch in Sin City. On the day of the launch, eight Uber drivers in Las Vegas had their cars impounded and were issued citations for providing an “unlicensed for-hire transportation service.” In addition, District Court Judge James Russell banned Uber drivers from offering ridesharing services in Nevada until at least early November.
Instead of spending time and money on impounding ridesharing vehicles, Nevada officials should turn their attention to reforming taxi regulations, which make it difficult for taxis in Las Vegas to compete with Uber.
As I noted last week, Las Vegas has especially burdensome taxi regulations in place. Taxi drivers in Las Vegas are restricted with regard to where, how, and sometimes (depending on which medallion they have) when they can pick up passengers. Uber drivers are not nearly as restricted. Given such an environment, it shouldn’t be surprising that yesterday Uber was reporting an “insane” level of demand.
According to an Uber spokeswoman, the company is financially and legally supporting drivers dealing with citations and impounded vehicles, as it has done in other jurisdictions where drivers have run afoul of regulators. According to one Uber driver in Las Vegas, five Nevada Taxicab Authority vehicles and two undercover officers with black ski masks were used to impound his Ford Focus while he was trying to drop off passengers.
In an ideal regulatory framework, taxis and ridesharing drivers would compete fairly and Uber drivers would not have their cars impounded. Unfortunately, many taxi regulations in the U.S. allow for incumbent protection and do little to encourage competition and innovation that would benefit consumers.
Uber drivers’ experiences in Las Vegas highlight the regulatory grey area that Uber and other sharing economy companies occupy. Before the rise of the sharing economy the distinction between private car owners and taxis was clear. Today, Uber, Lyft, and Sidecar make that distinction more difficult to make, and regulators across the U.S. are struggling to keep up with the changes in technology that allow for the sharing economy to exist.
Yet rather than deal with companies like Uber by reexamining and updating existing taxi regulations or taking steps to make taxis more competitive, the Nevada Taxicab Authority has deployed officials to crack down on drivers using Uber. This is an overreaction to the emergence of ridesharing. The Nevada Taxicab Authority ought to consider a range of changes to existing regulations such as not restricting how and where Las Vegas taxi drivers can pick up passengers.
The technology that allows Uber and other ridesharing companies to operate is not going anywhere. The Nevada Taxicab Authority cannot possibly expect the impounding of ridesharing vehicles to be an effective long-term strategy. In the short term, however, it shouldn’t be surprising if the Nevada Taxicab Authority continues to use the existing outdated regulatory environment to its advantage in order to protect taxi drivers from competition.
The victory of the secular party Call of Tunisia (Nidaa Tounes) in the parliamentary election on Sunday carries two lessons for observers of transitions in the Middle East and North Africa (MENA). The first one is broadly optimistic, but the second one should be a cause for concern, heralding economic, social, and political troubles ahead.
1. The Arab Spring was not a one-way street to religious fundamentalism.
In spite of the unexpected and often violent turns that political events have taken in countries such as Syria or Libya, the revolutions across the MENA countries were not just thinly disguised attempts to impose theocratic rule on Arab societies. While Islam is an important cultural and social force, most people in the region have little appetite for a government by Islamist extremists. In fact, much of the headway that Islamist politicians made shortly after the fall of authoritarian regimes in the region can be explained by their track records as community organizers or providers of public services.
Tunisia is a case in point. Already in 2011, the country’s leading Islamic party, Ennahda, featured numerous women candidates in the election, and following a political crisis last year it negotiated a peaceful handover to a caretaker government that led the country to yesterday’s election.
Tunisia’s new leading political force, Nidaa Tounes, may have gained as many as 80 seats in the 217-seat parliament. It describes itself as a ‘modernist’ party. It unites secular politicians of various stripes, including labor union members and former officials of the regime of president Zine el-Abidine Ben Ali. The leader of the party, the 87-year-old Beji Caid el-Sebsi (who served as interim prime minister in 2011), had a long political career prior to the revolution, including an ambassadorship in Berlin after Ben Ali’s ascent to power.
2. Don’t expect radical economic reforms.
For those who feared that democratization in the MENA region could bring about theocracy and extremism, the status-quo nature of Nidaa Tounes is probably good news. At the same time, however, it seems unlikely that the party, whose sympathizers largely overlap with those of the country’s influential labor unions, will bring about the deep institutional and economic changes that Tunisia needs in order to extend access to economic opportunity to ordinary Tunisians by dismantling Byzantine red tape and corruption and freeing up its economy.
For example, while it is certainly praiseworthy that the party has promised to improve the economic situation of women, one should worry that it plans to do so by what are likely to be popular yet ineffective measures: creating a new government bureau fighting discrimination, investing in social housing for young female workers, and extending statutory maternity leave.
More importantly, in many areas the exact economic platform of Nidaa Tounes remains blurry. The party promises to foster consensus among the government, civil society, labor unions, and employers. It also promises to cut public spending – in part by reforming the system of fuel subsidies – to increase industrial exports and promote industries with high value added, most notably hi-tech and renewable energy, and to subsidize economic development in poorer regions by an amount of 50 billion dinars ($28 billion) over the next five years, 30 billion of which would come from the public budget.
Heavy on clichés and light on specifics, these promises are reminiscent of the electoral manifestos of Europe’s social democratic parties. Regardless of whether that would be a good thing under normal circumstances, what Tunisia needs now is a bold agenda of economic liberalization, as well as a Leszek Balcerowicz-like figure to implement it. With a mushy economic program and Mahmoud Ben Romdhane – former deputy head of Tunisia’s ex-communist party, Ettajdid – as the party’s key economic policy figure, Nidaa Tounes offers neither.
Michael F. Cannon
Here are ten reasons everyone should attend this Thursday’s Cato Institute conference, “Pruitt, Halbig, King & Indiana: Is Obamacare Once Again Headed to the Supreme Court?”
- The very next day – October 31 – the Supreme Court could grant certiorari in King v. Burwell. Reporters who attend will be able to write their stories in advance.
- Our luncheon keynote speaker, Oklahoma attorney general Scott Pruitt, filed the first Halbig-style challenge in September 2012. (Does that mean I should call them “Pruitt-style challenges”?) Last month, a federal district court sided with Pruitt against the federal government. Our morning keynote speaker, Indiana attorney general Greg Zoeller, filed the fourth such challenge, Indiana v. IRS. A ruling is expected at any time. Pruitt and Zoeller will discuss why they have asked the Supreme Court to grant cert in King.
- We’ve already been King-ed! The Center for American Progress and Families USA were so impressed (or worried) about our conference that they scheduled a conference call with reporters to piggyback on (or drown out) any coverage of our conference. Their teleconference is on Wednesday, October 29, at 10am ET. Dial in: 888-576-4398. Confirmation code: 1635383.
- Case Western Reserve University law professor Jonathan Adler, an intellectual father of the Halbig cases, will discuss recent and future court rulings. So will law professor Jim Blumstein, intellectual father of the Supreme Court’s Medicaid ruling in NFIB v. Sebelius, who also played a seminal role in the Halbig cases.
- Len Nichols, who advised the Senate on state-run vs. federal Exchanges, will explain why all this is nonsense.
- Health-insurance industry expert Bob Laszewski will explore the possible impact of Halbig.
- University of Washington law professor David Ziff will discuss how Halbig critics could improve their arguments.
- The Constitutional Accountability Center’s Brianne Gorod, who wrote the amicus briefs filed by the members of Congress who wrote Obamacare, will explain what Congress really intended.
- AEI’s Tom Miller, who helped launch the Halbig cases, will explore how states might respond to a Halbig win.
And finally, the number-one reason you should attend this conference…
- Obamacare architect Jonathan Gruber will explain his flip-flop on Halbig. Ha! Just kidding. The real number-one reason is: these lawsuits have more of a shot than you thought, and you need to get up to speed.
Juan Carlos Hidalgo
There were no surprises in Brazil’s runoff election: just as the polls had predicted in the days leading up to the vote, President Dilma Rousseff beat Senator Aécio Neves by over 3 percentage points (51.6% to 48.4%). Despite high inflation, widespread corruption charges, and the threat of recession, the incumbent Workers’ Party (PT) won an unprecedented fourth term in power. Now what?
Brazil’s electoral map shows a divided country: the poor northern and northeastern states voted for Rousseff while most of the richer southern and southeastern states went for Neves. This divide has become more pronounced during the years of PT rule, as the incumbent party increases welfare spending every election cycle and warns voters that the opposition would get rid of these programs if elected.
President Rousseff gave a conciliatory speech in which she talked about bringing Brazilians together, being a better president than she was in the previous four years, and the need for economic reform. Can she do it? The acrimonious tone of her campaign will make it hard for Rousseff to win over the half of the electorate that voted for Neves. Her appeal to voters wasn’t based on promises of a better future but on scaremongering about what a Neves victory would mean for Brazil’s poor. Moreover, new revelations in the growing corruption scandal at Petrobras, which seem to show that Rousseff and her predecessor Lula da Silva were aware of the shenanigans at the state-owned oil giant, threaten to taint her second term.
As for the economy, during the campaign Rousseff said that she would replace her Finance Minister, Guido Mantega, who is blamed for Brazil’s lackluster economic performance. Still, the stock market took a beating today and the real fell by 3%. Two reasons the bad shape of the economy didn’t play a decisive role in the election are that unemployment is low —which has a lot to do with many younger Brazilians going to university instead of looking for a job— and that the government held back the publication of bad statistics until after the election.
Can Rousseff deliver reform? Doubtful. As Mary O’Grady points out today in the Wall Street Journal, “Ms. Rousseff ran as the anti-market, welfare-state candidate.” With an economy not even growing by 1% and a stubbornly high inflation rate, the question Brazilians are asking themselves is whether Rousseff will reform or instead double down on interventionist policies. One area to pay particular attention to is freedom of the press. What we’ve seen in a number of other Latin American countries ruled by left-wing governments is that, as the economy sours and corruption scandals mushroom, the authorities push for more regulations on the media. Will Brazil follow this pattern?
There are good reasons not to be optimistic about Brazil in the next four years.
For decades, the federal government has struggled with the issue of storing waste from commercial nuclear reactors and defense-related nuclear activities. The government has spent billions of dollars planning for nuclear waste disposal, but the creation of a permanent storage site is years behind schedule due to federal mismanagement and safety concerns. A new report confirms that the current proposed site, Yucca Mountain in Nevada, is safe for use.
The United States has more than 65,000 metric tons of spent nuclear fuel with the volume expected to double by 2055. The Nuclear Waste Policy Act of 1982 aimed to create a permanent disposal site for radioactive waste by 1998. After many studies, Yucca Mountain was chosen as the single national disposal site in 1987, and engineers and construction crews went to work. Between 2001 and 2007 the project’s total life-cycle cost estimate increased from $77 billion to $106 billion, measured in constant 2012 dollars.
To fund the project, the 1982 Act created a fee, or tax, on all nuclear electric utilities, charged on the basis of kilowatt-hours generated. The fee generated $750 million annually for the Nuclear Waste Fund, which accumulated a balance on paper of more than $25 billion.
In 2010, the Obama administration, with strong urging from Senate Majority Leader Harry Reid, a Democrat from Nevada, decided to close down the Yucca Mountain site. The Government Accountability Office (GAO) said that the administration did not cite any “technical or safety issues” for the closure. The administration also did not include other options for storage, but instead set up a committee to study the issue. Apparently, Reid did not want the site in his state under any circumstances, regardless of any previous agreements between the nuclear industry and the government.
The abrupt closure created a bizarre situation: electric utilities and their customers were paying a $750 million annual tax to store nuclear waste at Yucca, but those storage plans were halted. In November 2013 an appeals court ordered the Department of Energy to stop collecting the Nuclear Waste Fund fee and resume planning for the site. Energy Secretary Moniz suspended the fee in 2014.
The appeals court also ordered the Nuclear Regulatory Commission to resume the site’s licensing process. Now, the long-awaited safety report confirms that the Yucca Mountain site meets project requirements. The New York Times summarized the report saying that it “concluded that the design had the required multiple barriers, to assure long-term isolation of radioactive materials.” Waste stored at the site is expected to remain safely isolated for one million years.
The administration must now decide how to proceed. It can ignore the report and try to push the issue of nuclear storage onto the next administration, or it can reopen the permitting process and ignore the wishes of Majority Leader Reid.
The issue of nuclear waste is complex, but federal mismanagement and the actions of the Obama administration have delayed a long-term solution.
Yesterday, Ukrainian voters went to the polls to elect a new parliament, replacing the deputies elected prior to the Euromaidan protests of early 2014. In a piece at Al-Jazeera America published on Sunday, I highlighted a few ways in which the election results could impact Ukraine’s future relations with Europe and Russia, and the resolution of the ongoing crisis in Eastern Ukraine. Prior to the vote, a high level of uncertainty about the likely makeup of the Rada – especially the prospect of far-right parties (i.e., Svoboda or Right Sector) or populist parties (i.e., Oleh Lyashko’s Radical Party) winning seats – was a major concern, as was the uncertainty over whether they might be represented in government. A new governing coalition will be instrumental in the resolution of the conflict, shaping how aggressively Ukraine pursues the rebels in the Donbas region.
Fortunately, initial exit polls today indicate reasonably positive results. The three mainstream pro-Western parties did well, with the Poroshenko bloc polling around 22.2%, the Popular Front at 21.8%, and surprise contender Samopomich, a Lviv-based moderate party, polling at 14%. These results are excellent news, as a governing coalition with no far-right or populist elements should be possible. The far-right party Svoboda will be represented in parliament, as will the populist Radical Party, but the latter did worse than expected, taking home only around 6% of the vote. Rounding out the major parties, Yulia Timoshenko’s Fatherland party also did worse than expected, taking just over 5% of the vote. The main surprise is the success of the Opposition Bloc, a successor to Yanukovych’s Party of Regions, which was not expected to obtain seats but instead took around 7% of the vote.
These results are extremely preliminary, and as with pre-election polling, they give only a broad national figure for how people voted. Thus, they predict only the 225 seats allotted by proportional representation; the remaining 225 seats are elected in individual districts, for which we have no exit polling data. The parties associated with Petro Poroshenko are expected to do well in these races, but they are also likely to yield high numbers of independent candidates. Full results are expected by Thursday morning.
Until we know the final makeup of the new Rada, as well as which parties ultimately will form the coalition government, it’s difficult to assess how the results will impact the ongoing crisis. Many citizens in Crimea and the Donbas were indeed unable to vote, disenfranchising as much as 19% of the population. The overwhelmingly pro-Western nature of the parties elected may be a double-edged sword: it will be popular with Western politicians, but it is in part a reflection of the disenfranchisement of Eastern Ukraine, and will not be truly representative. Despite this, Russian leaders appear to have accepted the results, signaling, hopefully, a willingness to work with Kiev in the future. Whether any government will be able to tackle Ukraine’s myriad problems is unclear. But while full electoral results will give us a better idea of what to expect from a new Ukrainian government, for now, the indications are reasonably positive.
Fifty years ago today, the actor Ronald Reagan gave a nationally televised speech on behalf of the Republican presidential nominee, Senator Barry Goldwater. It came to be known to Reagan fans as “The Speech” and launched his own, more successful political career.
And a very libertarian speech it was:
This idea that government was beholden to the people, that it had no other source of power is still the newest, most unique idea in all the long history of man’s relation to man. This is the issue of this election: Whether we believe in our capacity for self-government or whether we abandon the American Revolution and confess that a little intellectual elite in a far-distant capital can plan our lives for us better than we can plan them ourselves.
You and I are told we must choose between a left or right, but I suggest there is no such thing as a left or right. There is only an up or down. Up to man’s age-old dream – the maximum of individual freedom consistent with order – or down to the ant heap of totalitarianism. Regardless of their sincerity, their humanitarian motives, those who would sacrifice freedom for security have embarked on this downward path. Plutarch warned, “The real destroyer of the liberties of the people is he who spreads among them bounties, donations and benefits.”
The Founding Fathers knew a government can’t control the economy without controlling people. And they knew when a government sets out to do that, it must use force and coercion to achieve its purpose.
Video versions of the speech here.
For libertarians, Reagan had his faults. But he was an eloquent spokesman for a traditional American philosophy of individualism, self-reliance, and free enterprise at home and abroad, and words matter. They change the climate of opinion, and they inspire people trapped in illiberal societies. And these days, when people claiming the Reagan mantle push for wars or military involvement in Iraq, Iran, Syria, and other danger spots, we remember that Reagan challenged the Soviet Union mostly in the realm of ideas; he used military force only sparingly. George W. Bush, whom some call “Reagan’s true political heir,” increased federal spending by more than a trillion dollars even before the financial crisis. We watch the antigay crusading of today’s conservative Republicans and remember that Reagan publicly opposed the early antigay Briggs Initiative of 1978 (featured in the movie Milk).
Cato and the Constitutional Accountability Center have filed another amicus brief in a marriage case, this one challenging Louisiana’s restriction of marriage licenses to opposite-sex couples and its non-recognition of out-of-state same-sex marriages. Filed in the U.S. Court of Appeals for the Fifth Circuit—where last month we filed in a case out of Texas—this is an appeal from the only ruling to uphold a state marriage law since the Supreme Court’s decision in United States v. Windsor struck down part of the Defense of Marriage Act. (A federal judge in Puerto Rico also recently upheld that commonwealth’s law.)
Our previous briefs, including in that Texas case and also regarding the marriage laws of Oklahoma, Utah, Virginia, Michigan, Tennessee, Kentucky, Indiana, and Wisconsin in the Tenth, Fourth, Sixth, and Seventh Circuits, respectively, focused on the original public meaning of the Fourteenth Amendment’s Equal Protection Clause and its guarantee of “equality under law” for all. Here, however, we focus on federalism, democracy, and why states shouldn’t automatically get judicial deference when they pass legislation.
That is, the Fourteenth Amendment significantly reworked the constitutional order such that the U.S. Constitution now protects individual liberty against state infringement (which wasn’t the case before the Civil War). When the district court held that Louisiana was free to deny loving, committed same-sex couples the freedom to marry because the state “has a legitimate interest … for addressing the meaning of marriage through the democratic process,” it empowered the people of the states to use the democratic process to oppress disfavored minorities and thus overturned the constitutional order we’ve had since 1868.
Since the ratification of the Reconstruction-Era amendments, the Constitution has required states to respect fundamental constitutional principles, curtailing the power of majorities to violate individual rights. Consistent with these first principles, the Supreme Court has repeatedly recognized that constitutional guarantees that protect the individual from abuse by the government cannot be left to the democratic process. As it said in the foundational case of West Virginia v. Barnette (1943), “[o]ne’s right to life, liberty, and property, to free speech, a free press, freedom of worship and assembly, and other fundamental rights may not be submitted to vote; they depend on the outcome of no elections.” The right to equal protection of the laws similarly trumps majoritarian rule. Indeed, if majority approval were enough to make state-sponsored discrimination constitutional, the Fourteenth Amendment would be a dead letter.
Nobody doubts, as the district court recognized—and as Cato is the first to trumpet—that federalism is a “vibrant and essential component of our nation’s constitutional structure.” Federalism “is more than an exercise in setting boundaries between different institutions of government for their own integrity,” wrote Justice Anthony Kennedy in his unanimous opinion in Bond v. United States just three years ago. But state sovereignty “is not an end in itself: Rather, federalism secures to citizens the liberties that derive from the diffusion of sovereign power.”
In other words, where constitutional limits apply, state prerogatives necessarily end. As a long line of Supreme Court precedent makes clear, even when states act in an indisputably state sphere, they can’t use the democratic process to write inequality into law and deny to some people core aspects of liberty.
Instead of applying these well-established constitutional precepts, the district court deferred to the outcome of the “democratic process,” suggesting that any other result would be to “read personal preference[s] into the Constitution.” But there’s no “marriage exception” to the Fourteenth Amendment. Equal rights under law is not a policy preference; it’s a constitutional mandate. By allowing the people of Louisiana to impose a class-based badge of inferiority on committed same-sex couples and their children, the district court misapprehended the Fourteenth Amendment’s guarantee of equal protection—which protects all persons from state-sponsored discrimination, including the plaintiffs in this case and all gay men and lesbians who wish to exercise their right to marry—and disregarded vital principles of constitutional supremacy.
The Fifth Circuit will hear argument in Robicheaux v. Caldwell later this year.