Cato Conference: "Pruitt, Halbig, King & Indiana: Is ObamaCare Once Again Headed to the Supreme Court?"
Michael F. Cannon
On October 30, the Cato Institute will host a conference featuring leading experts on four legal challenges that critics understandably yet mistakenly describe as “the most significant existential threat to the Affordable Care Act”:
Thursday, October 30, 2014, 9:00AM – 1:30PM.
Luncheon to follow.
Featuring: Oklahoma Attorney General Scott Pruitt; Indiana Attorney General Greg Zoeller; Robert Barnes, The Washington Post; Jonathan Adler, Case Western Reserve University School of Law; David Ziff, University of Washington School of Law; Brianne Gorod, Constitutional Accountability Center; James Blumstein, Vanderbilt University; Michael F. Cannon, Cato Institute; Len Nichols, George Mason University; Tom Miller, American Enterprise Institute; and Robert Laszewski, Health Policy and Strategy Associates, LLC.
In Pruitt v. Burwell and Halbig v. Burwell, federal courts have ruled that the Internal Revenue Service is misinterpreting the Patient Protection and Affordable Care Act, unlawfully paying billions of dollars to private health insurance companies, and unlawfully subjecting more than 50 million individuals and employers to the Act’s individual and employer mandates. In King v. Burwell, another federal court found the IRS’s interpretation is permissible. A fourth lawsuit, Indiana v. IRS, is due a ruling at any time.
While these cases attempt to uphold the ACA by challenging the Obama administration’s interpretation, supporters and critics agree they could have as large an impact on the law as any constitutional challenge. Is the IRS acting within the confines of the law? Is the ACA unworkable as written? Is it inevitable that the Supreme Court will hear one of these cases, or a similar challenge yet to be filed? What is the impact of the IRS’s (mis)interpretation? What impact would a ruling for the plaintiffs have on the health care sector and the ACA? Leading experts, including the attorneys general behind Pruitt v. Burwell and Indiana v. IRS, will discuss these and other dimensions of this litigation.
To register to attend this event, click here and then submit the form on the page that opens, or email events [at] cato [dot] org, or fax (202) 371-0841, or call (202) 789-5229 by 9:00 a.m. on Wednesday, October 29, 2014.
Patrick J. Michaels and Paul C. "Chip" Knappenberger
You Ought to Have a Look is a recurring feature from Cato's Center for the Study of Science that briefly highlights a few interesting blog posts from around the web commenting on subject areas we are currently emphasizing. Climate change issues currently top the list. Here we post a few of the best from recent days, along with our color commentary. This is the first installment of You Ought to Have a Look.
We start off with the estimable Judith Curry, former chairwoman of the highly regarded School of Earth and Atmospheric Sciences at the Georgia Institute of Technology (aka “Georgia Tech”). Her musings, published every few days on her blog “Climate Etc.,” have a wide following among climate geeks (like us), and her posts are often of interest to a wider, more general audience as well.
Judith scored big last week with an excellent op-ed in the Wall Street Journal. In her subsequent blog post “My WSJ op-ed: Global warming statistical meltdown,” she takes you through the version that appeared in print as well as some of its earlier drafts, highlighting lessons she learned along the way. The article focuses on her recent blockbuster publication, in which she and co-researcher Nic Lewis peg the earth’s climate sensitivity—how much warming will occur as a result of a doubling of the atmospheric concentration of carbon dioxide—at about half the value produced by the collection of “state-of-the-art” climate models used by the UN and the Obama administration to underpin their calls to mitigate carbon dioxide emissions from energy production.
And nearly every Friday, she posts her “Week in Review” where she highlights things that have recently caught her eye or events that she was involved in. In the current issue, she describes her recent travels which included a trip to Ohio’s Oberlin College where she “debated” me (PJM). As she describes it:
The debate went fine, we each had 10 minutes to make opening statements on the science, and then an additional 10 minutes to discuss broader implications. I used my time to discuss the values issues and decision making under deep uncertainties. PJM discussed the increasingly perverse incentives in academia and government funded science, see [link] for some of his recent writing on this topic. He definitely makes some valid points.
Next, you might want to check out the witty Matt Briggs (“Statistician to the Stars”) post on “Don’t Say ‘Hiatus’” in which he takes us (and virtually everyone else) to task for using the terms “pause” and/or “hiatus” to refer to the past 18 years or so of no statistically significant overall change in the earth’s average surface temperature. Briggs’ main point is that since climate change models are so bad (unskillful), there is no reason for a priori expectations of the temperature behavior one way or the other. In other words, a “pause” from what?
Be aware that Briggs is a very twisty writer, often leading the reader down a path that takes a sharp turn further down his somewhat detailed essays. But there is always some gem to find at the end!
Briggs is an interesting character. He is associated with Cornell, where he teaches an advanced statistics course. He was an editor of the Journal of Climate, the American Meteorological Society’s flagship climate journal, but he quit after tiring of the terrible manuscripts that were sent in (and often published).
It’s hard not to like Matt’s style, and you will be seeing a lot of his work highlighted here.
Finally, our friend Roy Spencer’s wide-ranging blog at drroyspencer.com (usually on things atmospheric, but sometimes otherwise) has an interesting article pointing out that size matters when it comes to climate change. In his post “Climate Change: A Meaningless Artifact of Technology?,” Roy notes that if you can’t distinguish the signal of climate change from the noise of natural variability (or, even if you can, the signal is exceedingly tiny), then there is really nothing worth getting worked up about. Roy worries:
“This seems to be the fate of our advanced society — we must find increasingly obscure things to fret over as we solve our major problems…hunger, disease, water-borne illness, infant mortality. But with real problems now appearing – renewed terrorist threats, Ebola — I fear we are straining gnats as we swallow camels.”
In a case of good timing, Roy followed up that post with one illustrating a prime example of all this, courtesy of Secretary of State John Kerry, who recently said something along the lines of “Life as you know it on Earth ends if climate change skeptics are wrong.” In his post “Life as You Know It Will End if John Kerry is Wrong…OR Right,” Roy demonstrates that, given the catastrophically high cost of converting even half of our fossil-fuel-based energy to renewables, most of us will be living in poverty if Kerry’s solutions are implemented. Roy includes a video in which he and other energy policy experts discuss how a premature push toward large-scale renewable energy will increase human suffering. It is well worth watching.
Stay tuned to our You Ought to Have a Look series for more blog highlights like these in the days ahead!
Craig D. Idso and Patrick J. Michaels
Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”
With all of the negative effects predicted to occur in response to the ongoing rise in the air’s carbon dioxide (CO2) concentration—a result of burning fossil fuels to produce energy—it is only natural to want to see what has been happening to our Earth’s many ecosystems as the atmospheric carbon dioxide load has risen. (Its atmospheric concentration has risen from around 280 parts per million to nearly 400 ppm, an increase of about 43 percent).
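That 43 percent figure follows directly from the two concentrations cited above; a quick back-of-the-envelope check (using only the approximate 280 ppm and 400 ppm values from the text):

```python
# Back-of-the-envelope check of the CO2 increase cited above.
pre_industrial_ppm = 280  # approximate pre-industrial concentration
current_ppm = 400         # approximate current concentration

increase_pct = (current_ppm - pre_industrial_ppm) / pre_industrial_ppm * 100
print(f"Increase: about {increase_pct:.0f}%")  # prints "Increase: about 43%"
```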
A new study by the University of California’s Christopher Dolanc and colleagues does just that, for the diverse Sierra Nevada forests of California.
Dolanc and his colleagues analyzed two periods: historic measurements between 1929 and 1936, and modern data from 2001 through 2010. And when we said “diverse,” we meant it. They “classified 4,321 historical plots and 1,000 modern plots into nine broad groups of vegetation types that are widely used by land managers and researchers in the region.” This is what grad students are for!
They compared tree density and composition between the two periods, within and between the nine forest types. The results are shown in Figure 1 below.
Figure 1. Percent change in tree density by forest type in the Sierra Nevada Range, USA, as determined from historic (1929-1936) and modern (2001-2010) measurements. Green bars denote a statistically significant change. You might want to call this “California Greening.” Source: Dolanc et al. (2014).
As you can see, they found that tree density was significantly higher (in eight of nine forest types) in the current era of high carbon dioxide, when compared to the early period when concentrations were around 306 ppm, only about 10 percent above the pre-industrial background.
In addition, looking across forest types, they found that density was higher in all western-slope forests. Note to the numbers people: a 128 percent increase in tree density, as seen for montane hardwood forests, is a huge greening.
With respect to why there was a significant increase in tree density over the past several decades, Dolanc offers that the changes in the density and composition of lower-elevation forests are consistent with fire suppression; but that the density increases in high-elevation vegetation types (subalpine forests generally don’t burn) are “more likely to be caused by changing climate.”
Some of this climate change may be due to the CO2 emitted to the atmosphere by the burning of fossil fuels over the past century, although an inspection of regional climate data shows most of the Sierra warming occurred from 1910 through the early 1930s, long before the major emissions. Also, recent research has tied 20th century temperature fluctuations in northern California, and much of the Pacific Northwest, to naturally occurring atmospheric/ocean circulation patterns over the Pacific Ocean.
Let’s be charitable and say that the Sierra vegetation is responding favorably to all kinds of climate change, whether induced by humans or natural variability. At a minimum, the observations of this study do not support fears of widespread forest decline in the face of rising temperatures and atmospheric CO2. It is noteworthy that this greening occurs across the substantial diversity of ecosystems found along this altitudinal gradient.
Another factor not considered in this report is the direct fertilization effect of carbon dioxide, a well-known phenomenon documented in thousands of laboratory and field experiments around the world. That’s a different subject altogether. But, from this study, it is apparent that whatever subtle changes in Sierra Nevada climate are occurring, we are witnessing a California Greening.
Dolanc, C.R., Safford, H.D., Dobrowski, S.Z. and Thorne, J.H. 2014. Twentieth century shifts in abundance and composition of vegetation types of the Sierra Nevada, CA, US. Applied Vegetation Science, 17, 442-455.
It’s not just high-profile culture-war issues like same-sex marriage and the right to bear arms that the Supreme Court is avoiding like the plague. On issues ranging from federalism to property rights to criminal law, the justices increasingly decline to hear any case they don’t absolutely have to – no matter how important the issues presented – especially if there’s a threat of an irreconcilable split. Such is the brave new world of John Roberts’s minimalism/unanimity.
The latest such example came yesterday morning, in a criminal procedure case called Jones v. United States, in which Cato filed an amicus brief that I previously blogged about. The issue here is whether, pursuant to the Sixth Amendment, a judge can base a sentence on facts that the jury did not find beyond a reasonable doubt. (The Court ruled in a 2000 case called Apprendi v. New Jersey that judges can’t enhance sentences beyond statutory maximums based on facts, other than prior convictions, not decided by the jury – but in Jones the sentences in question, while seemingly harsh and unreasonable, were still within the sentencing guidelines.)
Normally we don’t know what the justices are thinking when they deny a cert petition, or even how the vote went (four votes are needed to grant). In the Jones denial, however, Justice Antonin Scalia wrote a rare dissenting opinion, joined by Justices Clarence Thomas and Ruth Bader Ginsburg. Here’s the salient bit:
The Sixth Amendment, together with the Fifth Amendment’s Due Process Clause, “requires that each element of a crime” be either admitted by the defendant, or “proved to the jury beyond a reasonable doubt.” Any fact that increases the penalty to which a defendant is exposed constitutes an element of a crime, and “must be found by a jury, not a judge.” We have held that a substantively unreasonable penalty is illegal and must be set aside. It unavoidably follows that any fact necessary to prevent a sentence from being substantively unreasonable—thereby exposing the defendant to the longer sentence—is an element that must be either admitted by the defendant or found by the jury. It may not be found by a judge. [emphasis original; internal citations omitted.]
And so the petitioners came up one vote short. The three dissenters may seem like an unusual grouping, but these justices are actually often together on issues relating to criminal defendants’ jury-trial rights. (It’s sort of the left/right versus the center, or the principled versus the pragmatic.) They were in the Apprendi majority, for example, as well as in the majority for the case that struck down the mandatory nature of the sentencing guidelines, United States v. Booker (2005), and recent cases involving the right to confront witnesses against you. Alas, they were joined in those cases by Justices John Paul Stevens and David Souter, who have since been replaced by Justices Sonia Sotomayor and Elena Kagan, respectively. It’s not a big surprise that Kagan seems to have joined the “pragmatic” bloc for these purposes, but Sotomayor’s vote is disappointing. Some commentators point to her background as a prosecutor to explain such deference, yet Justice Sotomayor has been one of the most pro-defendant votes in Fourth Amendment and habeas corpus cases.
In any event, whatever the reason for the lack of a crucial fourth vote to grant, this was another opportunity lost by the Court, another responsibility shirked. For more commentary, see here, here, here, and here.
The problems with federal highway spending are well documented. The program distorts project incentives and distributes money inefficiently. A new report from the Government Accountability Office (GAO) adds to the list of problems, detailing improper fund management within the Federal Highway Administration (FHWA).
In 2014 the highway trust fund collected $39 billion in fuel tax revenues, but spent $53 billion, creating a $14 billion deficit. This was not an isolated event. Since 2008 Congress has provided $50 billion of general federal revenues to prop up the highway trust fund. The Congressional Budget Office estimates that the highway trust fund will require another $157 billion of general revenues by 2024.
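The trust fund arithmetic above can be sketched in a few lines (all dollar amounts in billions, taken from the figures cited in the text; this is an illustrative check, not an official accounting):

```python
# Highway trust fund figures cited above, in billions of dollars.
fuel_tax_revenue_2014 = 39
spending_2014 = 53

deficit_2014 = spending_2014 - fuel_tax_revenue_2014
print(f"2014 deficit: ${deficit_2014} billion")  # prints "$14 billion"

# General-revenue transfers propping up the fund:
transfers_since_2008 = 50          # already provided by Congress
projected_by_2024 = 157            # CBO projection
print(f"General-revenue support through 2024: "
      f"${transfers_since_2008 + projected_by_2024} billion")
```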
Funds from the highway trust fund are spent by several agencies, with the bulk allocated by FHWA. In fiscal year 2013, FHWA spent $41 billion, or 80 percent, of all highway funds. Of that, $39 billion was sent to states, with the majority going to road and bridge construction and improvement.
But GAO found that FHWA is poorly tracking how this money is being spent. According to GAO, while FHWA “tracks and reports aggregate obligations for its ‘major projects’ (projects with a total cost of $500 million or more), it does not collect and report aggregate obligations for other projects, which represented nearly 88 percent of all fiscal year 2013 spending.” GAO’s analysis indicates that $36 billion in federal highway funding is not being properly tracked by FHWA.
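The $36 billion figure is consistent with the other numbers GAO reports; a rough check (using the $41 billion and 88 percent figures from the text):

```python
# Rough check of GAO's untracked-spending figure.
fhwa_spending = 41      # billions of dollars, fiscal year 2013
untracked_share = 0.88  # share of FY2013 spending not tracked in aggregate

untracked = fhwa_spending * untracked_share
print(f"Untracked: roughly ${untracked:.0f} billion")  # roughly $36 billion
```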
Tracking this information would allow FHWA to provide greater oversight on projects, and to provide Congress and the public greater detail about spending projects and priorities. The agency could track cost growth on projects and perhaps stop cost overruns, which are common on transportation projects.
FHWA already has the technology to make these improvements. GAO notes that FHWA’s computer system already has the capability to track this data, but the agency does not feel inclined to use it.
Congress’ current short-term patch to the highway trust fund expires in May. FHWA’s adoption of GAO’s recommendations would allow both Congress and the agency to better control spending and lower the imbalance between spending and revenues.
The Secret Service is scandal prone. It spends excessively on foreign presidential trips, and it has agents who get in trouble with prostitutes and liquor bottles. The recent White House fence-jumping incident was a stunning failure. Despite the Service spending $1.9 billion a year, a guy with a knife jumped the fence, sprinted across the lawn, pushed open the front door, galloped through the Entrance Hall, danced across the East Room, and almost had time to sit down for a cup of tea in the Green Room.
In the wake of the incident, the head of the Secret Service resigned. But the Service is an agency within the Department of Homeland Security (DHS), and the head of DHS, Jeh Johnson, did not resign. Indeed, he said very little about it, presumably to evade responsibility. So what is the purpose of having the DHS bureaucratic superstructure on top of agencies such as the Secret Service? If DHS does not correct problems at agencies when they fester for years, and if DHS leaders do not take responsibility for agency failures, why do we need it?
A new survey of 40,000 DHS employees reported in the Washington Post finds that the department has severe management problems:
The government’s 2014 Federal Employee Viewpoint Survey portrays a Department of Homeland Security still facing debilitating morale problems that have plagued it for years but worsened during the Obama administration — and which have grown more serious since Johnson took over in December.
While the survey shows that the vast majority of DHS employees are hard-working and dedicated to their mission to protect the homeland, many say the department discourages innovation, treats employees in an arbitrary fashion and fails to recruit skilled personnel.
At the DHS Science and Technology Directorate, for example, only 21 percent of employees in this year’s survey held positive views of their leadership’s ability to “generate high levels of motivation and commitment in the workforce,’’ according to results for that division.
Since its inception, the department has been plagued by poor morale and a work environment widely seen as dysfunctional, which has contributed to an exodus of top-level officials in recent years, many of whom have been lured by private security companies.
Only 41.6 percent of DHS employees said they were satisfied with the department, down from 44.4 percent a year earlier.
In 2013, only 29.9 percent of employees department-wide had a positive view of their leaders’ ability to “generate high levels of motivation and commitment in the workforce.’’ That number plunged to 24.9 percent this year.
That’s all pretty pathetic. But this is the most stunning result from the survey: “Asked if their leaders ‘maintain high standards of honesty and integrity,’ just 39.1 percent of employees responded positively.”
The 2002 creation of DHS was a mistake. Congress should revisit its handiwork, and begin unbundling the department and closing it down. Some DHS agencies, such as the Secret Service, should be stand-alone organizations that report directly to the president. Some DHS agencies should be moved to existing departments. And some DHS agencies, including TSA, ought to be abolished.
Michael F. Cannon
Today, the Supreme Court hears a case about whether dentists and other professions should be allowed to use state licensing boards to engage in anti-competitive behavior that would be illegal if not done under the auspices of state governments. The case is North Carolina State Board of Dental Examiners v. FTC, and involves actions taken by that state’s dental board to prevent non-dentists from providing teeth-whitening services.
A majority of the courts of appeals gives state licensing boards and similar entities considerable latitude to engage in anticompetitive conduct, even when that conduct would be clearly unlawful were it undertaken individually by the licensed providers that typically dominate these licensing boards…
[T]he North Carolina Board of Dental Examiners (N.C. Board) became concerned that non-dentists were providing teeth whitening services. In North Carolina, teeth-whitening was available from dentists, either in-office or in take-home form; as an over-the-counter product; and from non-dentists in salons, malls, and other locations. The version provided by dentists was more powerful and required fewer treatments, but was significantly more expensive and less convenient. In response to complaints by dentists that non-dentists were providing lower-cost teeth-whitening services, the N.C. Board sent dozens of stern letters to non-dentists, asserting that the recipients were engaged in the unlicensed practice of dentistry, ordering them to cease and desist, and, in some of the letters, raising the prospect of criminal sanctions if they did not do so. The N.C. Board also sent letters to mall owners and operators, urging them not to lease space to non-dentist providers of teeth whitening services.
The Supreme Court will decide whether the North Carolina dental board should be able to claim a “state action” exemption from federal laws against anti-competitive conduct. David Hyman and Shirley Svorny argue that it should not, noting that doctors, lawyers, and other professionals have used government licensing to stamp out competition, to the detriment of consumers:
Other occupations provide no shortage of similar examples, whether it is states requiring hair braiders to obtain cosmetology licenses (even though the requisite training has absolutely nothing to do with hair braiding), laws prohibiting anyone other than licensed funeral directors from selling coffins, states prohibiting anyone other than veterinarians from “floating” horse teeth, or ethics rules prohibiting client poaching by music teachers.
“Antitrust has historically focused on private restraints on competition, but publicly imposed limitations can pose greater peril,” they write, “since they are likely to be both more effective and more durable.”
Hyman and Svorny make three further recommendations for the courts:
First, in reviewing the decisions of licensing boards, courts should presume that states were not actively supervising the boards, absent compelling evidence to the contrary. Second, defendant–licensing boards should be required to present persuasive evidence of actual harm that their proposed licensing restrictions or restraints will prevent and should be required to show that private market and non-regulatory forces (including brand names, private certification, credentialing, and liability) are insufficient to ensure that occupations maintain a requisite level of quality. Finally, we argue that legislators should take steps to roll back existing licensing regimes.
Hyman signed onto an amicus brief filed by antitrust scholars. (Here are two more amicus briefs filed by public-choice economists and the Cato Institute.) Svorny argues for the complete repeal of government licensing of medical professionals, and illustrates how the market for medical-malpractice liability insurance does more to promote health care quality than licensing.
(Cross-posted at Darwin’s Fool.)
In a brief filed to the Fifth Circuit Court of Appeals on Friday, Texas attorney general Greg Abbott says that the state’s gay marriage ban may help to reduce out-of-wedlock births:
Texas’s marriage laws are rationally related to the State’s interest in reducing unplanned out-of-wedlock births. By channeling procreative heterosexual intercourse into marriage, Texas’s marriage laws reduce unplanned out-of-wedlock births and the costs that those births impose on society. Recognizing same-sex marriage does not advance this interest because same-sex unions do not result in pregnancy.
As I’ve written before, this is a remarkably confused argument. There are costs to out-of-wedlock births. Too many children grow up without two parents and are less likely to graduate from high school, less likely to find stable jobs, and more likely to engage in crime and welfare dependency. All real problems. Which have nothing to do with bans on same-sex marriage. One thing gay couples are not doing is filling the world with fatherless children. Indeed, it’s hard to imagine that allowing more people to make the emotional and financial commitments of marriage could cause family breakdown or welfare spending.
The brief repeatedly says that “same-sex marriage fails to advance the State’s interest in reducing unplanned out-of-wedlock births.” Well, that may be true. But lots of state policies fail to advance that particular interest, from hunting licenses to corporate welfare. Presumably Abbott doesn’t oppose them because they don’t serve that particular purpose.
The brief does note that same-sex marriage may very well produce other societal benefits, “such as increasing household wealth or providing a stable environment for children raised by same-sex couples [or] increasing adoptions.” But the attorney general wants to hang the state’s ten-gallon hat on the point that it won’t reduce out-of-wedlock births by opposite-sex couples.
In a previous case, Judge Richard Posner declared that the states of Indiana and Wisconsin had not produced any rational basis for banning gay marriage. Attorney General Abbott seems determined to prove him right.
Steve H. Hanke
Over the last few months, the price of Brent crude oil lost over 20% of its value, dropping below $90 just yesterday and hitting its lowest level in over two years. In consequence, oil producers will no longer be able to rely on oil revenues to pay their bills. The fiscal break-even price – a metric that determines the price per barrel of oil required for a nation to balance its budget at current levels of production – puts the problem into perspective.
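The break-even metric defined above reduces to a simple formula. The sketch below uses entirely hypothetical numbers for illustration; actual estimates, such as those from Deutsche Bank, rest on far more detailed budget data:

```python
def fiscal_breakeven_price(spending, non_oil_revenue, barrels_per_year):
    """Oil price per barrel needed to balance the budget, given total
    spending, non-oil revenue, and annual oil production in barrels."""
    return (spending - non_oil_revenue) / barrels_per_year

# Hypothetical producer: $100B budget, $20B non-oil revenue,
# 800 million barrels per year of production.
price = fiscal_breakeven_price(100e9, 20e9, 800e6)
print(f"Break-even price: ${price:.0f} per barrel")  # $100 per barrel
```

If the market price falls below that break-even level, the gap must be covered by borrowing, drawing down reserves, or cutting spending.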
Using data from Bloomberg and Deutsche Bank, I prepared a chart showing the break-even prices for the world’s major oil producers and the price on Brent crude. Over the past six months, Brent crude fell far below the break-even price for eleven of the top oil producers in the world; Iran, Venezuela, Nigeria, and even Saudi Arabia can no longer finance their governments’ largess through oil revenues.
The combination of oil markets flying into a perfect storm and excessive government spending puts most of the world’s oil producers between a rock and a hard place, where they will stay for some time.
Dumb arguments against libertarianism are increasing, as guardians of the expansive state begin to worry that the country might actually be trending in a libertarian direction. This may not be the dumbest, but as Nick Gillespie said of a different argument two weeks ago, it’s the most recent. Newsweek’s headline:

‘You Ready to Step Up?’: The deadly drug war in Long Island’s Hempstead ghetto is a harrowing example of free-market, laissez-faire capitalism, with a heavy dose of TEC-9s

To be fair, author Kevin Deutsch never uses the terms “laissez-faire” or “free-market” in his detailed article, so we should probably direct our disdain at Newsweek’s headline writers. Deutsch does portray the second-ranking guy in the Hempstead Crips as a businessman seeking to “recruit talent, maximize profits and expand their customer base.” But even the drug dealer gets the difference between selling prohibited substances and doing business in a free market:

“We’re looking to market, sell and profit off drugs the way any business would handle their product,” Tony says. “Only our product is illegal, so more precautions need to be taken. It’s all systematic and planned, all the positions and responsibilities and assignments. All of that’s part of our business strategy. It’s usually real smooth and quiet, because that’s the best environment for us to make bank. But now, we at war, man. Ain’t nothing quiet these days.”

Deutsch describes the competition between the local Crips and Bloods in terms not usually seen in articles about, say, Apple and Microsoft or Ford and Toyota:

As for strategies, they seem to have settled on a war of attrition, aiming to kill or maim as many of their enemies as possible…. They’re far better armed and willing to use violence than the smaller neighborhood cliques scattered throughout Nassau County…. They’re also able to keep out other competitors through use of brute force…. It’s one of hundreds of similar conflicts being fought by Bloods and Crips sets throughout the country. These battles breed shootings, stabbings and robberies in gang-plagued, low-income neighborhoods each day.

These are, of course, just the sorts of consequences that libertarians and economists expect from prohibition. As Tim Lynch and I wrote in the Cato Handbook on Policy a decade ago,
drug prohibition creates high levels of crime. Addicts commit crimes to pay for a habit that would be easily affordable if it were legal. Police sources have estimated that as much as half the property crime in some major cities is committed by drug users. More dramatic, because drugs are illegal, participants in the drug trade cannot go to court to settle disputes, whether between buyer and seller or between rival sellers. When black-market contracts are breached, the result is often some form of violent sanction, which usually leads to retaliation and then open warfare in the streets.
Jeffrey Miron of Harvard’s economics department and Cato made similar points in his book Drug War Crimes, as have such economists as Milton Friedman and Gary Becker. Miron also noted that prohibition drives up the prices of illegal drugs, making the trade attractive to people with a high tolerance for risk. And so in that sense, it’s true that some people will usually enter the prohibited trade – in alcohol, gambling, prostitution, crack, or whatever – and will employ some techniques that are also used in normal business enterprises. As Tyler Cowen says, there are markets in everything. Given our natural propensity to truck, barter, and exchange in order to improve our own situation, we can expect people to step into any trade, prohibited or not. Better that such trade should take place legally, within the rule of law, than underground, where violence may be the only recourse in disputes.
When the government bans the use and sale of a substance, and imprisons hundreds of thousands of people in an attempt to enforce that prohibition, that’s not “laissez-faire, free-market capitalism.” Duh.
Ted Galen Carpenter
Beijing’s behavior on the international stage over the past few months has been surprisingly restrained—in marked contrast to an earlier, lengthy period of assertive, if not abrasive, conduct toward its neighbors. Not too long ago, policymakers in the United States and throughout East Asia were alarmed by China’s initiatives. Beijing’s territorial claims in the South China Sea were breathtakingly broad, leading to nasty incidents with the Philippines, Vietnam, and other nations. Even worse were the confrontations between China and Japan over islands in the East China Sea, along with Beijing’s unilateral proclamation of an extensive Air Defense Identification Zone in that same area, which led to a surge of tensions with Japan, South Korea, and the United States.
Two developments illustrate the new, less confrontational trend in China’s policy. One is Beijing’s concerted diplomatic courtship of such countries as South Korea, Vietnam, and Sri Lanka. As I discuss in a recent article in China-U.S. Focus, even such longstanding rivals as Japan and India have been recipients of this Chinese “charm offensive.”
The other sign of uncharacteristic restraint is Beijing’s handling of the ongoing pro-democracy demonstrations in Hong Kong. True, there are indications that the Chinese government may have organized and paid for counterdemonstrators to confront and harass democracy activists. But, at least to this point, there is no indication that Xi Jinping’s government intends to intervene directly with its security forces, much less trigger a bloodbath reminiscent of the 1989 Tiananmen Square massacre. Instead, Beijing has allowed its appointed authorities in Hong Kong to manage the turbulence.
That is a smart move because the United States and the nations of East Asia are closely watching how the Chinese government handles the democratic ferment in Hong Kong. Taiwan is an especially interested spectator, and if Beijing wants to preserve the possibility of the island’s eventual return to the Chinese fold, a brutal crackdown in Hong Kong would doom those hopes for a generation or more. Conversely, the toleration of even limited moves toward free elections for Hong Kong’s leadership would increase the chances of seducing Taiwan regarding the desirability of gradual re-unification. It appears that Xi and his associates may understand that.
Of course, further developments bear close watching, since they could move quickly in an undesirable direction. It is possible that Beijing’s more conciliatory stance toward its Asian neighbors and its restraint regarding Hong Kong is merely a temporary tactical shift, and that we will soon see a return to a bold, confrontational approach. But if the current restraint instead is the harbinger of a more cautious, cooperative policy over the long term on geopolitical issues, China would become easier to accommodate as a rising great power. That would be good for the peace and security of East Asia and for harmonious relations between Beijing and Washington.
Tim Lynch
Last Week Tonight with John Oliver: Civil Forfeiture (HBO)
For related Cato scholarship, go here.
Kansas Gov. Sam Brownback (R) has become a punching bag for liberal pundits. They particularly dislike his tax reforms, which they say are causing a state budget disaster. Nicole Kaeding and I awarded Brownback an “A” on our “Fiscal Report Card.” So let’s take a look at how liberal and libertarian views on Governor Brownback differ.
John Judis at the New Republic writes, “the heart of his program consisted of drastic tax cuts for the wealthy…”
Brownback did sign into law large tax cuts, but that is a good thing. Legislation in 2012 replaced income tax rates of 3.5, 6.25, and 6.45 percent with lower rates of 3.0 and 4.9 percent, while substantially increasing the standard deduction. Those cuts provided savings for taxpayers at all income levels, not just the wealthy.
Judis continues, “Brownback’s tax cuts had produced a staggering loss in revenue—$687 million, or nearly 11 percent.” The Tax Foundation shows the revenue effects of 2012 and 2013 tax legislation here. Judis gets the numbers about right, but I don’t think that magnitude of revenue change is “staggering.” In 2011, Gov. Dan Malloy (D) increased overall Connecticut taxes about 15 percent. That same year, Gov. Pat Quinn (D) increased overall Illinois taxes about 25 percent—now that is “staggering.” (Details on both increases here.)
The important thing with tax cuts is that politicians need to match them with spending cuts so they are sustainable. Brownback has been frugal on spending, but it is true that Kansas needs further budget reforms so that future spending growth matches projected revenues. However, that restraint will be beneficial, as it will encourage policymakers to trim low-value programs in the budget.
Paul Krugman slammed Brownback’s tax cuts, saying, “the state’s budget has plunged deep into deficit, provoking a Moody’s downgrade of its debt.”
One problem with that assessment is that state budgets don’t really “plunge deep into deficit” like the federal budget does. Nearly all states must legally balance their general funds. They often cheat a bit with accounting maneuvers, but they generally get it done.
This recent report from the Kansas Policy Institute (KPI) shows how modest budget changes in Kansas can close the gap between projected future revenues and spending. If Brownback is reelected, he will need to trim spending to match his reduced revenues because the Kansas governor is required to submit balanced budgets. By contrast, the federal government has no balanced budget requirement, and it is federal politicians who have “plunged deep into deficit” in recent years, ironically with Krugman’s strong support.
Krugman is right that the Kansas credit rating has been downgraded, which is certainly bad for the budget. Let’s explore the issue with this chart from Pew. Notice that the ratings are somewhat fluid, with occasional upgrades and downgrades. After the chart was published, S&P downgraded Kansas from AA+ to AA, but the state has lots of company in that lower category.
Nonetheless, Kansas policymakers should roll up their shirtsleeves and begin trimming spending to regain the AA+ rating. Looking at KPI’s “medium” revenue estimate (Table 12), Kansas will need to trim at least 5 percent from spending by 2019 to match revenues, which does not sound too difficult to me.
Brownback’s critics are trying to make the larger point that state tax cuts should be avoided because they lead to low credit ratings. But looking at the Pew chart, there is no obvious relationship between major tax changes and the ratings. Two states that passed large tax hikes in recent years—California and Illinois—have the lowest ratings. And two states that passed large tax cuts in recent years—North Carolina and Indiana—have the highest ratings.
The award of the Nobel Peace Prize to the Indian activist Kailash Satyarthi is bound to attract public attention to the problem of child labor. In 1980, Satyarthi founded the Bachpan Bachao Andolan, or “Save the Childhood Movement,” focused on fighting child labor and human trafficking, as well as bonded labor.
Child labor is widespread in developing countries, concentrating often in the agricultural sector where working conditions are particularly dire. Because of the gravity of the problem, it is necessary to be extremely careful in devising solutions. As is often the case, the fix to child labor that most people would think of instinctively—namely, to ban it—could do more harm than good. As another Nobel laureate, Paul Krugman, wrote in a New York Times opinion piece in 2001,
In 1993, child workers in Bangladesh were found to be producing clothing for Wal-Mart, and Senator Tom Harkin proposed legislation banning imports from countries employing underage workers. The direct result was that Bangladeshi textile factories stopped employing children. But did the children go back to school? Did they return to happy homes? Not according to Oxfam, which found that the displaced child workers ended up in even worse jobs, or on the streets—and that a significant number were forced into prostitution.
There are no quick and easy answers to the problem of child labor, especially in poor countries where educational opportunities are limited and where bans on child labor simply displace children into less desirable, illegal, and more dangerous occupations. To end child labor, the currently underdeveloped countries must create economic opportunities that would reduce or eliminate the reliance of many, particularly poorer, families on income from the work of their children. In a recent Cato Economic Development Bulletin, the economist Benjamin Powell argues that:

The main reason children do not work in wealthy countries is precisely because they are wealthy. The relationship between child labor and income is striking. Using the same World Bank data on child labor participation rates we can observe how child labor varies with per capita income. Figure 2 divides countries into five groups based on their level of per capita income adjusted for purchasing power parity. In the richest two-fifths of countries, all of whose incomes exceed $12,000 in 2010 dollars, child labor is virtually nonexistent.
The thought of children laboring in sweatshops is repulsive. But that does not mean we can simply think with our hearts and not our heads. Families who send their children to work in sweatshops do so because they are poor and it is the best available alternative open to them. The vast majority of children employed in countries with sweatshops work in lower-productivity sectors than manufacturing. Passing trade sanctions or other laws that take away the option of children working in sweatshops only limits their options further and throws them into worse alternatives. Luckily, as families escape poverty, child labor declines. As countries become rich, child labor virtually disappears. The answer for how to cure child labor lies in the process of economic growth—a process in which sweatshops play an important role.
Patrick J. Michaels and Paul C. "Chip" Knappenberger
Global Science Report is a weekly feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”
A new paper overturns old suppositions regarding volcanoes, tree-rings, and climate sensitivity.
According to a 2012 press release accompanying a paper published in the journal Nature Geoscience, a research team led by Penn State’s Dr. Michael Mann concluded that the cooling influence of historical volcanic eruptions was underrepresented by tree-ring reconstructions of the earth’s temperature.
This, the press release went on to tell us, had potential implications when trying to determine the earth’s equilibrium climate sensitivity (ECS)—i.e., how much the global average surface temperature will rise as a result of a doubling of the atmosphere’s pre-industrial concentration of carbon dioxide. While most recent studies place the ECS noticeably lower than earlier studies (including those most heavily relied upon by the U.N.’s Intergovernmental Panel on Climate Change (IPCC) and thus by the Obama administration), the 2012 Mann study was an exception. It implied that many existing determinations of the ECS were underestimates.
From the press release:
“Scientists look at the past response of the climate to natural factors like volcanoes to better understand how sensitive Earth’s climate might be to the human impact of increasing greenhouse gas concentrations,” said Mann. “Our findings suggest that past studies using tree-ring data to infer this sensitivity have likely underestimated it.”
Fast forward to today.
Appearing on-line in the journal Geophysical Research Letters (and sans press release) is a paper led by Penn State’s Martin Tingley that examined how the temperature response to volcanic eruptions inferred from tree-rings compared with that of observations. Tingley’s team concluded that tree-ring based temperature proxies overestimated the temperature response caused by large volcanic eruptions. Instead of responding only to the cooler temperatures, the tree rings also included signals from reduced light availability (from the shading effect of volcanic aerosols), and the two effects together produced a signal greater than what would have been produced by cooler temperatures alone. This is basically the opposite of what Mann and colleagues concluded.
In an article posted to the website RealClimate back in 2012 touting his team’s findings, Mann took time to point out the “wider implication” of his findings:
Finally it is worth discussing the potential wider implication of these findings. Climate scientists use the past response of the climate to natural factors like volcanoes to better understand how sensitive Earth’s climate might be to the human impact of increasing greenhouse gas concentrations, e.g. to estimate the equilibrium sensitivity of the climate to CO2 doubling i.e. the warming expected for an increase in radiative forcing equivalent to doubling of CO2 concentrations. Hegerl et al (2006) for example used comparisons during the pre-industrial of EBM simulations and proxy temperature reconstructions based entirely or partially on tree-ring data to estimate the equilibrium 2xCO2 climate sensitivity, arguing for a substantially lower 5%-95% range of 1.5–6.2C than found in several previous studies. The primary radiative forcing during the pre-industrial period, however, is that provided by volcanic forcing. Our findings therefore suggest that such studies, because of the underestimate of the response to volcanic forcing in the underlying data, may well have underestimated the true climate sensitivity.
It will be interesting to see if accounting for the potential biases identified in this study leads to an upward revision in the estimated sensitivity range. Our study, in this regard, once again only puts forward a hypothesis. It will be up to other researchers, in further work, to assess the validity and potential implications of this hypothesis.
Based on the new results of Team Tingley, it seems that Mann’s hypothesis is wrong. Using tree ring temperature proxies would overestimate the climate sensitivity.
“It will be interesting to see” if this is recognized over at RealClimate.
But regardless, there is no escaping the fact that the Tingley study provides additional evidence that the earth’s climate sensitivity to human greenhouse gas emissions is likely less than advertised by the UN IPCC and the Obama administration. As a direct result, the headlong pursuit of carbon dioxide emission limits should be reconsidered in light of this and other scientific literature.
Mann, M. E., Fuentes, J. D., and S. Rutherford, 2012. Underestimation of volcanic cooling in tree-ring-based reconstructions of hemispheric temperatures. Nature Geoscience, 5, 202–205.
Tingley, M. P., Stine, A. R., and P. Huybers, 2014. Temperature reconstructions from tree-ring densities overestimate volcanic cooling. Geophysical Research Letters, doi:10.1002/2014GL061268.
In an editorial today, the Wall Street Journal discusses Democratic complaints linking Ebola with supposedly falling spending on the Centers for Disease Control (CDC). Let’s take a look at the data with the Downsizing Government chart tool. Click open Health and Human Services, then click on CDC. Hold your mouse over the line to see the data.
Between 2000 and 2014, CDC outlays almost doubled in 2014 constant dollars, from $3.5 billion to $6.8 billion. Outlays have dipped the last few years, but that’s after a Bush-Obama spending boom. CDC outlays have quadrupled in constant dollars since the late 1980s.
The chart below shows CDC spending since 1970 in constant, or inflation-adjusted, dollars. The data is sourced from the Office of Management and Budget public database, available here.
State budgets face numerous long-term pressures, including overpromised and underfunded pensions. Another challenge is Medicaid, the health insurance program for low-income individuals, which is growing rapidly in cost and enrollment.
Medicaid is the single largest component of state budgets, representing 25 percent of total state expenditures. Since 2003, state spending on Medicaid has increased 75 percent, growing faster than the federal budget. State spending decreased in 2010, but not because of any reforms. The federal stimulus bill temporarily increased the federal government’s share of Medicaid spending, so expenditures were simply shifted to the federal budget. But the stimulus has now expired, so state spending is rising once again.
The chart below shows the growth in state Medicaid spending over the last ten years:
The higher levels of Medicaid spending are crowding out spending in other state budget areas, such as transportation and education, while also creating pressure to increase taxes.
In the newest edition of the “Fiscal Policy Report Card on America’s Governors: 2014,” Chris Edwards and I discuss how the president’s health care law is poised to make this situation even worse for state budgets:
Medicaid has grown rapidly for years, and the Affordable Care Act of 2010 (ACA) expanded it even more. Individual states can decide whether or not to implement the ACA’s expanded Medicaid coverage, but Congress created strong incentives to do so. The federal government is paying 100 percent of the costs of expansion through 2016, and then a declining share after that, reaching 90 percent by 2020. The Congressional Budget Office (CBO) estimates that Medicaid expansion under the ACA will cost the federal government $792 billion and state governments $46 billion over the next 10 years.
Even with the federal government paying most of the initial costs, the ACA will put a large strain on state budgets down the road. State policymakers are concerned that Congress will reduce the federal cost share in coming years because federal deficits will create pressure to cut spending. Without reforms, CBO estimates that federal Medicaid spending will almost double from $299 billion in 2014 to $576 billion by 2024. The growth is projected to be so rapid that even President Obama has suggested that Congress decrease the federal cost share.
The expansion of Medicaid under the ACA is bad policy for numerous reasons, and many governors are refusing to go along. Currently, at least 21 states have declined to expand the program. Those states may lose “free” federal money in the short run, but their leaders may be saving them from huge fiscal burdens later on.
Refusing to expand Medicaid under the ACA is a good first step in controlling the growth in state and federal expenditures. But it is not enough. State and federal leaders should pass major structural reforms to Medicaid to halt the growth in this large entitlement program.
In the feudal era, rulers funded their households by taking a share of the crops farmers in their territory produced. The lords called this tribute and the peasants would’ve called it extortion.
We like to think that we’ve come quite a ways since then. After all, taxes are now paid with money—or even a digital abstraction of money—and forms, not cartloads of grain. We can even feel good (well, sanguine) about paying taxes, because we know that we’re funding the government of our own choosing—a democratically elected leadership restrained by the Constitution—not just feeding the avarice of a local warlord.
Except if you’re a raisin farmer in California, a state responsible for 40% of the world’s and 99% of America’s raisins. If you’re a California serf raisin farmer, you’re required by federal law to hand over up to 47% of each year’s crop to the U.S. government so the government can control the supply and price of raisins under a New Deal-era regulatory scheme.
The Fifth Amendment says that “private property [shall not] be taken for public use, without just compensation,” however, so it’s hard to see how it would be constitutional for the government to take nearly half a farmer’s harvest without any payment—let alone “just compensation.” (To be clear, if you grow grapes for use in wine or juice, you’re fine. It’s only if you dry out those grapes that you have to watch your property rights evaporate.)
Yet the U.S. Court of Appeals for the Ninth Circuit has done just that, repeatedly. In 2012, the en banc court held that nobody could challenge this taking in federal court. The Supreme Court unanimously disagreed. (For more background and to read Cato’s merits brief in that case go here.)
Failing to take the hint, the Ninth Circuit has now held that the Fifth Amendment’s protection against government expropriation simply doesn’t apply to personal property (as opposed to real estate). To put it bluntly, that’s an arbitrary, unprecedented, and ahistorical distinction, so raisin farmers are once again forced to ask the Supreme Court to correct the lower court’s failure to protect their rights.
Joined by five other organizations, Cato has filed a brief urging the Court to take this case, thus ensuring that the farmers’ constitutional rights aren’t left to wither on the vine. We argue that the Ninth Circuit’s distinction between real and personal property has no basis in the text and history of the Constitution, Supreme Court precedent, or a reasonable understanding of the English language.
The Fifth Amendment embodies the notion that property rights are central to a free people and a just government. It could not be more clear that property can’t be taken without “due process,” and that when it is taken, the government must pay “just compensation.” These guarantees reflect the many values inherent in private property, such as individual achievement, privacy, and autonomy from government intrusion.
By devaluing property rights of all sorts, the Ninth Circuit weakens the values of autonomy and reliance that undergird the Takings Clause and conflicts with the very foundations of our constitutional order.
Raisin farming ain’t easy; five pounds of grapes yield only one pound of raisins. Raisin farmers shouldn’t have to hand over half of that pound to the federal government.
The Supreme Court will decide whether to take Horne v. U.S. Dept. of Agriculture later this fall.
Cato legal associate Gabriel Latner co-authored this blogpost.
National Review Online is in the midst of its “education week” – including offerings by yours truly and Jason Bedrick – and today brings us a piece by AEI’s Andrew Kelly on how to fix our higher ed system. Unfortunately, while he largely nails the problems, he stumbles on the solution.
Kelly is absolutely right when he criticizes the Obama administration for demonizing for-profit colleges—see my piece for the evidence that for-profits are not the problem—while also observing how odd it is for conservatives to decry, as some great violation of free-market ideals, attacks on institutions that get the vast majority of their funds through Washington. He is also right that the entire ivory tower is awash in waste and failure, and all institutions—for-profit or putatively not-for-profit—are self-interested money-grubbers. Finally, he correctly notes that it is a big problem that by far the largest student lender is the Bank of Uncle Sam, which basically lends to anyone who can breathe.
Where Kelly starts to get into trouble is in suggesting that a lot of these troubles could be meaningfully mitigated if we just had the right data readily available to consumers. He writes, “Basic pieces of information needed to make a sound investment — out-of-pocket costs, the proportion of students who graduate on time, the share who earn enough to pay back their loans after graduation — are either incomplete or nonexistent.”
As I’ve written before, there is actually a huge amount of information available on jobs and schools, and many students appear to simply ignore it. For instance, according to federal data, bachelor’s degrees awarded in journalism ballooned from 59,000 in 2000-01 to 88,752 in 2011-12, despite the very well publicized busting of the industry. Indeed, the BLS reports that as of 2012 there were only 57,600 Americans employed as “reporters, correspondents, and broadcast news analysts,” and 115,300 as editors. And it’s not like those with these jobs are striking it rich: the median annual pay for reporters in 2012 was $37,090, and for editors, $53,880. And yes, those jobs are expected to contract in the next ten years.
How about psychology majors? This is a regular resident on worst-employment lists put out by major news outlets, but it continues to draw big enrollment. In 2000-01 there were 73,645 psychology bachelor’s degrees awarded, and by 2011-12 there were almost 109,000. Meanwhile, according to the BLS, there were only 160,200 people employed as psychologists in 2012, and to be a practicing psychologist one typically needs a doctorate.
But we couldn’t possibly find out if a school has a poor six-year graduation rate, right? Wrong. If you’re willing to pay the $30 fee to access it – a microscopic investment compared to the overall price of college – you can get all sorts of useful data for schools from the hated U.S. News and World Report “Best Colleges” site. For instance, you can see that Lycoming College – a fairly middling school – has a four-year graduation rate of 54 percent, a 59 percent six-year grad rate for Pell Grant recipients, and a 64 percent grad rate for students receiving neither Pell nor Stafford Loans. You can also find financial aid information and cohort loan default rates for the school.
How about Cleveland State University? You can see that it has a puny four-year graduation rate of 8 percent, a six-year grad rate for Pell recipients of only 28 percent, and a six-year grad rate for non-Pell or Stafford students of just 34 percent. You can also discover that it nonetheless had enrollment of over 12,000 undergrads.
Now, is the data so exhaustive that any question anyone might have is answered? No, but while calling for federal data collection and publication, Kelly acknowledges that inherently “college is hard to evaluate until it is actually experienced.” So federal data collection and publication is also likely to leave a lot of unanswered questions. Of course, that doesn’t matter if the information is going to be ignored anyway, as present experience indicates it almost certainly would be.
In addition to getting out more info, Kelly calls for making colleges have “skin in the game” by holding them responsible for paying a part of their students’ defaulted loans. This certainly makes some sense: The big winners in American higher ed are the colleges that get paid no matter what, and the politicians who come off as caring when they furnish taxpayer-funded aid to almost anyone who wants it.
Still, it is uber-optimistic thinking to believe that skin-in-the-game efforts would be applied equally – or meaningfully – to most schools. For-profits would no doubt get hammered, state-subsidized public schools would have a huge advantage over tuition-dependent private institutions, and loveable but woefully ineffectual community colleges would almost certainly be protected. Indeed, Kelly reports that already:
Democratic senators Jack Reed, Dick Durbin, and Elizabeth Warren have introduced legislation that would force colleges with high default rates to pay back a share of defaulted loans. But here again, Democrats would rather play favorites than hold all colleges to account. The bill includes exemptions for historically black colleges and universities and for community colleges, schools that have default rates higher than the national average. And the proposal would cover only campuses where more than 25 percent of students take out loans. In other words, Democrats believe that only a subset of colleges should have skin in the game.
More important, perhaps, is that while many institutions are happy to take money from students who exhibit little if any evidence they can handle the programs for which they are signing up, it is Washington that gives those students much of the money in the first place. And if we should have learned anything from the housing-induced Great Recession, it is that were schools to start turning woefully prepared low-income people away, the Feds would employ both carrot and stick to get them to enroll those students.
Unfortunately, Kelly dismisses the only solution that would do more than skim the edges of the monstrous waste in higher education: phasing out federal student aid, which as Kelly notes, is now at about $170 billion. Quite simply, it is largely aid that encourages people to sign up for programs that huge numbers will not complete. It is aid that enables individuals to enroll in studies that even if they complete them, will not result in a job requiring their credential. It is aid that has fueled credential inflation such that even many of those jobs requiring degrees don’t really require degrees. And on top of it all, it is aid that has fueled huge price inflation and growing student debt.
Calls for phasing out massive aid, Kelly says, have “ceded…ground to Democrats,” and would perpetuate “the under-provision problem we started with: Many low-income students who would benefit from post-high-school education could not afford it.”
Alas, Kelly offers no evidence for this latter proposition, and logic suggests he is largely wrong. While there is huge waste in higher education, it is still true that an average person with a degree – especially in an in-demand field – stands to make big profits from going to college. That means a low-income student would likely be able to find private-sector aid, both charitable and from professional lenders, were they to meaningfully demonstrate the ability and desire to handle college-level work in a needed field. Both lender and borrower would stand to profit. And yes, there would often be little or no collateral for the loan, but lenders always consider risk, and the benefits would almost certainly outweigh it in most cases.
It’s also important to put the claim of an “under-provision” problem under the microscope. Need-based federal aid started in earnest in the 1960s. How many people went to college back then? In 1960, only 7.7 percent of Americans 25 years and older had a bachelor’s degree. In other words, relatively few people – low, middle, or upper-income – had degrees, making it hard to say we had a big problem of under-provision to low-income students when major federal aid started. And, as has been pretty firmly established, the educational challenges facing low-income people have much more to do with factors outside of the education system – especially the higher education system – than the system itself. If any generalization should apply, it is that low-income students are underrepresented because they are, for numerous reasons, too often underprepared.
Folks like Kelly who advocate for more data, skin in the game, etc., are trying to make any improvements they can. But reasons to believe their proposals will do much good are few, while the root problem is clear: Aid fuels massive price inflation and incredible waste. It has to go.
Excellent article by Jon Campbell for the Village Voice about New York City’s zeal for arresting people on charges of possessing so-called “gravity knives” – knives whose blade can be opened without the assistance of a second hand, and then be secured in place for use. Used in countless trades and occupations, knives fitting this description are sold at hardware, sporting, and work-gear stores from coast to coast. But New York City routinely prosecutes persons in possession of them even in the absence of any indication that the holder was up to no good or knew they violated local law. Excerpt:
For years, New York’s gravity-knife law has been formally opposed by a broad swath of the legal community. Elected officials call the statute “flawed” and “unfair.” Defense attorneys call it “outrageous” and “ridiculous” – or worse. Labor unions, which have seen a parade of members arrested for tools they use on the job, say the law is woefully outdated. Even the Office of Court Administration – the official body of the New York State judiciary – says the law is unjustly enforced and needs to change. They’ve petitioned the legislature to do just that.
A move in Albany to revamp New York’s law to cover possession of such a knife only when accompanied by “unlawful intent” failed, due in part to opposition from some quarters in the law enforcement community, where collaring some poor guy walking home from the subway for a “GK” (gravity knife) is known as an easy way to boost arrest numbers:
A poster on Officer.com, a verified online message board for law enforcement officers, put it bluntly in 2013 when he advised a rookie to be on the lookout for “GKs”: “make sure they have a prior conviction so you can bump it up to that felony!!!”
New York’s controversial stop-and-frisk policies are one reason it has such a high number of knife charges:
a Village Voice analysis of data from several sources suggests there have been as many as 60,000 gravity-knife prosecutions over the past decade, and that the rate has more than doubled in that time. If those estimates are correct, it’s enough to place gravity-knife offenses among the top 10 most prosecuted crimes in New York City.
More recently, Manhattan District Attorney Cyrus Vance in 2010 deployed the law as a municipal money-maker by charging Home Depot and other hardware and sports chains for selling what many of them had assumed were lawful knives, and extracting large “restitution” payments as part of the ensuing settlements.
In much of the rest of the country, fortunately, the law is on a sounder path as Arizona, New Hampshire, and other states revamp outdated laws to respect the peaceful ownership and carrying of knives. (The national group Knife Rights monitors and advances this progress.) Read the whole Voice piece here.