Well, Harry Reid went nuclear, as he’d threatened to do all week. And by a vote of 52-48, Senate Democrats did his bidding just a couple of hours ago. I wrote about his hypocrisy at NRO this morning. He’s the same Harry Reid who assured us only a few months ago that “We’re not talking about changing the filibuster rules that relates to nominations for judges” (Press Briefing, 7/11/13) and “We’re not touching judges. That’s what they were talking about. This is not judges.” (NBC’s “Meet The Press,” 7/14/13). Well, we are talking about judges. And we’ll be talking about them quite a bit more, I’m afraid.
The Democratic hypocrisy on the subject boils down to this. After sitting on George W. Bush’s appellate court nominees during his first two years when they controlled the Senate—never even holding hearings—Democrats for the next two years, after losing the Senate in the 2002 midterm elections, conducted unprecedented filibusters of Bush’s appellate court picks—all of which ended only with the “Gang of 14” compromise in 2005. But now that the Republican minority has used that same practice—directed this session only at the latest D.C. Circuit nominees—Democrats have moved to strip it from them—and not by a two-thirds vote of the Senate, as Senate rules require, but by a simple majority. It’s heads I win, tails you lose.
But it doesn’t end there. After Obama’s nominee Sri Srinivasan was unanimously confirmed for the D.C. Circuit last May, Republicans have filibustered Obama’s three latest nominees for that circuit for practical reasons, not for the ideological reasons that drove Democratic filibusters. As I outlined in my NRO piece, there simply isn’t enough work in the D.C. Circuit to justify three more judges. For 17 straight years that court has had the lowest number of appeals filed and the lowest number of appeals terminated of all the circuits.
So what’s the upshot of Reid’s move? The most obvious one is this: If Harry Reid is willing to drop the nuclear bomb for these three nominees—given all that that implies about the sanctity of Senate rules—he must be expecting some return. It’s not for nothing that the D.C. Circuit Court is called the second most important court in the land. It’s the court that will be deciding challenges to the vast executive branch “lawmaking” by which the Obama administration today is ruling America, covering everything from health care to environmental regulations, labor arrangements, financial affairs, and so much more. With a divided Congress, Obama can’t get things done the constitutional way, so he rules by diktat—and hopes the courts will uphold his unilateral decisions. Given the docket of the D.C. Circuit, rule by executive order just got easier.
But Obama has three more years to name judges for the other circuits as well, and possibly for the Supreme Court, and that got easier for him too. And of course it’s now easier to change other Senate rules by a simple majority. But what goes around comes around. And the way the polls are going in the wake of the Obamacare debacle, the Senate itself, already in play, may be more so come next November. If it turns out that way, Republicans should have no scruples about playing by the rules the Democrats have seen fit first to employ, when in the minority, and then to remove, when in the majority. As is said, it couldn’t happen to a nicer bunch.
Christopher A. Preble
A loya jirga, an assembly of 3,000 or so Afghan leaders, is currently reviewing a draft bilateral security agreement that would allow U.S. and other foreign troops to remain in Afghanistan until 2024. Even if it passes with few substantive changes, the agreement is unlikely to please anyone.
Afghan President Hamid Karzai has said he will not sign it, and the few remaining hawks in the United States will point to some military leaders’ call for a much larger force to remain for a generation or more, and accuse President Obama of fecklessness.
Most Americans, however, are likely to have the opposite reaction: a force of 8,000 is too large, and ten years is too long. A senior administration official’s assertion to the New York Times that “there is no scenario in which those forces would stay in Afghanistan until anywhere near 2024” isn’t likely to be very reassuring. We’ve heard before that missions wouldn’t be open-ended, and that U.S. troops would eventually come home.
The president’s supporters, including Secretary of State John Kerry, characterize the agreement as an acceptable compromise that ensures legal protections for Americans stationed in Afghanistan, while also granting the United States access for continued counterterrorism operations, including raids in Afghan homes, said to be one of the last sticking points of the negotiations.
The details must still be worked out, and it is possible that the loya jirga will alter the agreement, or vote it down. If the legal protections for American citizens are stripped out, or if there is no agreement, then the U.S. military mission should be withdrawn entirely from Afghanistan. As was the case in Iraq, when a democratically elected government refused the Obama administration’s reasonable request to shield U.S. troops from the vagaries of Iraqi justice, no deal should mean no troops. This story is far from over, and I will be watching as more details emerge.
This much is clear, however: The enthusiasm for quixotic nation-building crusades that swept through Washington a few years ago has been replaced by a welcome skepticism. Senior military officers dressed it up with a fancy name–COIN–but the public never bought what they were selling. Now even some scholars within the military establishment are pushing back. A force of 100,000 wasn’t nearly large enough to accomplish a nation-building mission, and the Obama administration no longer even pretends that that is the true object. A mere 8,000 foreign troops will have trouble enough training an Afghan army beset by illiteracy, absenteeism, and corruption. Any pretense that the few U.S. troops who remain in Afghanistan after 2014 can write Afghan legal codes, build a functioning political system, put the country on the road to economic self-sufficiency, and protect the rights of women and religious and ethnic minorities is out the window.
But the critical constraint on any lingering nation-building fantasies is the American people who want this nation’s longest war to be over. They should be forgiven for believing that it would be by now, given that President Obama intoned repeatedly during last year’s campaign that he was committed to ending it.
He hasn’t yet.
Paul C. "Chip" Knappenberger
Tomorrow [today] Rep. Henry A. Waxman and Sen. Sheldon Whitehouse, co-chairs of the Bicameral Task Force on Climate Change, will host representatives from five of America’s major sports leagues, as well as the U.S. Olympic Committee (USOC), to discuss the effects of climate change on sporting activities and the work these organizations are doing to reduce their greenhouse gas (GHG) emissions. The group will meet for a closed-door discussion, followed by a press availability.
Now, admittedly, even as a climatologist, I do spend a fair amount of time discussing sports.
But I do so around the water cooler or at the local bar, not with Congressional task forces.
Your tax dollars are probably better served that way.
Ilya Shapiro
The Congressional Black Caucus has now explicitly attacked Republicans as racist for blocking President Obama’s latest judicial nominees. Not only are they racist, but if you scratch them, you find Confederate gray. Unbelievable. Do these elected officials really think that the filibustering of three D.C. Circuit nominees (one of whom is black) has more to do with race than either judicial philosophy or the ongoing battle over whether this underworked court actually needs more judges? Even after Indian-American Sri Srinivasan was confirmed to that same court unanimously in May after Caitlin Halligan (who’s white) was blocked for ideological reasons? Moreover, this is a pretty rich accusation for Democratic lawmakers to make after the filibustering of Janice Rogers Brown (since confirmed after the ‘Gang of 14’ deal) and Miguel Estrada (seven failed cloture votes ultimately leading to withdrawal) during the Bush years. And recall the infamous memo detailing how Estrada needed to be blocked because he’s Latino. This isn’t even about whether the Senate should use the “nuclear option” to end the filibustering of nominees – that’s a question of tactics rather than principles – but if Republicans had done that back in 2006, we wouldn’t still be trapped in this political gamesmanship. Regardless of what happens on that front, however, there should be a forceful response from folks named Cruz and Rubio – and Tim Scott, who was the only African-American senator until Cory Booker’s election. This is shameful – but alas of a piece with this administration’s racialization of everything from housing policy to Justice Department hiring to voter ID. It’s too bad that the CBC passes for leadership in the black community, distracting its constituents from real policy issues to engage in base calumny. I guess if all you have is a demagogic hammer, then everything is a racist nail.
Gerald P. O'Driscoll Jr.
The Senate Banking Committee just voted 14 to 8 to confirm Janet Yellen’s nomination to be the new Chair of the Federal Reserve. She will likely go on to be confirmed by the full Senate.
Much of the coverage has focused on Yellen as a person, when the real story is on the Fed as an institution. Sometimes individuals have profound influence on Fed policy, such as Paul Volcker in the late 1970s and 1980s. Over time, however, the institutional structure of the central bank and the incentives facing policymakers matter more.
The Federal Reserve famously has a dual mandate of promoting maximum employment and price stability. The Federal Open Market Committee, which sets monetary policy, has great discretion in weighting the two policy goals. As a practical matter, the vast majority of the time, full employment receives the greater weight. That is because the Fed is subject to similar pressures as are the members of Congress to which the Fed must report. In the short run, voters want to see more job creation. That is especially true today. The United States is experiencing weak growth with anemic job creation.
Never mind that the Fed is not capable of stimulating job creation, at least not in a sustained way over time. It has a jobs mandate and has created expectations that it can stimulate job growth with monetary policy. The Fed became an inflation-fighter under Volcker only when high inflation produced strong political currents to fight inflation even at the cost of recession and job creation.
The Federal Reserve claims political independence, but it has been so only comparatively rarely. Even Volcker could make tough decisions only because he was supported by President Carter, who appointed him, and President Reagan, who reappointed him. Conventionally defined inflation is low now, so the Fed under any likely Chair would continue its program of monetary stimulus. Perhaps Yellen is personally inclined to continue it longer than might some other candidates. But all possible Fed chiefs would face the same pressures to “do something” to enhance job growth, even if the Fed’s policy tools are not effective.
The prolonged period of low interest rates has made the Fed the enabler of the federal government’s fiscal deficits. Low interest rates have kept down the government’s borrowing costs, at least compared to what they would have been under “normal” interest rates of 3-4 percent.
Congress and the president have been spared a fiscal crisis, and thus repeatedly punted on fiscal reform. They are likely to continue doing so until rising interest rates precipitate a crisis. How long that can be postponed remains an open question.
There’s been much ink spilled the past few days over U.S. Secretary of Education Arne Duncan’s defense of the Common Core, delivered as an obnoxious attack on white, suburban women. Proclaimed Duncan to a meeting of the Council of Chief State School Officers (one of the Core’s progenitors):
It’s fascinating to me that some of the pushback is coming from, sort of, white suburban moms who – all of a sudden – their child isn’t as brilliant as they thought they were and their school isn’t quite as good as they thought they were, and that’s pretty scary.
Much of the uproar over Duncan’s attack has been over his injecting race and sex into the Common Core debate, and that certainly was unnecessary. But much more concerning to me – and indicative of the fundamental problem with federally driven national standardization – is the clear message sent by Duncan’s denunciation of Jane Suburbia: average Americans are either too dull or too blinkered to do what’s best for their kids. The masses need their betters in government – politicians, bureaucrats – to control their lives.
Alas, this has been a subtext of almost the entire defense of the Core. Every time supporters decide to smear opponents primarily as “misinformed” or “conspiracy theorists,” they imply that people who are fighting for control of what their children will learn are either too ignorant, or too goofy, to matter.
Of course, there are some opponents who don’t get all the facts right about the Common Core, but supporters ignore that many of these people are just finding out about the Core. Unlike major Core supporters, many opponents – often parents and plain ol’ concerned citizens – haven’t been working on the Core for years. And even when opponents use such regrettably over-the-top rhetoric as calling the Common Core “Commie Core,” they are ultimately making a legitimate point: the federally driven Core is intended to make the learning outcomes of all public schools the same – “common” is in the name, for crying out loud! – and in so doing, nationalize learning. At the very least, that’s not a move in the libertarian direction.
Every once in a while, Core supporters will openly air their basic distrust of average Americans. If you go to the 53:10 mark of our Common Core “Great Debate,” you’ll catch just such an admission by Chester Finn, president of the Core-championing Thomas B. Fordham Institute. In response to an explanation of how free markets enable average people to smartly consume things about which they are not experts, Finn declares that most parents won’t do even easy work to make informed choices. Then he asks, “is that a way to run a society?”
And there it is: In the end, Common Core, and all the government power behind it, is ultimately about experts running society rather than letting free people govern themselves. Why? Because parents – “the people” – are thought either incapable of caring for their children themselves, or unwilling to do so.
This attitude is fundamentally at odds with maintaining a free society. It declares that government must control what children learn, and in so doing gives government – not free people – the power over what the next generation of Americans will think. This is not to say that the Common Core is intended to inculcate values and attitudes – most supporters probably just want to better furnish skills – but it will nevertheless couple power over what the schools teach with an attitude that is fundamentally corrupting: I know what is best, and must make you do it. And if your betters think you can’t be trusted to teach your child about something as unthreatening as the ABCs, imagine what they may eventually require – or forbid – in teaching about religion, or guns, or climate change.
As has been the case in the past, Secretary Duncan has actually done Common Core opponents a huge favor in an effort to take them down. But this may be his most important contribution yet, revealing the supremely threatening contempt in which he seems to hold the average parent, and which drives the Common Core.
My new study on the Transportation Security Administration mainly focuses on the agency’s poor management and performance. The TSA has a near monopoly on security screening at U.S. airports, and monopoly organizations usually end up being bloated, inefficient, and providing low-quality services.
The study proposes contracting out or “privatizing” airport screening, which is the structure of aviation security used successfully in Canada and many European countries.
I briefly discuss some of the civil liberties problems surrounding TSA. Note that Cato’s Jim Harper also addresses those issues in his work, as does Robert Poole of Reason Foundation. I noticed this recent blog post by Poole that nicely summarizes some of the realities of TSA, terrorism, and civil liberties:
A couple of years ago Jonathan Corbett, a tech entrepreneur from Miami, posted videos online showing him successfully passing through TSA airport body scanners with a metal box concealed under his clothing, seeking to demonstrate that the scanners are an ineffective replacement for walk-through metal detectors for primary screening. In 2010 he filed a lawsuit contending that body-scanning and pat-downs are both unreasonable searches that violate the Fourth Amendment.
As part of the discovery process, TSA provided Corbett with 4,000 pages of documents, many of them classified. He was allowed to produce two versions of his brief, one containing extracts of classified material, and available only to the court, and a heavily redacted version which could be made public. But as several news sites reported last month, a clerk in the US Court of Appeals (11th Circuit) mistakenly posted the classified version online, and it was quickly noticed and reproduced on various websites. Although the court issued a gag order prohibiting Corbett from talking about the classified material, there was no way to stop others from doing so.
Among the things we’ve learned from TSA Civil Aviation Threat Assessments that Corbett cited in his brief are the following:
- “As of mid-2011, terrorist threat groups present in the Homeland are not known to be actively plotting against civil aviation targets or airports; instead, their focus is on fund-raising, recruiting, and propagandizing.”
- No terrorist has attempted to bring explosives onto an aircraft via a U.S. airport in 35 years, and even worldwide, the use of explosives on aircraft is “extremely rare.”
- There have been no attempted domestic hijackings of any kind since 9/11.
- The government concedes that it would be difficult to repeat a 9/11-type attack due to strengthened cockpit doors and passengers’ willingness to challenge would-be hijackers.
Based on these points, Corbett argues that primary-screening searches via body-scanners or pat-downs are unreasonable under the Fourth Amendment. He acknowledges that although those searches have not turned up any would-be terrorists, they have detected illegal drugs. But that is irrelevant to aviation security, which is the only purported rationale for such intrusive searches without prior probable cause.
Corbett does not directly address whether the whole array of TSA airport screening measures may have deterred attacks that might have happened without those measures in place. But that is the kind of question that can be—and has been—assessed quantitatively by security experts such as Mark Stewart and John Mueller, whose work I have cited several times in previous issues of this newsletter. And those assessments suggest that body scanners and Federal Air Marshals, among other measures, cost vastly more than they are worth.
Whatever the outcome of Corbett’s suit—and I hope he prevails—Congress needs to take a hard look at the cost-effectiveness of much of what TSA is doing, in light of the revelations inadvertently made public by this case.
Poole has done superb work over the years, not only on airport screening, but also on airport and air traffic control privatization. Bob’s work can be found here, and our joint article on airports and ATC is here.
Hans Riegel recently died at age 90. He changed the world for the better. He brought us the treat known as gummi bears.
Politicians routinely crusade against wealth and inequality, but that occurs naturally when people create products and offer services benefiting the rest of us.
Today people live on their cell phones. Once we didn’t even have telephones. Thank Alexander Graham Bell, born in Edinburgh, Scotland.
The internal combustion engine auto came from Karl Benz. He was a design engineer who in 1886 won a patent for a “motor car.”
In 1903, Clarence Crane created the hard fruit candy known as Life Savers.
Helen Greiner, a fan of Star Wars’ R2D2, came up with the Roomba vacuum cleaner robot in 2002.
John Mauchly and John Eckert created the first computer in 1946—the Electronic Numerical Integrator and Computer, or ENIAC.
Thomas Edison gave us working light bulbs in 1879. Joseph Swan might have beaten Edison, but the latter bought Swan’s patent.
The 3-D printer was created in 1983 by Chuck Hull. His first creation: a teacup.
General Electric engineer James Wright attempted to make artificial rubber during World War II. He failed, but ad man Peter Hodgson later discovered the malleable material and began selling Silly Putty.
While developing magnetrons for radar in World War II, Percy Spencer noticed that a candy bar in his pocket melted. The result was the microwave oven.
Credit for television goes to Russian émigré Vladimir Zworykin. In 1920 he developed an iconoscope, or television transmission tube, and kinescope, a television receiver.
That same year Austrian Eduard Haas developed the peppermint candy, “pfefferminz” in German, known as PEZ.
The Scottish Charles Macintosh came up with the waterproof Mackintosh Raincoat. A store clerk turned chemist, in 1823 he figured out how to make waterproof fabric.
Infections once were common killers. But in 1928 another Scot, Alexander Fleming, discovered penicillin.
Edward Binney and Harold Smith owned an industrial pigment company and in 1903 combined industrial pigments with paraffin wax. By 1996, 100 billion crayons had been produced.
In 1935, Frederick McKinley Jones developed portable air-conditioning for trucks. Jones became the first African-American elected to the American Society of Refrigeration Engineers.
John Pemberton, an Atlanta pharmacist, developed Coca-Cola’s original formula in 1886, in response to a ban on the sale of his wine-coca “patent medicine.”
Canadian-born James Naismith studied theology and worked at a Massachusetts YMCA. In 1891, he relied on a childhood game to develop basketball as a sport to be played indoors in the winter.
In 1884, Lewis Waterman developed the fountain pen. He took ten years to perfect his invention.
Arthur Fry gave the world the “Post-It Note” in 1974. A chemist at 3M, he wanted a bookmark that would cling to church hymnal pages, and thought of a failed glue created by a colleague.
Ruth Wakefield, regionally famous for her cooking, ran out of baker’s chocolate while making cookies and in 1930 substituted chunks of semi-sweet chocolate. Her recipe increased chocolate sales and became known to Nestle, which consequently created chocolate chips.
In 1964, while seeking a new synthetic fiber, Stephanie Kwolek came up with the well-nigh indestructible Kevlar—commonly part of bullet-proof vests.
John Harvey Kellogg was a vegetarian who headed a Michigan sanitarium. Faced with wheat gone stale, in 1894 he processed it into dough anyway and ended up with flakes.
These are just a few of the inventions which surround and enrich us. Human creativity and ingenuity—punctuated with a mix of luck and hard work—constantly transform our lives.
As I pointed out in my latest Forbes online column:
Few things better illustrate Adam Smith’s axiom that people can simultaneously benefit the rest of us while pursuing their own interest. Of course people should do good. But they often do best while trying to advance themselves.
Some inventors just love to create. Others hope for money, glory, or something else. Whatever their motives, the rest of us gain.
Like being able to enjoy gummi bears. Hans Riegel, RIP!
Last week I noted that it was “long past time for the U.S. Department of Justice to drop its embarrassing lawsuit which would keep black kids in failing schools.” The Louisiana Department of Education released a study that completely undermined the DOJ’s case against the state’s school voucher program, showing that the program increased racial integration in most of the schools under federal desegregation orders and had a minuscule impact in the remainder.
Today, Michael Warren of the Weekly Standard reports that the DOJ has dropped part of its fight against school choice in Louisiana:
The Obama administration’s Justice Department has dropped a lawsuit aiming to stop a school voucher program in the state of Louisiana. A ruling Friday by a United States district court judge revealed that the federal government has “abandoned” its pursuit of an injunction against the Louisiana Scholarship Program, a state-funded voucher program designed to give students in failing public schools the opportunity to attend better performing public or private schools.
“We are pleased that the Obama Administration has given up its attempt to end the Louisiana Scholarship Program with this absurd lawsuit,” said Louisiana governor Bobby Jindal, a Republican, in a statement. “It is great the Department of Justice has realized, at least for the time being, it has no authority to end equal opportunity of education for Louisiana children.”
The move may have resulted from the bad press or a sudden acceptance of common sense, but more likely it was simply a legal maneuver to prevent the Black Alliance for Educational Options and the Goldwater Institute, representing parents of voucher recipients, from intervening in the lawsuit as defendants. As Warren reports:
On Friday, Judge Ivan Lemelle of the U.S. district court of the Eastern District of Louisiana ruled the parents could not intervene in the case because the feds are “no longer seeking injunctive relief at this time.” Lemelle explained that in the intervening months since the Justice Department filed suit, it had made clear both in a supplemental filing and in its opposition to the parent group’s motion to intervene that it was not seeking in its suit to end the voucher program or take away vouchers from students.
Lemelle continued: “The Court reads these two statements as the United States abandoning its previous request that the Court ‘permanently enjoin the State from issuing any future voucher awards to students unless and until it obtains authorization from the federal court overseeing the applicable desegregation case.’”
Lemelle will hold an oral hearing on Friday, November 22, during which Justice will make its case for the federal review process of the voucher program. In his statement on Friday’s ruling, Jindal criticized the federal government’s efforts.
“The centerpiece of the Department of Justice’s ‘process’ is a requirement that the state may not tell parents, for 45 days, that their child has been awarded a scholarship while the department decides whether to object to the scholarship award. The obvious purpose of this gag order would be to prevent parents from learning that the Department of Justice might try to take their child’s scholarship away if it decides that the child is the wrong race,” said Jindal. “The updated Department of Justice request reeks of federal government intrusion that would put a tremendous burden on the state, along with parents and teachers who want to participate in school choice.”
In other words, the DOJ is still seeking the legal authority to prevent low-income kids from escaping failing public schools if the feds say they have the wrong skin color.
Last week, A Conspiracy Against Obamacare: The Volokh Conspiracy and the Health Care Case was released, of which I am proud to be the editor. The book compiles the discussions and debates about the Affordable Care Act that occurred on the legal blog the Volokh Conspiracy, supplemented with new material. The posts are stitched together into a narrative structure. As a result, you can see the constitutional arguments against the Affordable Care Act develop in real time, from before the law was passed all the way to the Supreme Court.
The book documents a bellwether moment in the history of legal academia: A legal academic blog influencing major Supreme Court litigation. And not just major Supreme Court litigation, but a case that went from a much derided challenge to the biggest and most watched case in decades. As former Solicitor General Paul D. Clement, who expertly argued the case before the Court, kindly wrote in the foreword, “The Constitution had its Federalist Papers, and the challenge to the Affordable Care Act had the Volokh Conspiracy.”
In the introduction, I discuss the constitutional arguments against the law in a more abstract way, as well as describe how the law is destined to fail due to poor design. We are seeing the beginning of those failures now, but I fear we ain’t seen nothin’ yet.
It was not much commented on at the time–the administration and the law’s supporters were too busy spiking the ball–but the Supreme Court’s decision will speed up the law’s inevitable failures. As I describe in the introduction:
Due to the chief justice’s unpredictable opinion, we are now likely stuck with a law that I fear will seriously damage the health of Americans. What’s more, attempts to further centralize power will not stop at the individual mandate. When the law fails, as I predict it will, it will be said that the federal government lacked enough power to make it work. The chief justice’s opinion gives people a real choice whether to comply with the requirement to purchase insurance or pay a “tax.” Many people will not, and as the price of insurance goes up, more and more people will choose to remain uninsured. This will certainly be called a “loophole.” Similarly, the Court also gave states a choice about whether to comply with the Affordable Care Act’s Medicaid expansion. Another “loophole.” Finally, the states that don’t create health care exchanges will also throw wrenches in the law’s overall scheme. “Loopholes” all around. Having freedom of choice in deeply personal health care decisions, however, is not a loophole.
When the time comes to revisit the Affordable Care Act, those choices by free, sovereign entities (citizens and states) will be blamed for the law’s dysfunctions. To paraphrase philosopher Robert Nozick, liberty disrupts patterns. Free choice inevitably upsets the carefully crafted plans of Washington.
As a solution to the law’s problems, more power will be proposed. A few voices, such as many who write for the Volokh Conspiracy and those of us at the Cato Institute, will strenuously argue that the problem is not a lack of power but a lack of freedom. I am not optimistic, however, that very many entrenched bureaucrats and politicians will locate the problem in the mirror rather than in the freedoms of the American people.
If the Affordable Care Act keeps going south at this rate, we may need to prepare to have that debate sooner than we expected.
The Federalist Society came into being in 1982 after a small group of conservatives and libertarians, concerned about the state of the law and the legal academy in particular, gathered for a modest conference at the Yale Law School, after which two law-student chapters were formed at Yale and at the University of Chicago. Quickly thereafter chapters sprang up at other law schools across the country. And in 1986 those students, now lawyers, started forming lawyer chapters in the cities where they practiced. Today the Federalist Society is more than 55,000 strong, its membership drawn from all corners of the law and beyond.
Toward the end of this past week many of those members gathered in Washington for the society’s 27th annual National Lawyers Convention, highlighted on Thursday evening by a gala black tie dinner at the conclusion of which Judge Diane Sykes of the Seventh Circuit Court of Appeals treated the audience to a wide-ranging interview of Justice Clarence Thomas. The convention sessions, concluding late Saturday, have now been posted at the Federalist Society’s website. As a look at the various panels and programs will show, this year’s theme, “Textualism and the Role of Judges,” was addressed in a wide variety of domains.
Concerning the role of judges, classical liberals and libertarians, who have long urged judges to be more engaged than many conservatives have thought proper, will find several panels of particular interest. Our own Walter Olson spoke about the new age of litigation financing, for example, while Nick Rosenkranz addressed textualism and the Bill of Rights – a panel that also included the spirited remarks of Cato adjunct scholar Richard Epstein. See also Epstein’s discussion of intellectual property on another panel that first day.
Then too you won’t want to miss senior fellow Randy Barnett’s treatment of textualism and constitutional interpretation the next day, especially as he spars with two opponents on the left, or his Saturday debate against Judge J. Harvie Wilkinson III of the Fourth Circuit Court of Appeals, where the proposition before the two was “Resolved: Courts are Too Deferential to the Legislature.” And finally, our own Trevor Burrus was on hand for a book signing: The book he edited, A Conspiracy Against Obamacare: The Volokh Conspiracy and the Health Care Case, has just come out and is must reading for those who want to see how the issue of the day, and many days to come, was teed up, legally, by a dedicated band of libertarians before it reached the Supreme Court.
Last week, the big news in the trade agreement arena was the leak of a draft text on intellectual property (IP) in the Trans Pacific Partnership (TPP) talks. Tim Lee of the Washington Post (and formerly a Cato adjunct scholar) explains what’s in it:
The leaked draft is 95 pages long, and includes provisions on everything from copyright damages to rules for marketing pharmaceuticals. Several proposed items are drawn from Hollywood’s wish list. The United States wants all signatories to extend their copyright terms to the life of the author plus 70 years for individual authors, and 95 years for corporate-owned works. The treaty includes a long section, proposed by the United States, requiring the creation of legal penalties for circumventing copy-protection schemes such as those that prevent copying of DVDs and Kindle books.
The United States has also pushed for a wide variety of provisions that would benefit the U.S. pharmaceutical and medical device industries. The Obama administration wants to require the extension of patent protection to plants, animals, and medical procedures. It wants to require countries to offer longer terms of patent protection to compensate for delays in the patent application process. The United States also wants to bar the manufacturers of generic drugs from relying on safety and efficacy information that was previously submitted by a brand-name drug maker — a step that would make it harder for generic manufacturers to enter the pharmaceutical market and could raise drug prices.
While the critics pounced, defenders defended. Here’s the MPAA:
What the text does show … is that despite much hyperbole from free trade opponents, the U.S. has put forth no proposals that are inconsistent with U.S. law.
In response to this statement, it is worth noting two things. First, many of the critics of this IP text are not “free trade opponents.” They simply oppose overly strong IP protections. Many of them are actually for free trade, or at least not actively against it. Second, while these proposals may not be inconsistent with U.S. law, that doesn’t make them good policy.
I have a feeling that the IP aspect of the TPP talks is going to be very important for the future of IP in trade agreements. IP was kind of slipped into trade agreements quietly back in the early 1990s. But the recent backlash has been strong. How the TPP fares politically here in the U.S. – if and when negotiations are completed – could tell us a lot about what the future holds for IP in trade agreements.
Are you on Instagram? The Cato Institute is!
We joined the popular image-sharing site in late October. Follow us at http://instagram.com/catoinstitute.
Wondering how YOU can spread the message of liberty on Instagram? Make sure to come to this month’s New Media Lunch. Join the Cato Institute this Thursday at noon for a lunchtime presentation, followed by a roundtable discussion. Allen Gannett of TrackMaven will highlight some interesting discoveries from TrackMaven’s recently released study of Fortune 500 companies on Instagram and share tips for translating their success to the nonprofit world. Make sure to register, as space is limited.
Not in D.C.? We will be livestreaming Allen’s presentation. Just navigate to http://www.cato.org/live at noon Eastern Time this Thursday, November 21st. You can also join the conversation on Twitter using #NewMediaLunch.
Paul C. "Chip" Knappenberger and Patrick J. Michaels
Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”
A new paper just hit the scientific literature that argues that the apparent pause in the rise in global average surface temperatures during the past 16 years was really just a slowdown.
As you may imagine, this paper, by Kevin Cowtan and Robert Way, is being hotly discussed on the global warming blogs, with reactions ranging from a warm embrace by the global-warming-is-going-to-be-bad-for-us crowd to revulsion from the human-activities-have-no-effect-on-the-climate claque.
The lukewarmers (a school we take some credit for establishing) seem to be taking the results in stride. After all, the “pause,” as curious as it is/was, is not central to the primary argument: yes, human activities are pressuring the planet to warm, but the rate of warming is going to be much slower than is being projected by the collection of global climate models upon which mainstream projections of future climate change—and the resulting climate alarm (i.e., calls for emission regulations, etc.)—are based.
Under the adjustments to the observed global temperature history put together by Cowtan and Way, the models fare a bit better than they do with the unadjusted temperature record. That is, the observed temperature trend over the past 34 years (the period of record analyzed by Cowtan and Way) is a tiny bit closer to the average trend from the collection of climate models used in the new report from the U.N.’s Intergovernmental Panel on Climate Change (IPCC) than is the old temperature record.
Specifically, while the trend in observed global temperatures from 1979-2012 as calculated by Cowtan and Way is 0.17°C/decade, it is 0.16°C/decade in the temperature record compiled by the U.K. Hadley Center (the record that Cowtan and Way adjusted). Because of the sampling errors associated with trend estimation, these values are not significantly different from one another. Whether the 0.17°C/decade is significantly different from the climate model average simulated trend during that period of 0.23°C/decade is discussed extensively below.
But, suffice it to say that an insignificant difference of 0.01°C/decade in the global trend measured over more than 30 years is pretty small beer and doesn’t give model apologists very much to get happy over.
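For readers curious what “sampling errors associated with trend estimation” means in practice, here is a minimal sketch (ours, not Cowtan and Way’s code) of how a decadal trend and its standard error are estimated from annual temperature anomalies via ordinary least squares. The anomaly series below is synthetic, generated purely for illustration; real analyses would also account for autocorrelation in the residuals, which widens the uncertainty further.

```python
import numpy as np

def decadal_trend(years, anomalies):
    """OLS trend of annual anomalies; returns (trend, standard error),
    both expressed per decade."""
    years = np.asarray(years, dtype=float)
    anomalies = np.asarray(anomalies, dtype=float)
    n = len(years)
    x = years - years.mean()                  # center the predictor
    slope = (x @ anomalies) / (x @ x)         # deg C per year
    resid = anomalies - anomalies.mean() - slope * x
    se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))
    return slope * 10.0, se * 10.0            # convert to per-decade units

# Synthetic 1979-2012 series: a 0.17 deg C/decade trend plus noise
rng = np.random.default_rng(0)
yrs = np.arange(1979, 2013)
temps = 0.017 * (yrs - 1979) + rng.normal(0.0, 0.1, yrs.size)

trend, se = decadal_trend(yrs, temps)
print(f"trend = {trend:.2f} +/- {2 * se:.2f} deg C/decade (approx. 95% CI)")
```

With roughly 34 annual values, the 2-sigma uncertainty on a trend is typically a few hundredths of a degree per decade, which is why a 0.01°C/decade difference between two versions of the record is statistically indistinguishable.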
Instead, the attention is being deflected to “The Pause”—the leveling off of global surface temperatures during the past 16 years (give or take). Here, the new results from Cowtan and Way show that during the period 1997-2012, instead of a statistically insignificant rise at a rate of 0.05°C/decade, as contained in the “old” temperature record, the rise becomes a statistically significant 0.12°C/decade. “The Pause” is transformed into “The Slowdown,” and alarmists rejoice because global warming hasn’t stopped after all. (If the logic sounds backwards, it does to us as well: if you were worried about catastrophic global warming, wouldn’t you rejoice at findings indicating that future climate change will be only modest, rather than at results to the contrary?)
The science behind the new Cowtan and Way research is still being digested by the community of climate scientists and other interested parties alike. The main idea is that the existing compilations of the global average temperature are very data-sparse in the high latitudes. And since the Arctic (more so than the Antarctic) is warming faster than the global average, the lack of data there may mean that the global average temperature trend is underestimated. Cowtan and Way developed a methodology which relied on other limited sources of temperature information from the Arctic (such as floating buoys and satellite observations) to try to make an estimate of how the surface temperature was behaving in regions lacking more traditional temperature observations (the authors released an informative video explaining their research which may help you better understand what they did). They found that the warming in the data-sparse regions was progressing faster than the global average (especially during the past couple of years) and that when they included the data that they derived for these regions in the computation of the global average temperature, they found the global trend was higher than previously reported—just how much higher depended on the period over which the trend was calculated. As we showed, the trend more than doubled over the period from 1997-2012, but barely increased at all over the longer period 1979-2012.
Figure 1 shows the impact on the global average temperature trend for all trend lengths between 10 and 35 years (incorporating our educated guess as to what the 2013 temperature anomaly will be), and compares that to the distribution of climate model simulations of the same period. Statistically speaking, instead of there being a clear inconsistency (i.e., the observed trend value falls outside of the range which encompasses 95% of all modeled trends) between the observations and the climate model simulations for lengths ranging generally from 11 to 28 years and a marginal inconsistency (i.e., the observed trend value falls outside of the range which encompasses 90% of all modeled trends) for most of the other lengths, now the observations closely track the marginal inconsistency line, although trends of length 17, 19, 20, and 21 years remain clearly inconsistent with the collection of modeled trends. Still, throughout the entirety of the 35-yr period (ending in 2013), the observed trend lies far below the model average simulated trend (additional information on the impact of the new Cowtan and Way adjustments on the modeled/observed temperature comparison can be found here).
Figure 1. Temperature trends ranging in length from 10 to 35 years (ending in a preliminary 2013) calculated using the data from the U.K. Hadley Center (blue dots), the adjustments to the U.K. Hadley Center data made by Cowtan and Way (red dots) extrapolated through 2013, and the average of climate model simulations (black dots). The range that encompasses 90% (light grey lines) and 95% (dotted black lines) of climate model trends is also included.
The Cowtan and Way analysis is an attempt at using additional types of temperature information, or extracting “information” from records that have already told their stories, to fill in the missing data in the Arctic. There are concerns about the appropriateness of both the data sources and the methodologies applied to them.
A major one is in the applicability of satellite data at such high latitudes. The nature of the satellite’s orbit forces it to look “sideways” in order to sample polar regions. In fact, the orbit is such that the highest latitude areas cannot be seen at all. This is compounded by the fact that cold regions can develop substantial “inversions” of near-ground temperature, in which temperature actually rises with height such that there is not a straightforward relationship between the surface temperature and the temperature of the lower atmosphere where the satellites measure the temperature. If the nature of this complex relationship is not constant in time, an error is introduced into the Cowtan and Way analysis.
Another unresolved problem comes up when extrapolating land-based weather station data far into the Arctic Ocean. While land temperatures can bounce around a lot, much of the ocean is partially ice-covered for many months. Under “well-mixed” conditions, this ice cover constrains the near-surface temperature to values near the freezing point of salt water, whether or not the associated land station is much warmer or colder.
You can run this experiment yourself by filling a glass with a mix of ice and water and then making sure it is well mixed. The water surface temperature must hover around 32°F until all the ice melts. Given that the near-surface air temperature is close to the water temperature, the limitations of land data become obvious.
Considering all of the above, we advise caution with regard to Cowtan and Way’s findings. While adding high Arctic data should increase the observed trend, the nature of the data means that the amount of additional rise is subject to further revision. As they themselves note, there’s quite a bit more work to be done in this area.
In the meantime, their results have tentatively breathed a small hint of life back into the climate models, basically buying them a bit more time—time for either the observed temperatures to start rising rapidly as current models expect, or time for the modelers to try to fix/improve cloud processes, oceanic processes, and other processes of variability (both natural and anthropogenic) that lie behind what would otherwise be clearly overheated projections.
We’ve also taken a look at how “sensitive” the results are to the length of the ongoing pause/slowdown. Our educated guess is that the “bit” of time that the Cowtan and Way findings bought the models is only a few years long, and it is a fact, not a guess, that each additional year at the current rate of lukewarming increases the disconnection between the models and reality.
Cowtan, K., and R. G. Way, 2013. Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Quarterly Journal of the Royal Meteorological Society, doi: 10.1002/qj.2297.
Juan Carlos Hidalgo
Chile went to the polls yesterday in what was perhaps the most important presidential election since the return of democracy in 1990. Many foreign observers focused on the curiosity that the two leading candidates were both daughters of Air Force generals who chose opposing sides during the military coup that toppled socialist president Salvador Allende in 1973. But what was at stake in this election wasn’t Chile’s past, but its future.
Let’s first recapitulate where Chile stands today: Thanks to the free market reforms implemented since 1975 by the military government of Augusto Pinochet – that were subsequently deepened by the democratic center-left governments that ruled the country since 1990 – Chile can boast the following accomplishments:
- It’s the freest economy in Latin America and it stands 11th in the world (ahead of the United States) in the Economic Freedom of the World report.
- It has more than tripled its income per capita since 1990 to $19,100 (PPP), which is the highest in Latin America.
- According to the IMF, by 2017 Chile will reach an income per capita of $23,800, which is the official threshold to become a developed country.
- According to the UN Economic Commission on Latin America and the Caribbean (ECLAC), Chile has the most impressive poverty reduction record in Latin America in the last two decades. The poverty rate went down from 45% in the mid-1980s to 11% in 2011, the lowest in the region.
- It has the strongest democratic institutions of Latin America according to the Rule of Law Index of the World Justice Project.
- It’s the least corrupt country in Latin America according to Transparency International.
- Along with Costa Rica and Uruguay, it has the best record in Latin America on political rights and civil liberties, according to Freedom House.
- High income inequality, which has long been a sore point for many, has decreased in the last decade.
With such an impressive record, it’s quite puzzling that the leading candidate, former president Michelle Bachelet, is running again under a platform calling for changes that would significantly alter the Chilean model by increasing the role of the government in the economy. In particular, Bachelet is proposing free higher education to everyone, the abolition of for-profit private schools and universities, the introduction of a state-owned pension fund in the country’s private pension system, higher taxes on businesses and professionals, and even a new constitution.
Bachelet came in first in yesterday’s election with 46.7% of the vote – short of the 50% necessary to avoid a runoff. On December 15th she’ll face center-right candidate Evelyn Matthei, who came in second with 25%, in that runoff.
It’s very likely that Bachelet will win the runoff, but her governing coalition – which for the first time includes the Communist Party – fell short of the two-thirds majority needed to change the constitution. However, her coalition does have enough votes to push for her reforms on taxes, education, and pensions.
It is worth noting that, despite talk of Bachelet enjoying massive support among Chileans, not only did she fail to avoid a runoff, but she actually received fewer votes yesterday (3,070,012) than she did in the first round in 2005 (3,190,691). Much of this has to do with the fact that yesterday’s was Chile’s first presidential election with voluntary voting. Approximately 50% of Chileans able to vote didn’t show up at the polls. This means that Bachelet received the vote of only 22% of registered voters, hardly an overwhelming mandate for radical changes.
This doesn’t mean that Bachelet won’t push for those reforms, though. After all, her coalition captured a majority of the seats in Congress. Unfortunately, a large segment of Chile’s society seems to suffer from a “high expectations trap,” which involves the danger that a false sense of prosperity sets in before the country actually becomes rich. What we have seen in recent years is that the new middle class has become the driving force behind demands for the further expansion of the welfare state.
The future of the successful Chilean model will be at stake over the next four years.
Jeffrey A. Miron
Only a heartless libertarian could possibly object to bans on child labor, right? After all, no one wants to live in some Dickensian dystopia in which children toil endlessly under brutal conditions.
Unless, of course, bans harm, rather than help, both children and their families. And in a new working paper, economists Prashant Bharadwaj (UCSD), Leah Lakdawala (Michigan State), and Nicholas Li (Toronto), find just that. They
… examine the consequences of India’s landmark legislation against child labor, the Child Labor (Prohibition and Regulation) Act of 1986. … [and] show that child wages decrease and child labor increases after the ban. These results are consistent with a theoretical model … in which families use child labor to reach subsistence constraints and where child wages decrease in response to bans, leading poor families to utilize more child labor. The increase in child labor comes at the expense of reduced school enrollment.
And it gets worse. The authors
… also examine the effects of the ban at the household level. Using linked consumption and expenditure data, [they] find that along various margins of household expenditure, consumption, calorie intake and asset holdings, households are worse off after the ban.
Good intentions are just that: intentions, not results. The law of unintended consequences should never be ignored.
Douglas Walburg faces potential liability of $16-48 million. What heinous acts caused such astronomical damages? A violation of 47 C.F.R. § 64.1200(a)(3)(iv), an FCC regulation that enables lawsuits against senders of unsolicited faxes.
Walburg, however, never sent any unsolicited faxes; he was sued under the regulation by a class of plaintiffs for failing to include opt-out language in faxes sent to those who expressly authorized Walburg to send them the faxes.
The district court ruled for Walburg, holding that the regulation should be narrowly interpreted so as to require opt-out notices only for unsolicited faxes. But on appeal, the Federal Communications Commission, not previously party to the case, filed an amicus brief explaining that its regulation applies to previously authorized faxes too. Walburg argued that the FCC lacked statutory authority to regulate authorized advertisements. In response, the FCC filed another brief, arguing that the Hobbs Act prevents federal courts from considering challenges to the validity of FCC regulations when raised as a defense in a private lawsuit. Although the U.S. Court of Appeals for the Eighth Circuit recognized that Walburg’s argument may have merit, it declined to hear it and ruled that the Hobbs Act indeed prevents judicial review of administrative regulations except on appeal from prior agency review.
In this case, however, Walburg couldn’t have raised his challenge in an administrative setting because the regulation at issue outsources enforcement to private parties in civil suits! Moreover, having not been charged until the period for agency review lapsed, he has no plausible way to defend himself from the ruinous liability he will be subject to if not permitted to challenge the regulation’s validity. Rather than face those odds, Walburg has petitioned the Supreme Court to hear his case, arguing that the Eighth Circuit was wrong to deny him the right to judicial review without having to initiate a separate (and impossible) administrative review.
Cato agrees, and has joined the National Federation of Independent Business on an amicus brief supporting Walburg’s petition. We argue that the Supreme Court should hear the case because the Eighth Circuit’s ruling permits administrative agencies to insulate themselves from judicial review while denying those harmed by their regulations the basic due-process right to meaningfully defend themselves. The Court should hear the case because it offers the opportunity to resolve lower-court disputes about when the right to judicial review arises and whether a defendant can be forced to bear the burden of establishing a court’s jurisdiction.
The due-process implications raised in this case are important, and the Court would do well to adopt a rule consistent with the Eleventh Circuit’s holding on this issue—one that protects the right to immediately and meaningfully defend oneself from unlawful regulations. Otherwise, more and more Americans will find themselves on the bad end of obscene regulatory penalties imposed by unaccountable government agencies, with no real means to defend themselves.
The Court will decide whether to take Walburg v. Nack early in the new year.