Gerald P. O'Driscoll Jr.
The Senate Banking Committee just voted 14 to 8 to approve Janet Yellen’s nomination to be the new Chair of the Federal Reserve. She will likely go on to be confirmed by the full Senate.
Much of the coverage has focused on Yellen as a person, when the real story is the Fed as an institution. Sometimes individuals have a profound influence on Fed policy, as Paul Volcker did in the late 1970s and 1980s. Over time, however, the institutional structure of the central bank and the incentives facing policymakers matter more.
The Federal Reserve famously has a dual mandate of promoting maximum employment and price stability. The Federal Open Market Committee, which sets monetary policy, has great discretion in weighting the two policy goals. As a practical matter, full employment receives the greater weight the vast majority of the time. That is because the Fed is subject to pressures similar to those facing the members of Congress to whom it must report. In the short run, voters want to see more job creation. That is especially true today, with the United States experiencing weak growth and anemic job creation.
Never mind that the Fed is not capable of stimulating job creation, at least not in a sustained way over time. It has a jobs mandate and has created expectations that it can stimulate job growth with monetary policy. The Fed became an inflation-fighter under Volcker only when high inflation produced strong political currents in favor of fighting inflation, even at the cost of recession and lost jobs.
The Federal Reserve claims political independence, but it has enjoyed such independence only rarely. Even Volcker could make tough decisions only because he was supported by President Carter, who appointed him, and President Reagan, who reappointed him. Conventionally defined inflation is low now, so the Fed under any likely Chair would continue its program of monetary stimulus. Perhaps Yellen is personally inclined to continue it longer than some other candidates might. But any Fed chief would face the same pressures to “do something” to enhance job growth, even if the Fed’s policy tools are not effective.
The prolonged period of low interest rates has made the Fed the enabler of the federal government’s fiscal deficits. Low interest rates have kept down the government’s borrowing costs, at least compared to what they would have been under “normal” interest rates of 3-4 percent.
Congress and the president have been spared a fiscal crisis, and thus repeatedly punted on fiscal reform. They are likely to continue doing so until rising interest rates precipitate a crisis. How long that can be postponed remains an open question.
There’s been much ink spilled the past few days over U.S. Secretary of Education Arne Duncan’s defense of the Common Core, delivered as an obnoxious attack on white, suburban women. Proclaimed Duncan to a meeting of the Council of Chief State School Officers (one of the Core’s progenitors):
It’s fascinating to me that some of the pushback is coming from, sort of, white suburban moms who – all of a sudden – their child isn’t as brilliant as they thought they were and their school isn’t quite as good as they thought they were, and that’s pretty scary.
Much of the uproar over Duncan’s attack has been over his injecting race and sex into the Common Core debate, and that certainly was unnecessary. But much more concerning to me – and indicative of the fundamental problem with federally driven national standardization – is the clear message sent by Duncan’s denunciation of Jane Suburbia: average Americans are either too dull or too blinkered to do what’s best for their kids. The masses need their betters in government – politicians, bureaucrats – to control their lives.
Alas, this has been a subtext of almost the entire defense of the Core. Every time supporters decide to smear opponents primarily as “misinformed” or “conspiracy theorists,” they imply that people who are fighting for control of what their children will learn are either too ignorant, or too goofy, to matter.
Of course, there are some opponents who don’t get all the facts right about the Common Core, but supporters ignore that many of these people are just finding out about the Core. Unlike major Core supporters, many opponents – often parents and plain ol’ concerned citizens – haven’t been working on the Core for years. And even when opponents use such regrettably over-the-top rhetoric as calling the Common Core “Commie Core,” they are ultimately making a legitimate point: the federally driven Core is intended to make the learning outcomes of all public schools the same – “common” is in the name, for crying out loud! – and in so doing, nationalize learning. At the very least, that’s not a move in the libertarian direction.
Every once in a while, Core supporters will openly air their basic distrust of average Americans. If you go to the 53:10 mark of our Common Core “Great Debate,” you’ll catch just such an admission by Chester Finn, president of the Core-championing Thomas B. Fordham Institute. In response to an explanation of how free markets enable average people to smartly consume things about which they are not experts, Finn declares that most parents won’t do even easy work to make informed choices. Then he asks, “is that a way to run a society?”
And there it is: In the end, the Common Core, and all the government power behind it, is ultimately about experts running society rather than letting free people govern themselves. Why? Because parents – “the people” – are thought either incapable of caring for their children themselves, or unwilling to do so.
This attitude is fundamentally at odds with maintaining a free society. It declares that government must control what children learn, and in so doing gives government – not free people – the power over what the next generation of Americans will think. This is not to say that the Common Core is intended to inculcate values and attitudes – most supporters probably just want to better furnish skills – but it will nevertheless couple power over what the schools teach with an attitude that is fundamentally corrupting: I know what is best, and must make you do it. And if your betters think you can’t be trusted to teach your child about something as unthreatening as the ABCs, what might they eventually require – or forbid – in teaching about religion, or guns, or climate change?
As has been the case in the past, Secretary Duncan has actually done Common Core opponents a huge favor in an effort to take them down. But this may be his most important contribution yet, revealing the supremely threatening contempt in which he seems to hold the average parent, and which drives the Common Core.
My new study on the Transportation Security Administration mainly focuses on the agency’s poor management and performance. The TSA has a near-monopoly on security screening at U.S. airports, and monopoly organizations usually end up bloated, inefficient, and prone to providing low-quality services.
The study proposes contracting out or “privatizing” airport screening, which is the structure of aviation security used successfully in Canada and many European countries.
I briefly discuss some of the civil liberties problems surrounding TSA. Note that Cato’s Jim Harper also addresses those issues in his work, as does Robert Poole of Reason Foundation. I noticed this recent blog post by Poole that nicely summarizes some of the realities of TSA, terrorism, and civil liberties:
A couple of years ago Jonathan Corbett, a tech entrepreneur from Miami, posted videos online showing him successfully passing through TSA airport body scanners with a metal box concealed under his clothing, seeking to demonstrate that the scanners are an ineffective replacement for walk-through metal detectors for primary screening. In 2010 he filed a lawsuit contending that body-scanning and pat-downs are both unreasonable searches that violate the Fourth Amendment.
As part of the discovery process, TSA provided Corbett with 4,000 pages of documents, many of them classified. He was allowed to produce two versions of his brief, one containing extracts of classified material, and available only to the court, and a heavily redacted version which could be made public. But as several news sites reported last month, a clerk in the US Court of Appeals (11th District) mistakenly posted the classified version online, and it was quickly noticed and reproduced on various websites. Although the court issued a gag order prohibiting Corbett from talking about the classified material, there was no way to stop others from doing so.
Among the things we’ve learned from TSA Civil Aviation Threat Assessments that Corbett cited in his brief are the following:
- “As of mid-2011, terrorist threat groups present in the Homeland are not known to be actively plotting against civil aviation targets or airports; instead, their focus is on fund-raising, recruiting, and propagandizing.”
- No terrorist has attempted to bring explosives onto an aircraft via a U.S. airport in 35 years, and even worldwide, the use of explosives on aircraft is “extremely rare.”
- There have been no attempted domestic hijackings of any kind since 9/11.
- The government concedes that it would be difficult to repeat a 9/11-type attack due to strengthened cockpit doors and passengers’ willingness to challenge would-be hijackers.
Based on these points, Corbett argues that primary-screening searches via body-scanners or pat-downs are unreasonable under the Fourth Amendment. He agrees that although those searches have not turned up any would-be terrorists, they have detected illegal drugs. But that is irrelevant to aviation security, which is the only purported rationale for such intrusive searches without prior probable cause.
Corbett does not directly address whether the whole array of TSA airport screening measures may have deterred attacks that might have happened without those measures in place. But that is the kind of question that can be—and has been—assessed quantitatively by security experts such as Mark Stewart and John Mueller, whose work I have cited several times in previous issues of this newsletter. And those assessments suggest that body scanners and Federal Air Marshals, among other measures, cost vastly more than they are worth.
Whatever the outcome of Corbett’s suit—and I hope he prevails—Congress needs to take a hard look at the cost-effectiveness of much of what TSA is doing, in light of the revelations inadvertently made public by this case.
Poole has done superb work over the years, not only on airport screening, but also on airport and air traffic control privatization. Bob’s work can be found here, and our joint article on airports and ATC is here.
Hans Riegel recently died at age 90. He changed the world for the better. He brought us the treat known as gummi bears.
Politicians routinely crusade against wealth and inequality, but both arise naturally when people create products and offer services that benefit the rest of us.
Today people live on their cell phones. Once we didn’t even have telephones. Thank Alexander Graham Bell, born in Edinburgh, Scotland.
The internal combustion engine auto came from Karl Benz. He was a design engineer who in 1886 won a patent for a “motor car.”
In 1912, Clarence Crane created the hard candy known as Life Savers.
Helen Greiner, a fan of Star Wars’ R2D2, came up with the Roomba vacuum cleaner robot in 2002.
John Mauchly and J. Presper Eckert created the first general-purpose electronic computer in 1946—the Electronic Numerical Integrator and Computer, or ENIAC.
Thomas Edison gave us working light bulbs in 1879. Joseph Swan might have beaten Edison, but the latter bought Swan’s patent.
The 3-D printer was created in 1983 by Chuck Hull. His first creation: a tea cup.
General Electric engineer James Wright attempted to make artificial rubber during World War II. He failed, but ad man Peter Hodgson later discovered the malleable material and began selling Silly Putty.
While developing magnetrons for radar in World War II, Percy Spencer noticed that a candy bar in his pocket melted. The result was the microwave oven.
Credit for television goes to Russian émigré Vladimir Zworykin. During the 1920s he developed the iconoscope, a television transmission tube, and the kinescope, a television receiver.
In 1927, Austrian Eduard Haas developed the peppermint candy, “pfefferminz” in German, known as PEZ.
Scotsman Charles Macintosh came up with the waterproof Mackintosh raincoat. A store clerk turned chemist, he figured out in 1823 how to make waterproof fabric.
Infections once were common killers. But in 1928 another Scot, Alexander Fleming, discovered penicillin.
Edwin Binney and Harold Smith owned an industrial pigment company and in 1903 combined industrial pigments with paraffin wax to create crayons. By 1996, 100 billion crayons had been produced.
In 1935, Frederick McKinley Jones developed portable air-conditioning for trucks. Jones became the first African-American elected to the American Society of Refrigeration Engineers.
John Pemberton, an Atlanta pharmacist, developed Coca-Cola’s original formula in 1886, in response to a ban on the sale of his wine-coca “patent medicine.”
Canadian-born James Naismith studied theology and worked at a Massachusetts YMCA. In 1891, he relied on a childhood game to develop basketball as a sport to be played indoors in the winter.
In 1884, Lewis Waterman developed the fountain pen. He took ten years to perfect his invention.
Arthur Fry gave the world the “Post-It Note” in 1974. A chemist at 3M, he wanted a bookmark that would cling to church hymnal pages and thought of a failed glue created by a colleague.
Ruth Wakefield, regionally famous for her cooking, ran out of baker’s chocolate while making cookies and in 1930 substituted chunks of semi-sweet chocolate. Her recipe increased chocolate sales and became known to Nestle, which consequently created chocolate chips.
In 1964, while seeking a new synthetic fiber, Stephanie Kwolek came up with the well-nigh indestructible Kevlar—commonly part of bullet-proof vests.
John Harvey Kellogg was a vegetarian who headed a Michigan sanitarium. Faced with wheat gone stale, in 1894 he processed it into dough anyway and ended up with flakes.
These are just a few of the inventions which surround and enrich us. Human creativity and ingenuity—punctuated with a mix of luck and hard work—constantly transform our lives.
As I pointed out in my latest Forbes online column:
Few things better illustrate Adam Smith’s axiom that people can simultaneously benefit the rest of us while pursuing their own interest. Of course people should do good. But they often do best while trying to advance themselves.
Some inventors just love to create. Others hope for money, glory, or something else. Whatever their motives, the rest of us gain.
Like being able to enjoy gummi bears. Hans Riegel, RIP!
Last week I noted that it was “long past time for the U.S. Department of Justice to drop its embarrassing lawsuit which would keep black kids in failing schools.” The Louisiana Department of Education released a study that completely undermined the DOJ’s case against the state’s school voucher program, showing that the program increased racial integration in most of the schools under federal desegregation orders and had a minuscule impact on the remainder.
Today, Michael Warren of the Weekly Standard reports that the DOJ has dropped part of its fight against school choice in Louisiana:
The Obama administration’s Justice Department has dropped a lawsuit aiming to stop a school voucher program in the state of Louisiana. A ruling Friday by a United States district court judge revealed that the federal government has “abandoned” its pursuit of an injunction against the Louisiana Scholarship Program, a state-funded voucher program designed to give students in failing public schools the opportunity to attend better performing public or private schools.
“We are pleased that the Obama Administration has given up its attempt to end the Louisiana Scholarship Program with this absurd lawsuit,” said Louisiana governor Bobby Jindal, a Republican, in a statement. “It is great the Department of Justice has realized, at least for the time being, it has no authority to end equal opportunity of education for Louisiana children.”
The move may have resulted from the bad press or a sudden acceptance of common sense, but more likely it was simply a legal maneuver to prevent the Black Alliance for Educational Options and the Goldwater Institute, representing parents of voucher recipients, from intervening in the lawsuit as defendants. As Warren reports:
On Friday, Judge Ivan Lemelle of the U.S. district court of the Eastern District of Louisiana ruled the parents could not intervene in the case because the feds are “no longer seeking injunctive relief at this time.” Lemelle explained that in the intervening months since the Justice Department filed suit, it had made clear both in a supplemental filing and in its opposition to the parent group’s motion to intervene that it was not seeking in its suit to end the voucher program or take away vouchers from students.
Lemelle continued: “The Court reads these two statements as the United States abandoning its previous request that the Court ‘permanently enjoin the State from issuing any future voucher awards to students unless and until it obtains authorization from the federal court overseeing the applicable desegregation case.’”
Lemelle will hold an oral hearing on Friday, November 22, during which Justice will make its case for the federal review process of the voucher program. In his statement on Friday’s ruling, Jindal criticized the federal government’s efforts.
“The centerpiece of the Department of Justice’s ‘process’ is a requirement that the state may not tell parents, for 45 days, that their child has been awarded a scholarship while the department decides whether to object to the scholarship award. The obvious purpose of this gag order would be to prevent parents from learning that the Department of Justice might try to take their child’s scholarship away if it decides that the child is the wrong race,” said Jindal. “The updated Department of Justice request reeks of federal government intrusion that would put a tremendous burden on the state, along with parents and teachers who want to participate in school choice.”
In other words, the DOJ is still seeking the legal authority to prevent low-income kids from escaping failing public schools if the feds say they have the wrong skin color.
Last week, A Conspiracy Against Obamacare: The Volokh Conspiracy and the Health Care Case was released, of which I am proud to be the editor. The book compiles the discussions and debates about the Affordable Care Act that occurred on the legal blog the Volokh Conspiracy, supplemented with new material. The posts are stitched together into a narrative structure. As a result, you can see the constitutional arguments against the Affordable Care Act develop in real time, from before the law was passed all the way to the Supreme Court.
The book documents a bellwether moment in the history of legal academia: A legal academic blog influencing major Supreme Court litigation. And not just major Supreme Court litigation, but a case that went from a much derided challenge to the biggest and most watched case in decades. As former Solicitor General Paul D. Clement, who expertly argued the case before the Court, kindly wrote in the foreword, “The Constitution had its Federalist Papers, and the challenge to the Affordable Care Act had the Volokh Conspiracy.”
In the introduction, I discuss the constitutional arguments against the law in a more abstract way, as well as describe how the law is destined to fail due to poor design. We are seeing the beginning of those failures now, but I fear we ain’t seen nothin’ yet.
It was not much commented on at the time–the administration and the law’s supporters were too busy spiking the ball–but the Supreme Court’s decision will speed up the law’s inevitable failures. As I describe in the introduction:
Due to the chief justice’s unpredictable opinion, we are now likely stuck with a law that I fear will seriously damage the health of Americans. What’s more, attempts to further centralize power will not stop at the individual mandate. When the law fails, as I predict it will, it will be said that the federal government lacked enough power to make it work. The chief justice’s opinion gives people a real choice whether to comply with the requirement to purchase insurance or pay a “tax.” Many people will not, and as the price of insurance goes up, more and more people will choose to remain uninsured. This will certainly be called a “loophole.” Similarly, the Court also gave states a choice about whether to comply with the Affordable Care Act’s Medicaid expansion. Another “loophole.” Finally, the states that don’t create health care exchanges will also throw wrenches in the law’s overall scheme. “Loopholes” all around. Having freedom of choice in deeply personal health care decisions, however, is not a loophole.
When the time comes to revisit the Affordable Care Act, those choices by free, sovereign entities (citizens and states) will be blamed for the law’s dysfunctions. To paraphrase philosopher Robert Nozick, liberty disrupts patterns. Free choice inevitably upsets the carefully crafted plans of Washington.
As a solution to the law’s problems, more power will be proposed. A few voices, such as many who write for the Volokh Conspiracy and those of us at the Cato Institute, will strenuously argue that the problem is not a lack of power but a lack of freedom. I am not optimistic, however, that very many entrenched bureaucrats and politicians will locate the problem in the mirror rather than in the freedoms of the American people.
If the Affordable Care Act keeps going south at this rate, we may need to prepare to have that debate sooner than we expected.
The Federalist Society came into being in 1982 after a small group of conservatives and libertarians, concerned about the state of the law and the legal academy in particular, gathered for a modest conference at the Yale Law School, after which two law-student chapters were formed at Yale and at the University of Chicago. Quickly thereafter chapters sprang up at other law schools across the country. And in 1986 those students, now lawyers, started forming lawyer chapters in the cities where they practiced. Today the Federalist Society is more than 55,000 strong, its membership drawn from all corners of the law and beyond.
Toward the end of this past week many of those members gathered in Washington for the society’s 27th annual National Lawyers Convention, highlighted on Thursday evening by a gala black-tie dinner at the conclusion of which Judge Diane Sykes of the Seventh Circuit Court of Appeals treated the audience to a wide-ranging interview of Justice Clarence Thomas. The convention sessions, concluding late Saturday, have now been posted at the Federalist Society’s website. As a look at the various panels and programs will show, this year’s theme, “Textualism and the Role of Judges,” was addressed in a wide variety of domains.
Concerning the role of judges, classical liberals and libertarians, who have long urged judges to be more engaged than many conservatives have thought proper, will find several panels of particular interest. Our own Walter Olson spoke about the new age of litigation financing, for example, while Nick Rosenkranz addressed textualism and the Bill of Rights – a panel that also included the spirited remarks of Cato adjunct scholar Richard Epstein. See also Epstein’s discussion of intellectual property on another panel that first day.
Then too you won’t want to miss senior fellow Randy Barnett’s treatment of textualism and constitutional interpretation the next day, especially as he spars with two opponents on the left, or his Saturday debate against Judge J. Harvie Wilkinson III of the Fourth Circuit Court of Appeals, where the proposition before the two was “Resolved: Courts are Too Deferential to the Legislature.” And finally, our own Trevor Burrus was on hand for a book signing: The book he edited, A Conspiracy Against Obamacare: The Volokh Conspiracy and the Health Care Case, has just come out and is must reading for those who want to see how the issue of the day, and many days to come, was teed up, legally, by a dedicated band of libertarians before it reached the Supreme Court.
Last week, the big news in the trade agreement arena was the leak of a draft text on intellectual property (IP) in the Trans-Pacific Partnership (TPP) talks. Tim Lee of the Washington Post (and formerly a Cato adjunct scholar) explains what’s in it:
The leaked draft is 95 pages long, and includes provisions on everything from copyright damages to rules for marketing pharmaceuticals. Several proposed items are drawn from Hollywood’s wish list. The United States wants all signatories to extend their copyright terms to the life of the author plus 70 years for individual authors, and 95 years for corporate-owned works. The treaty includes a long section, proposed by the United States, requiring the creation of legal penalties for circumventing copy-protection schemes such as those that prevent copying of DVDs and Kindle books.
The United States has also pushed for a wide variety of provisions that would benefit the U.S. pharmaceutical and medical device industries. The Obama administration wants to require the extension of patent protection to plants, animals, and medical procedures. It wants to require countries to offer longer terms of patent protection to compensate for delays in the patent application process. The United States also wants to bar the manufacturers of generic drugs from relying on safety and efficacy information that was previously submitted by a brand-name drug maker — a step that would make it harder for generic manufacturers to enter the pharmaceutical market and could raise drug prices.
While the critics pounced, defenders defended. Here’s the MPAA:
What the text does show … is that despite much hyperbole from free trade opponents, the U.S. has put forth no proposals that are inconsistent with U.S. law.
In response to this statement, it is worth noting two things. First, many of the critics of this IP text are not “free trade opponents.” They simply oppose overly strong IP protections. Many of them are actually for free trade, or at least not actively against it. Second, while these proposals may not be inconsistent with U.S. law, that doesn’t make them good policy.
I have a feeling that the IP aspect of the TPP talks is going to be very important for the future of IP in trade agreements. IP was kind of slipped into trade agreements quietly back in the early 1990s. But the recent backlash has been strong. How the TPP fares politically here in the U.S. – if and when negotiations are completed – could tell us a lot about what the future holds for IP in trade agreements.
Are you on Instagram? The Cato Institute is!
We joined the popular image-sharing site in late October. Follow us at http://instagram.com/catoinstitute.
Wondering how YOU can spread the message of liberty on Instagram? Make sure to come to this month’s New Media Lunch. Join the Cato Institute this Thursday at noon for a lunchtime presentation, followed by a roundtable discussion. Allen Gannett of TrackMaven will highlight some interesting discoveries from TrackMaven’s recently released study of Fortune 500 companies on Instagram and share tips for translating their success to the nonprofit world. Make sure to register, as space is limited.
Not in D.C.? We will be livestreaming Allen’s presentation. Just navigate to http://www.cato.org/live at noon Eastern Time this Thursday, November 21st. You can also join the conversation on Twitter using #NewMediaLunch.
Paul C. "Chip" Knappenberger and Patrick J. Michaels
Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”
A new paper that just hit the scientific literature argues that the apparent pause in the rise of global average surface temperatures during the past 16 years was really just a slowdown.
As you may imagine, this paper, by Kevin Cowtan and Robert Way, is being hotly discussed in the global warming blogs, with reactions ranging from a warm embrace by the global-warming-is-going-to-be-bad-for-us crowd to revulsion from the human-activities-have-no-effect-on-the-climate claque.
The lukewarmers (a school we take some credit for establishing) seem to be taking the results in stride. After all, the “pause,” as curious as it is/was, is not central to the primary argument that, yes, human activities are pressuring the planet to warm, but that the rate of warming is going to be much slower than is being projected by the collection of global climate models (upon which mainstream projections of future climate change—and the resulting climate alarm (i.e., calls for emission regulations, etc.)—are based).
Under the adjustments to the observed global temperature history put together by Cowtan and Way, the models fare a bit better than they do with the unadjusted temperature record. That is, the observed temperature trend over the past 34 years (the period of record analyzed by Cowtan and Way) is a tiny bit closer to the average trend from the collection of climate models used in the new report from the U.N.’s Intergovernmental Panel on Climate Change (IPCC) than is the old temperature record.
Specifically, while the trend in observed global temperatures from 1979-2012 as calculated by Cowtan and Way is 0.17°C/decade, it is 0.16°C/decade in the temperature record compiled by the U.K. Hadley Center (the record that Cowtan and Way adjusted). Because of the sampling errors associated with trend estimation, these values are not significantly different from one another. Whether the 0.17°C/decade is significantly different from the climate model average simulated trend during that period of 0.23°C/decade is discussed extensively below.
But, suffice it to say that an insignificant difference of 0.01°C/decade in the global trend measured over more than 30 years is pretty small beer and doesn’t give model apologists very much to get happy over.
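For readers who want to see the mechanics behind numbers like these, here is a minimal, purely illustrative Python sketch of how a least-squares trend (in °C/decade) and its sampling uncertainty can be estimated from a series of annual anomalies. The anomaly series is synthetic, the calculation ignores autocorrelation, and it is not the code used by Cowtan and Way or by the Hadley Center; it simply shows why two trends differing by 0.01°C/decade are statistically indistinguishable.

```python
# Illustrative sketch only: estimate a linear trend (deg C/decade) and its
# sampling uncertainty from annual global temperature anomalies.
# The anomaly series below is synthetic, not HadCRUT4 or Cowtan and Way data.
import numpy as np

years = np.arange(1979, 2013)  # 1979-2012, the period analyzed in the paper
rng = np.random.default_rng(0)
anomalies = 0.016 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Ordinary least-squares fit: slope is in deg C per year
slope, intercept = np.polyfit(years, anomalies, 1)
residuals = anomalies - (slope * years + intercept)

# Standard error of the slope (ignores autocorrelation, which real analyses
# must handle and which widens the uncertainty range)
n = years.size
se = np.sqrt(np.sum(residuals**2) / (n - 2) / np.sum((years - years.mean())**2))

print(f"trend: {slope*10:.3f} +/- {2*se*10:.3f} deg C/decade (rough 95% range)")
```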
Instead, the attention is being deflected to “The Pause”—the leveling off of global surface temperatures during the past 16 years (give or take). Here, the new results from Cowtan and Way show that during the period 1997-2012, instead of a statistically insignificant rise at a rate of 0.05°C/decade, as contained in the “old” temperature record, the rise becomes a statistically significant 0.12°C/decade. “The Pause” is transformed into “The Slowdown,” and alarmists rejoice because global warming hasn’t stopped after all. (If the logic sounds backwards, it does to us as well: if you were worried about catastrophic global warming, wouldn’t you rejoice at findings indicating that future climate change is going to be only modest, rather than at results to the contrary?)
The science behind the new Cowtan and Way research is still being digested by climate scientists and other interested parties alike. The main idea is that the existing compilations of the global average temperature are very data-sparse in the high latitudes. And since the Arctic (more so than the Antarctic) is warming faster than the global average, the lack of data there may mean that the global average temperature trend is being underestimated. Cowtan and Way developed a methodology that relies on other, more limited sources of temperature information from the Arctic (such as floating buoys and satellite observations) to estimate how the surface temperature was behaving in regions lacking more traditional temperature observations (the authors released an informative video explaining their research which may help you better understand what they did).

They found that the warming in the data-sparse regions was progressing faster than the global average (especially during the past couple of years), and that when they included the data they derived for these regions in the computation of the global average temperature, the global trend was higher than previously reported—just how much higher depended on the period over which the trend was calculated. As we showed above, the trend more than doubled over the period 1997-2012, but barely increased at all over the longer period 1979-2012.
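To illustrate just the coverage effect, here is a small, hypothetical Python sketch comparing an area-weighted global mean computed while simply omitting the Arctic with one computed after infilling the missing latitude bands from the nearest observed band. The zonal anomalies are made-up numbers, and the nearest-band fill is a crude stand-in for the kriging and satellite-hybrid methods Cowtan and Way actually employed.

```python
# Toy illustration of the coverage issue: an area-weighted global mean with
# missing Arctic latitude bands vs. one with those bands infilled. The zonal
# anomalies are invented; this is not Cowtan and Way's kriging/hybrid method.
import numpy as np

lats = np.arange(-87.5, 90, 5.0)         # centers of 5-degree latitude bands
weights = np.cos(np.radians(lats))       # area weighting by latitude

# Made-up zonal-mean anomalies with Arctic-amplified warming
anoms = 0.2 + 0.6 * np.clip((lats - 30.0) / 60.0, 0.0, 1.0)

observed = lats < 80.0                   # pretend there are no data poleward of 80N
mean_without_arctic = np.average(anoms[observed], weights=weights[observed])

filled = anoms.copy()
filled[~observed] = anoms[observed][-1]  # fill with the nearest observed band
mean_with_infill = np.average(filled, weights=weights)

print(f"global mean, Arctic omitted:  {mean_without_arctic:.3f} C")
print(f"global mean, Arctic infilled: {mean_with_infill:.3f} C")
```

Leaving the Arctic out effectively assumes it behaves like the rest of the globe; infilling it with warmer nearby values nudges the global mean (and, over time, the trend) upward, which is the heart of the Cowtan and Way result.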
Figure 1 shows the impact on the global average temperature trend for all trend lengths between 10 and 35 years (incorporating our educated guess as to what the 2013 temperature anomaly will be), and compares that to the distribution of climate model simulations over the same periods. Statistically speaking, instead of there being a clear inconsistency (i.e., the observed trend value falls outside of the range which encompasses 95% of all modeled trends) between the observations and the climate model simulations for lengths ranging generally from 11 to 28 years, and a marginal inconsistency (i.e., the observed trend value falls outside of the range which encompasses 90% of all modeled trends) for most of the other lengths, the observations now track closely along the marginal inconsistency line, although trends of length 17, 19, 20, and 21 years remain clearly inconsistent with the collection of modeled trends. Still, throughout the entirety of the 35-yr period (ending in 2013), the observed trend lies far below the model average simulated trend (additional information on the impact of the new Cowtan and Way adjustments on the modeled/observed temperature comparison can be found here).
Figure 1. Temperature trends ranging in length from 10 to 35 years (ending in a preliminary 2013) calculated using the data from the U.K. Hadley Center (blue dots), the adjustments to the U.K. Hadley Center data made by Cowtan and Way (red dots) extrapolated through 2013, and the average of climate model simulations (black dots). The range that encompasses 90% (light grey lines) and 95% (dotted black lines) of climate model trends is also included.
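The consistency test behind Figure 1 can be sketched in a few lines: collect the trends from a set of model runs, find the range that encompasses 90% (or 95%) of them, and check whether the observed trend falls inside. The model trends below are random placeholders, not the actual model values plotted in Figure 1.

```python
# Sketch of the model/observation consistency check described in the text.
# Model trends here are random placeholders, not the trends behind Figure 1.
import numpy as np

rng = np.random.default_rng(1)
model_trends = rng.normal(0.23, 0.05, 100)   # deg C/decade, invented spread
observed_trend = 0.17                        # deg C/decade (1979-2012 value)

lo90, hi90 = np.percentile(model_trends, [5.0, 95.0])
lo95, hi95 = np.percentile(model_trends, [2.5, 97.5])

print("within 90% of modeled trends:", lo90 <= observed_trend <= hi90)
print("within 95% of modeled trends:", lo95 <= observed_trend <= hi95)
```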
The Cowtan and Way analysis is an attempt at using additional types of temperature information, or extracting “information” from records that have already told their stories, to fill in the missing data in the Arctic. There are concerns about the appropriateness of both the data sources and the methodologies applied to them.
A major one is in the applicability of satellite data at such high latitudes. The nature of the satellite’s orbit forces it to look “sideways” in order to sample polar regions. In fact, the orbit is such that the highest latitude areas cannot be seen at all. This is compounded by the fact that cold regions can develop substantial “inversions” of near-ground temperature, in which temperature actually rises with height such that there is not a straightforward relationship between the surface temperature and the temperature of the lower atmosphere where the satellites measure the temperature. If the nature of this complex relationship is not constant in time, an error is introduced into the Cowtan and Way analysis.
Another unresolved problem comes up when extrapolating land-based weather station data far into the Arctic Ocean. While land temperatures can bounce around a lot, much of the Arctic Ocean is at least partially ice-covered for many months of the year. Under “well-mixed” conditions, this constrains the near-surface temperature to values near the freezing point of salt water, whether or not the associated land station is much warmer or colder.
You can run this experiment yourself by filling a glass with a mix of ice and water and making sure it is well mixed. The water’s surface temperature will hover around 32°F (the freezing point) until all the ice melts. Given that the near-surface air temperature stays close to the water temperature, the limitations of extrapolating from land data become obvious.
Considering all of the above, we advise caution with regard to Cowtan and Way’s findings. While adding high Arctic data should increase the observed trend, the nature of the data means that the amount of additional rise is subject to further revision. As they themselves note, there’s quite a bit more work to be done in this area.
In the meantime, their results have tentatively breathed a small hint of life back into the climate models, basically buying them a bit more time—time for either the observed temperatures to start rising rapidly as current models expect, or time for the modelers to try to fix/improve cloud processes, oceanic processes, and other processes of variability (both natural and anthropogenic) that lie behind what would otherwise be clearly overheated projections.
We’ve also taken a look at how “sensitive” the results are to the length of the ongoing pause/slowdown. Our educated guess is that the “bit” of time that the Cowtan and Way findings bought the models is only a few years long, and it is a fact, not a guess, that each additional year at the current rate of lukewarming increases the disconnection between the models and reality.
Cowtan, K., and R. G. Way, 2013. Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Quarterly Journal of the Royal Meteorological Society, doi: 10.1002/qj.2297.