Cato Op-Eds

Individual Liberty, Free Markets, and Peace

To capitalism’s detractors, Nike symbolizes the Dickensian horrors of trade and globalization – a world ripe for mass exploitation of workers and the environment for the impious purpose of padding the bottom line. They are offended by President Obama’s selection of Nike headquarters as the setting for his speech last week, in which he touted the benefits of the emerging Trans-Pacific Partnership agreement. But Nike exemplifies the redeeming virtues of globalization and illustrates how self-interested capitalism satisfies popular demands – including, even, the demands of its detractors.

Fealty to the reviled bottom line incentivizes companies like Nike to deliver, in a sustainable manner, what those genuinely concerned about development claim to want. U.S. and other Western investments in developing-country manufacturing and assembly operations tend to raise local labor, environmental, and product safety standards. Western companies usually offer higher wages than the local average to attract the best workers, which can reduce the total cost of labor through higher productivity and lower employee turnover. Western companies often use production technologies and techniques that meet higher standards and bring best practices that are emulated by local firms, leading to improvements in working conditions, environmental outcomes, and product safety.

Perhaps most significantly, companies like Nike are understandably protective of their brands, which are usually their most valuable assets. In an age when people increasingly demand social accountability as an attribute of the products and services they consume, mere allegations – let alone confirmed instances – of labor abuses, safety violations, tainted products, environmental degradation, and other objectionable practices can quickly degrade or destroy a brand. Western brands have every incentive to find scrupulous supply chain partners and even to submit to third-party verification of all sorts of practices in developing countries, because the verdict of the marketplace can be swift and unambiguous.

Nike remembers the boycotts and the profits it lost when global reaction turned against the “sweatshop” working conditions with which it was once associated. Mattel’s bottom line took a beating when some of its toys manufactured in certain Chinese factories were found to contain dangerous levels of lead paint. There have been numerous examples of lax oversight and substandard conditions, but increasingly they are the exception and not the rule.

Obviously, most Americans would find developing-country factory conditions and practices to be, on average, inferior to those in the United States. But the proper comparison is not between wages and conditions in a factory in Ho Chi Minh City and one in Akron, Ohio, or between Akron in 2015 and Akron in 1915. Trade and globalization scolds who would hamper investment flows to developing countries – demanding that poor countries price themselves out of global supply-chain networks by adopting rich-country standards – should stop and ponder the conditions that would prevail in those places without Western investment, because that is where their demands ultimately lead.

Even New York Times columnist Nicholas Kristof – an icon of the Left – has argued that factory work offers a step up the ladder for billions of impoverished people around the world. His stories about the limited options for subsistence among Cambodian women before the arrival of apparel factories – picking through garbage dumps, backbreaking agricultural work, prostitution – remind us that development is a process, not something accomplished by waving a magic wand. What employment options would exist in the absence of Western investment? How much accountability would there be if locally owned factories were the only choices? Without Western investment, there would be much less opportunity and much less scrutiny of labor and environmental practices.

Globalization has brought greater accountability by assigning globally recognizable brand names to otherwise anonymous, small-scale production and assembly operations. Brands have the most to lose from the discovery of any unscrupulous practices, so the incentives are aligned with the goals of development. An important lesson of capitalism and markets is that corporate behavior meeting with consumers’ disapproval gets punished and corrected.

Unfortunately, a lesson that too many on the Left fail to heed is that capitalism and trade are making life much better for people around the world. Calling globalization a “race to the bottom” may make for a hip bumper sticker, but it has no basis in reality.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

Two papers were announced this week that sought to examine the sources of bias in the scientific literature. They could not be more starkly opposed.

First off is a doozy of a new paper by Stephan Lewandowsky, Naomi Oreskes, and colleagues complaining that skeptical viewpoints are disproportionately influencing the science of climate change. Recall that Lewandowsky and Oreskes are quixotic climate change denial-slayers—conspiracy theorists of somewhat ill repute.

According to a story in Science Daily (the Lewandowsky et al. paper was not available at the time of this writing), Lewandowsky and Oreskes argue that:

Climate change denial in public discourse may encourage climate scientists to over-emphasize scientific uncertainty and is also affecting how they themselves speak – and perhaps even think – about their own research.

Lewandowsky and Oreskes fret:

The idea that ‘global warming has stopped’ has been promoted in contrarian blogs and media articles for many years, and ultimately the idea of a ‘pause’ or ‘hiatus’ has become ensconced in the scientific literature, including in the latest assessment report of the Intergovernmental Panel on Climate Change (IPCC).

The Science Daily article continues:

Recent warming has been slower than the long term trend, but this fluctuation differs little from past fluctuations in warming rate, including past periods of more rapid than average warming. Crucially, on previous occasions when decadal warming was particularly rapid, the scientific community did not give short-term climate variability the attention it has now received, when decadal warming was slower. During earlier rapid warming there was no additional research effort directed at explaining ‘catastrophic’ warming. By contrast, the recent modest decrease in the rate of warming has elicited numerous articles and special issues of leading journals.

This asymmetry in response to fluctuations in the decadal warming trend likely reflects what the study’s authors call the ‘seepage’ of contrarian claims into scientific work.

And according to Lewandowsky, this is a problem because:

“It seems reasonable to conclude that the pressure of climate contrarians has contributed, at least to some degree, to scientists re-examining their own theory, data and models, even though all of them permit – indeed, expect – changes in the rate of warming over any arbitrarily chosen period.”

So why might scientists be affected by contrarian public discourse? The study argues that three recognised psychological mechanisms are at work: ‘stereotype threat’, ‘pluralistic ignorance’ and the ‘third-person effect’.

‘Stereotype threat’ refers to the emotional and behavioural responses when a person is reminded of an adverse stereotype against a group to which they belong. Thus, when scientists are stereotyped as ‘alarmists’, a predicted response would be for them to try to avoid seeming alarmist by downplaying the degree of threat. Several studies have indeed shown that scientists tend to avoid highlighting risks, lest they be seen as ‘alarmist’.

‘Pluralistic ignorance’ describes the phenomenon which arises when a minority opinion is given disproportionate prominence in public debate, resulting in the majority of people incorrectly assuming their opinion is marginalised. Thus, a public discourse that asserts that the IPCC has exaggerated the threat of climate change may cause scientists who disagree to think their views are in the minority, and they may therefore feel inhibited from speaking out in public.

Research shows that people generally believe that persuasive communications exert a stronger effect on others than on themselves: this is known as the ‘third-person effect’. However, in actual fact, people tend to be more affected by persuasive messages than they think. This suggests the scientific community may be susceptible to arguments against climate change even when they know them to be false.

We humbly assert that Lewandowsky, Oreskes, and colleagues have this completely backwards.

When global warming was occurring faster than climate models expected during the 1990s, there was little effort by the mainstream climate science community to look into why, despite plenty of skeptic voices (such as our own) pointing to the influence of natural variability. Instead, headlines proclaimed “Global warming worse than expected,” which fueled the human-caused climate change hysteria (favored by the 1990s White House) and helped build the push to regulate greenhouse gas emissions from fossil fuels. But since the late 1990s, there has been no statistically significant warming trend in the highly cited HadCRUT4 temperature record, and both the RSS and UAH satellite records are now in their 21st consecutive year without a significant trend. This behavior contrasted with climate model projections and called their veracity into question. And it was these projections upon which rested the case for a dangerous human influence on the climate. Again, skeptic voices were raised in objection to the mainstream view of climate change and the need for government intervention. But this time, the skeptic voices were accompanied by data clearly showing that rather than proceeding “worse than expected,” climate change was actually proceeding at a quite modest pace.

It was only then, with the threat of losing support for actions to mitigate climate change—actions that a top U.N. climate official, Christiana Figueres, described as an effort “to intentionally transform the economic development model, for the first time in human history”—that the mainstream climate community started to pay attention and began investigating the “hiatus” or “pause”—the words so despised by Lewandowsky and Oreskes.

Through these research efforts, we have learned a lot about the role of natural variability in the broader climate system and how such variability impacts projections of human-caused climate change (such as through a better understanding of the equilibrium climate sensitivity—how much warming results from a doubling of the atmospheric carbon dioxide concentration).

In other words, science has been moved forward, propelled by folks who didn’t take the mainstream climate science at face value, and instead questioned it—i.e., Lewandowsky’s and Oreskes’ “deniers.”

The outcome of all of this is, in fact, the opposite of what Lewandowsky and Oreskes assert has occurred. Rather than “skeptic” ideas “seeping” into science and leading to a false narrative, skeptic ideas have spurred new research and therefore new knowledge. Such was not the case when skeptics were being shut out. The only difference between now and 20 years ago is that this time the existence of a profoundly inconvenient truth (a “hiatus” in global warming) gave public credence to the skeptics and forced the scientific consensus-keepers to take them seriously. Incontrovertible evidence that threatened to tear down the meme of climate alarmism clearly required some sort of response.

Science is biased not by the inclusion of skeptical voices, but rather by their exclusion.

In fact, this week, we announced the framework for an investigation into the existence of such bias.

We teamed with Dr. David Wojick to produce a Cato Working Paper titled “Is the Government Buying Science or Support? A Framework Analysis of Federal Funding-induced Biases,” in which we describe:

The purpose of this report is to provide a framework for doing research on the problem of bias in science, especially bias induced by Federal funding of research. In recent years the issue of bias in science has come under increasing scrutiny, including within the scientific community. Much of this scrutiny is focused on the potential for bias induced by the commercial funding of research. However, relatively little attention has been given to the potential role of Federal funding in fostering bias. The research question is clear: does biased funding skew research in a preferred direction, one that supports an agency mission, policy or paradigm?

An interested reader may want to review the fifteen bias-inducing scientific practices that we identify and compare them with the “three recognised psychological mechanisms” that Lewandowsky and Oreskes assert are at work to see which seem to make the most sense.

Essentially, our project seeks to determine if the dog is wagging the tail. Lewandowsky and Oreskes propose the tail is wagging the dog.

Hopefully, in the not too distant future, we’ll be able to report back what we find in our investigations. We’ll be surprised if we find that exclusionary practices drive science forward more efficiently than inclusive ones!

Reference

Lewandowsky, S., N. Oreskes, J. S. Risbey, B. R. Newell, and M. Smithson, 2015. Climate change denial and its effect on the scientific community. Global Environmental Change (in press).

Watching Robert Reich’s new video in which he endorses raising the minimum wage by $7.75 per hour – to $15 per hour – is painful.  It hurts to encounter such rapid-fire economic ignorance, even if the barrage lasts for only two minutes. 

Perhaps the most remarkable flaw in this video is Reich’s manner of addressing the bedrock economic objection to the minimum wage – namely, that the minimum wage prices some low-skilled workers out of jobs.  Ignoring supply-and-demand analysis (which depicts the correct common-sense understanding that the higher the minimum wage, the lower the quantity of unskilled workers that firms can profitably employ), Reich asserts that a higher minimum wage enables workers to spend more money on consumer goods which, in turn, prompts employers to hire more workers.  Reich apparently believes that his ability to describe and draw such a “virtuous circle” of increased spending and hiring is reason enough to dismiss the concerns of “scare-mongers” (his term) who worry that raising the price of unskilled labor makes such labor less attractive to employers.

Ignore (as Reich does) that any additional amounts paid in total to workers mean lower profits for firms or higher prices paid by consumers – and, thus, less spending elsewhere in the economy by people other than the higher-paid workers.

Ignore (as Reich does) the extraordinarily low probability that workers who are paid a higher minimum wage will spend all of their additional earnings on goods and services produced by minimum-wage workers. 

Ignore (as Reich does) the impossibility of making people richer simply by having them circulate amongst themselves a larger quantity of money.  (If Reich is correct that raising the minimum wage by $7.75 per hour will do nothing but enrich all low-wage workers to the tune of $7.75 per hour because workers will spend all of their additional earnings in ways that make it profitable for their employers to pay them an additional $7.75 per hour, then it can legitimately be asked: Why not raise the minimum wage to $150 per hour?  If higher minimum wages are fully returned to employers in the form of higher spending by workers as Reich theorizes, then there is no obvious limit to the amount by which government can hike the minimum wage before risking an increase in unemployment.)

Focus instead on Reich’s apparent complete ignorance of the important concept of the elasticity of demand for labor.  This concept refers to the responsiveness of employers to changes in wage rates.  It’s true that if employers’ demand for unskilled workers is “inelastic,” then a higher minimum wage would indeed put more money into the pockets of unskilled workers as a group.  The increased pay of workers who keep their jobs more than offsets the lower pay of workers who lose their jobs.  Workers as a group could then spend more in total.  But if employers’ demand for unskilled workers is “elastic,” then raising the minimum wage reduces, rather than increases, the amount of money in the pockets of unskilled workers as a group.  When the demand for labor is elastic, the higher pay of those workers fortunate enough to keep their jobs is more than offset by the lower pay of workers who lose their jobs.  So total spending by minimum-wage workers would likely fall, not rise.
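To make the arithmetic concrete, here is a minimal sketch – ours, not Reich’s – of how total pay to unskilled workers responds to the proposed hike under a constant-elasticity labor-demand curve. The elasticity values and the baseline employment figure are illustrative assumptions, not empirical estimates:

```python
# Hypothetical illustration: total pay to unskilled workers after the
# proposed $7.25 -> $15.00 minimum-wage hike, under a constant-elasticity
# labor-demand curve L = L0 * (w / w0) ** (-e).

def total_pay(w0: float, L0: float, w1: float, e: float) -> float:
    """Total wages paid at the new wage w1, given demand elasticity e."""
    L1 = L0 * (w1 / w0) ** (-e)  # employment falls as the wage rises
    return w1 * L1

w0, L0 = 7.25, 1_000_000  # stylized current wage and employment level
w1 = 15.00                # proposed minimum wage (a 107 percent increase)

for e in (0.5, 1.0, 1.5):  # inelastic, unit-elastic, and elastic demand
    change = total_pay(w0, L0, w1, e) / (w0 * L0) - 1
    print(f"elasticity {e}: total pay changes by {change:+.1%}")
```

Under these stylized numbers, inelastic demand (e = 0.5) raises the group’s total pay by roughly 44 percent, unit-elastic demand leaves it unchanged, and elastic demand (e = 1.5) cuts it by roughly 30 percent. The sign of the effect – the very thing Reich assumes away – turns entirely on the elasticity.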

By completely ignoring elasticity, Reich assumes his conclusion.  That is, he simply assumes that raising the minimum wage raises the total pay of unskilled workers (and, thereby, raises the total spending of such workers).  Yet whether or not raising the minimum wage has this effect is among the core issues in the debate over the merits of minimum-wage legislation.  Even if (contrary to fact) increased spending by unskilled workers were sufficient to bootstrap up the employment of such workers, raising the minimum wage might well reduce the total amount of money paid to unskilled workers and, thus, lower their spending.

So is employers’ demand for unskilled workers more likely to be elastic or inelastic?  The answer depends on how much the minimum wage is raised.  If it were raised by, say, only five percent, demand might be inelastic, causing relatively few workers to lose their jobs and, thus, the total take-home pay of unskilled workers as a group to rise.  But Reich calls for an increase in the minimum wage of 107 percent!  It’s impossible to believe that more than doubling the minimum wage would not cause a huge negative response by employers.  The contrary assumption – if it described reality – would mean that unskilled workers are today so underpaid (relative to their productivity) that their employers are reaping gigantic windfall profits off of such workers.  But the fact that we see increasing automation of low-skilled tasks, as well as continuing high rates of unemployment among teenagers and other unskilled workers, is solid evidence that the typical low-wage worker is not such a bountiful source of profit for his or her employer.

Reich’s video is infected, from start to finish, with too many other errors to count.  I hope that other sensible people will take the time to expose them all.

The Big Picture: Fight for $15 with Robert Reich

A new documentary by Cato Senior Fellow Johan Norberg, shown recently on PBS stations nationwide, is a non-political look at the reality of the world’s energy problems. “Energy questions are complicated, and there are always trade-offs,” Norberg notes. While bringing electricity to many remote villages in India and the Sahara causes an increase in carbon emissions, it also allows families to have refrigeration for their food, electricity to light their homes, and the time to develop their lives beyond working just to sustain themselves every day. “Don’t they deserve the same kinds of life-changing benefits that power has brought the West?” Norberg asks.

This program explains how ALL sources of energy have their attributes and drawbacks. It will take large amounts of low-cost power to fuel economic development in the third world, while also keeping up with growth in the developed world. There is no “perfect” source to meet these needs. Coal and oil make up a third of the current world energy supply, so while the infrastructure is in place and works fairly inexpensively, these fossil fuels are consistently tagged as “dirty.” Natural gas is abundant, clean, cheap, and easy to use, but the means of getting to it (fracking) is controversial. Nuclear power is one of the only large-scale alternatives to fossil fuels, but accidents like Chernobyl and Three Mile Island have made the public wary. Hydro power is clean and fairly cheap, but dams have been targeted by environmentalists for harming fish populations; and, Norberg notes, most good sources of hydropower are already being utilized to their full capacity, leaving little chance to expand this resource. Solar power is clean and abundant, but it doesn’t work when the sun doesn’t shine, and the infrastructure to capture it is expensive. Wind supplies only one percent of energy globally because, while it’s clean, it’s intermittent and doesn’t always come at the right velocity.

Norberg doesn’t make judgments, for the most part – except to say that top-down, government-imposed “solutions” to the world’s energy problems have not worked yet, and are highly unlikely to suddenly start working.

This is an excellent program for people who really want to understand the basics of world energy needs.  Watch it at Cato’s site here, and read more about the Free to Choose network here.

In a ruling certain to profoundly shape the ongoing debate over surveillance reform in Congress, the U.S. Court of Appeals for the Second Circuit today held that the National Security Agency’s indiscriminate collection of Americans’ telephone calling records exceeds the legal authority granted by the Patriot Act’s controversial section 215, which is set to expire at the end of this month.  Legislation to reform and constrain that authority, the USA Freedom Act, has drawn broad bipartisan support, but Senate Majority Leader Mitch McConnell has stubbornly pressed ahead with a bill to reauthorize §215 without any changes.  But the Second Circuit ruling gives even defenders of the NSA program powerful reasons to support reform.

McConnell and other reform opponents have consistently insisted, in defiance of overwhelming evidence, that the NSA program is an essential tool in the fight against terrorism, and that any reform would hinder efforts to keep Americans safe—a claim rejected even by the leaders of the intelligence community. (Talk about being more Catholic than the Pope!)  Now, however, a federal appellate court has clearly said that no amount of contortion can stretch the language of §215 into a justification for NSA’s massive database—which means it’s no longer clear that a simple reauthorization would preserve the program. Ironically, if McConnell is determined to salvage some version of this ineffective program, his best hope may now be… the USA Freedom Act!

The Freedom Act would, in line with the Second Circuit opinion, bar the use of §215 and related authorities to indiscriminately collect records in bulk, requiring that a “specific selection term,” like a phone number, be used to identify the records sought by the government.  It also, however, creates a separate streamlined process that would allow call records databases already retained by telephone companies to be rapidly searched and cross-referenced, allowing NSA to more quickly obtain the specific information it seeks about terror suspects and their associates without placing everyone’s phone records in the government’s hands.  If the Second Circuit’s ruling is upheld, NSA will likely have to cease bulk collection even if Congress does reauthorize §215.  That makes passage of the Freedom Act the best way to guarantee preservation of the rapid search capability McConnell seems to think is so important—though, of course, the government will retain the ability to obtain specific phone records (albeit less quickly) under either scenario.  With this ruling, in short, the arguments against reform have gone from feeble to completely unsustainable.

A few notable points from the ruling itself.  Echoing the reasoning of the Privacy and Civil Liberties Oversight Board’s extremely thorough report on §215, the Second Circuit rejected the tortured legal logic underpinning both the NSA telephone program and a now-defunct program that gathered international Internet metadata in bulk.  The government had persuaded the Foreign Intelligence Surveillance Court to interpret an authority to get records “relevant to an authorized investigation” as permitting collection of entire vast databases of information, the overwhelming majority of which are clearly not relevant to any investigation, on the premise that this allows NSA to later search for specific records that are relevant.  As the court noted, this not only defies common sense, but is wildly inconsistent with the way the standard of “relevance”—which governs subpoenas and court orders used in routine criminal investigations—has been interpreted for decades.  If every American’s phone records are “relevant” to counterterrorism investigations, after all, why wouldn’t those and other records be similarly “relevant” to investigations aiming to ferret out narcotics traffickers or fraudsters or tax cheats?  Past cases invoked by the government, in which courts have blessed relatively broad subpoenas under a standard of “relevance,” only underscore how unprecedented the NSA’s interpretation of that standard truly is—since even the broadest such subpoenas fall dramatically short of the indiscriminate, indefinite hoovering the agency is now engaged in.

The court also quickly dispatched arguments that the plaintiffs here lacked standing to challenge the NSA program.  In general, parties seeking to challenge government action must demonstrate they’ve been harmed in some concrete way—which presents a significant hurdle when the government operates behind a thick veil of secrecy.  Since documents disclosed to the press by Edward Snowden—and the government’s own subsequent admissions—leave little question that the plaintiffs’ phone records are indeed being obtained, however, there’s no need for a further showing that those records were subsequently reviewed or used against the plaintiffs.  That’s critical because advocates of broad surveillance powers have often sought to argue that the mere collection of information, even on a massive scale, does not raise privacy concerns—and that focus should instead be on whether the information is used appropriately.  The court here makes plain that the unauthorized collection of data—placing it in the control and discretion of the government—is itself a privacy harm.

Finally, the court repudiated the Foreign Intelligence Surveillance Court’s strained use of the doctrine of legislative ratification to bless the NSA program.  Under this theory—reasonable enough in most cases—when courts have interpreted some statutory language in a particular way, legislatures are presumed to incorporate that interpretation when they use similar language in subsequent laws.  The FISC reasoned that Congress had therefore effectively “ratified” the NSA telephone program, and the sweeping legal theory behind it, by repeatedly reauthorizing §215.  But as the court pointed out—somewhat more diplomatically—it’s absurd to apply that doctrine to surveillance programs and legal interpretations that were, until recently, secret even from many (if not most) members of Congress, let alone the general public.

While the court didn’t reach the crucial question of whether the program violates the Fourth Amendment, the ruling gives civil libertarians good reason to hope that a massive and egregious violation of every American’s privacy will finally come to an end.

The U.S. Court of Appeals for the Second Circuit has ruled that section 215 of the USA-PATRIOT Act never authorized the National Security Agency’s collection of all Americans’ phone calling records. It’s pleasing to see the opinion parallel arguments that Randy Barnett and I put forward over the last couple of years.

Two points from different parts of the opinion can help structure our thinking about constitutional protection for communications data and other digital information. The first: data is property, which can be unconstitutionally seized.

As cases like this often do, the decision spends much time on niceties like standing to sue. In that discussion—finding that the ACLU indeed has legal standing to challenge government collection of its calling data—the court parried the government’s argument that the ACLU suffers no offense until its data is searched.

“The Fourth Amendment protects against unreasonable searches and seizures,” the court emphasized. Data is a thing that can be owned, and when the government takes someone’s data, it is seized.

In this situation, the data is owned jointly by telecommunications companies and their customers. The companies hold it subject to obligations they owe their customers limiting what they can do with it. Think of covenants that run with land. These covenants run with data for the benefit of the customer.

Far later in the decision, on the other side of the substantive ruling that section 215 doesn’t authorize the NSA’s program, the court discusses the Supreme Court’s 2012 Jones decision. Jones found that attaching a GPS tracking device to a vehicle requires a warrant.

“[Jones] held that the operation was a search entitled to Fourth Amendment protection,” the Second Circuit says, “because the attachment of the GPS device constituted a technical trespass on the defendant’s vehicle.”

That’s the interpretation I’ve given to Jones, that it is best regarded as a seizure case. When government agents put a GPS device on a car, they converted the car to their purposes, in a small way, to transport their device. The car was not theirs to use this way.

The Supreme Court itself didn’t call this a seizure, but the essential element of what happened was the tiny seizure of the defendant’s car that occurred when agents put the device on it.

Data is property that can be seized. And even tiny seizures are subject to the constitutional requirement of reasonableness and a warrant. These gems from the Second Circuit’s opinion help show the way privacy can be protected through application of the Fourth Amendment in the digital age.

Food prices are (slightly) lower today than they were in 1961. Yes, that’s right. Adjusted for inflation, the United Nations’ Food and Agriculture Organization calculates, the food price index in 2015 stood at 131.2. It was 131.7 in 1961.

In the meantime, the world population has increased from 3.01 billion to 7.28 billion – a rise of nearly 4.3 billion, or 142 percent.

If you are Paul Ehrlich, Lester Brown, William and Paul Paddock, Garrett Hardin, Rajiv Gandhi, or any of countless other followers of Reverend Malthus, this should NOT be happening. But it is. Human beings are intelligent animals. Unlike rabbits, which overbreed when food is plentiful and die out when it is not, humans innovate their way out of scarcity.

So, happy Thursday to you all.

Common Core is either meaningless or antithetical to a free and pluralistic society.

That’s the key conundrum that Professor Jay P. Greene, chair of the Department of Education Reform at the University of Arkansas, identified yesterday during his testimony before the Arkansas Council on Common Core Review, which is currently considering whether to keep, modify, or scrap the standards:

Because standards are about values, their content is not merely a technical issue that can be determined by scientific methods. There is no technically correct set of standards, just as there is no technically correct political party or religion. Reasonable people have legitimate differences of opinion about what they want their children taught. A fundamental problem with national standards efforts, like Common Core, is that they are attempting to impose a single vision of a proper education on a large and diverse country with differing views.

National standards can try to produce uniformity out of diversity with some combination of two approaches. They can promote standards that are so bland and ambiguous as to be inoffensive to almost everyone. Or they can force their particular vision on those who believe differently. Either way, national standards, like Common Core, are inappropriate and likely to be ineffective. If national standards embrace a vague consensus, then they make no difference since almost everyone already believes them and is already working toward them. If, on the other hand, national standards attempt to impose their particular vision of a proper education on those with differing visions, then national standards are oppressive and likely to face high levels of resistance and non-compliance. So, national standards are doomed to be either unnecessary or illiberal. Either way, they are wrong. [emphasis added]

Supporters of Common Core clearly hope it does bend educators to their will – that it will induce “instructional shifts” in our nation’s classrooms – but as Greene points out, for Common Core to be more than “just a bunch of words in a document,” it needs some sort of mechanism to coerce schools and educators into changing their practice to align with the Core. Prominent backers of Common Core have long promoted a “tripod” of standards, tests, and “accountability” measures – i.e., rewards or (more likely) punishments tied to performance on those tests.

And that brings us to the second conundrum Greene identified: either a combination of frustrated educators and parents will neuter the “accountability” measures (enter the opt-out movement), or those measures will create perverse incentives that could warp the education system in ways that even Common Core supporters wouldn’t like:

The problem with trying to use PARCC or Smarter Balanced tests to drive Common Core changes is that it almost certainly requires more coercion than is politically possible and would be undesirable even if it could be accomplished. If Arkansas tries to use the PARCC test to impose strong enough sanctions on schools and educators to drive changes in their practice, we will witness a well-organized and effective counter-attack from educators and sympathetic parents who will likely neuter those sanctions. If, on the other hand, the consequences of PARCC are roughly the equivalent of double secret probation in the movie, Animal House, then no one has to change practice to align with the new standards.

And even if by some political miracle the new PARCC test could be used to impose tough sanctions on schools and educators who failed to comply with Common Core, it’s a really bad idea to try to run school systems with a test. All sorts of bad things happen when maximizing performance on standardized tests becomes the governing principle of schools. Schools and educators are likely to narrow the curriculum by focusing on tested subjects at the expense of untested ones. If we care at all about the Arts, History, and Science we should oppose trying to run schools with math and ELA tests. And within tested subjects schools and educators are likely to focus narrowly on tested items at the expense of a more complete understanding of math and English.

So if national standards don’t work, does that mean abandoning testing and accountability entirely? Not at all. As Greene concludes:

The purpose of PARCC is to drive changes in educator behavior in ways that are desired by Common Core. But we should not be using tests aligned with a set of standards to coerce schools and educators to change their practice. What we really need from standardized testing is just information about how our students are performing. This can be accomplished at much lower cost by just buying a nationally-normed test off of the shelf. And lower stakes tests that are primarily about information rather than coercion will produce much less harmful narrowing of the curriculum.

I would add that opposing uniform, government-imposed standards does not mean opposing all standards. Rather, it means leaving space for competing standards from which schools and parents can choose. There is no One Best Way to educate or to measure educational progress, so a top-down accountability system amounts to hubristic folly. Instead, we should employ the market’s “bottom-up channeling of knowledge” that Yuval Levin so thoughtfully described in a recent essay:

… Put simply, it is a process that involves three general steps, all grounded in humility: experimentation, evaluation, and evolution.

Markets are ideally suited to following these steps. They offer entrepreneurs and businesses a huge incentive to try new ways of doing things (experimentation); the people directly affected decide which ways they like best (evaluation); and those consumer responses inform which ways are kept and which are left behind (evolution).

This three-step process is at work well beyond the bounds of explicitly economic activity. It is how our culture learns and evolves, how norms and habits form, and how society as a general matter “decides” what to keep and what to change. It is an exceedingly effective way to balance stability with improvement, continuity with alteration, tradition with dynamism. It involves conservation of the core with experimentation at the margins in an effort to attain the best of both.

Supporters of Common Core are right to lament a broken system that produces mediocre results on average and acts as a slaughterhouse of dreams at worst. But they have misdiagnosed the problem, and therefore propose the wrong solution. The problem isn’t that 50 states had 50 different sets of standards, but rather that a government-run schooling system lacks the ability to engage in the experimentation, end-user evaluation, and consumer-driven evolution that have produced great advances and increased productivity in other sectors. The solution, therefore, is not to grant more power to bureaucrats to remake our education system from the top down, but to support policies that empower parents to remake it from the bottom up.

The British luxury passenger liner RMS Lusitania was torpedoed a century ago. The sinking was deemed an atrocity of war and encouraged American intervention in World War I.

But the ship was carrying munitions through a war zone and left unprotected by the Royal Navy. The “Great War” was a thoroughly modern conflict, enshrouded in government lies. We see similar deceptions today.

World War I was a mindless imperial slugfest triggered by an act of state terrorism by Serbian authorities. Contending alliances acted as transmission belts of war. Nearly 20 million died in the resulting military avalanche.

America’s Woodrow Wilson initially declared neutrality, though he in fact leaned sharply toward the motley “Entente.” The German-led Central Powers were no prize. However, the British grouping included a terrorist state, an anti-Semitic despotism, a ruthless imperial power, and a militaristic colonial republic.

Britain was the best of a bad lot, but it ruled much of the globe without the consent of those “governed.” This clash of empires was no “war for democracy” as often characterized.

London ignored the traditional rules of war when imposing a starvation blockade on Germany and neutrals supplying the Germans. Explained Winston Churchill, First Lord of the Admiralty, Britain’s policy was to “starve the whole population—men, women, and children, old and young, wounded and sound—into submission.”

Since Berlin lacked the warships necessary to break Britain’s naval cordon sanitaire, Germany could retaliate only with surface raiders, which were vulnerable to London’s globe-spanning navy, and submarines. U-boats were more effective, but were unable to play by the normal rules of war and stop and search suspect vessels.

The British Admiralty armed some passenger liners and cargo ships, and ordered captains to fire on or ram any submarines that surfaced. Britain also misused neutral flags to shelter its ships. Thus, the U-boats were forced to torpedo allied and some neutral vessels, sending guilty and innocent alike to the ocean’s bottom.

However, Churchill encouraged the voyages. The week before the Lusitania’s sinking he explained that it was “most important to attract neutral shipping to our shores, in the hope especially of embroiling the United States with Germany.”

Wilson complained about the British blockade, but never threatened the bilateral relationship. Washington took a very different attitude toward the U-boat campaign.

The Imperial German government sponsored newspaper ads warning Americans against traveling on British liners, but that didn’t stop the foolhardy from booking passage. Off Ireland’s coast the Lusitania went down after a single torpedo hit; the coup de grâce apparently was a second explosion of the ship’s cargo of munitions. The dead included 128 Americans.

There was a political firestorm in the U.S., but the flames subsided short of Churchill’s desired declaration of war. Still, the president demanded “strict accountability” for the German U-boat campaign.

His position was frankly absurd: Americans should be able to safely travel on armed vessels of a belligerent power carrying munitions through a war zone. The president eventually issued a de facto ultimatum which caused Berlin to suspend attacks on liners and limit attacks on neutral vessels.

As the war dragged on, however, Berlin tired of placating Washington. In January 1917 the Kaiser approved the resumption of unrestricted submarine warfare. But the effort could not redress Germany’s continental military disadvantages.

After the conflict ended the egotistical, vainglorious Wilson was outmaneuvered by cynical European leaders. The Versailles “peace” treaty turned out to be but a generational truce during which the participants prepared for another round of war.

Today America’s unofficial war lobby routinely clamors for Washington to bomb, invade, and occupy other lands. As I wrote on Forbes, “On the centennial of the Lusitania’s demise Americans should remember the importance of just saying no. Now as then Americans need a president and Congress that believe war to be a last resort for use only when necessary to protect this nation, its people, liberties, and future.”

Prime Minister Shinzo Abe’s trip to Washington demonstrated that Japan remains America’s number one Asian ally. Unfortunately, the relationship increases the likelihood of a confrontation between the United States and China.

Japan’s international role has been sharply limited since World War II. During Prime Minister Abe’s visit, the two governments released new “Guidelines for Japan-U.S. Defense Cooperation.” The document clearly sets America against China.

First, the rewrite targets China. Japan’s greatest security concern is the ongoing Senkaku/Diaoyu dispute and Tokyo had pushed hard for an explicit U.S. guarantee for the unpopulated rocks. Second, Japan’s promise to do more means little; the document stated that it created no “legal rights or obligations.” Tokyo will remain reluctant to act outside of core Japanese interests.

Third, though the new rules remove geographical limits from Japanese operations, most of Japan’s new international responsibilities appeared to be what Prime Minister Abe called “human security.” In his speech to Congress, the prime minister mostly cited humanitarian and peacekeeping operations as examples of his nation’s new duties.

Moreover, the guidelines indicate that the SDF’s military involvement will be “from the rear and not on offensive operations,” noted analysts at the Center for Strategic and International Studies. Defense Minister Gen Nakatani cited “ship inspection” as an example of helping America’s defense.

Fourth, to the extent force is involved, Japan mostly promises to help the United States defend Japan. For instance, Tokyo cited the fact that Japanese vessels now could assist U.S. ships if the latter were attacked while on a joint patrol.

This should be inherent to any alliance, but Narushige Michishita, at Tokyo’s National Graduate Institute for Policy Studies, noted that “technically” it remains impossible for Japanese forces to defend even a U.S. vessel in a Japanese flotilla “when an attack on that ship does not directly or will not directly threaten Japan’s security.” That means a situation which “threatens Japan’s survival and poses a clear danger to overturn fundamentally its people’s right to life, liberty, and pursuit of happiness, to ensure Japan’s survival, and to protect its people.”

In contrast, the revised guidelines begin with an affirmation that “The United States will continue to extend deterrence to Japan through the full range of capabilities, including U.S. nuclear forces. The United States also will continue to forward deploy combat-ready forces in the Asia-Pacific region and maintain the ability to reinforce those forces rapidly.” This means more and newer weapons.

Fifth, as I wrote in China-U.S. Focus, “America’s burden will grow. Tokyo’s military expenditures have been flat for years, but now Japan plans on devoting more resources to non-combat activities. That will leave less for defense against what the Japanese government sees as the greatest threat, the PRC—which continues to hike military outlays. Washington will be expected to fill the ever widening gap.”

Sixth, the new rules build on the Obama administration’s explicit promise to defend Tokyo’s contested territorial claims, most importantly the Senkakus/Diaoyus. U.S. forces will be drawn into the islands’ defense.

According to the document, “If the need arises, the Self-Defense Forces will conduct operations to retake an island.” The SDF would, of course, expect American support. Protecting Tokyo’s claims also encourages the Japanese government to be needlessly provocative.

Japanese and U.S. authorities also are discussing mounting joint air patrols to the edge of the East China Sea and into the South China Sea. In the latter, Tokyo is working with other countries, including Indonesia, the Philippines, and Vietnam. Thus, a U.S. plane could find itself challenging Chinese aircraft in support of a third nation’s disputed territorial claim.

President Obama argued that “we don’t think that a strong U.S.-Japan alliance should be seen as a provocation,” but it will be if directed against the PRC. Unfortunately, the new guidelines make it more likely that Washington will find itself confronting China over issues of limited interest to America.

Judging from the November electoral tsunami, whose epicenter was in coal country, people aren’t taking very kindly to the persistent exaggeration of mundane weather and climate stories that ultimately leads to, among other things, unemployment and increased cost of living. In response, we’ve decided to initiate “The Spin Cycles” based upon just how much the latest weather or climate story, policy pronouncement, or simply poo-bah blather spins the truth.

Like the popular and useful Fujita tornado ratings (“F1” through “F5”), the oft-quoted Saffir-Simpson hurricane severity index (Category 1 through Category 5), and in the spirit of the Washington Post’s iconic “Pinocchios,” we hereby initiate the “Spin Cycle,” using a scale of Delicates through Permanent Press. Our image will be the universal vortex symbol for tropical cyclones, intimately familiar to anyone who has ever been alive during hurricane season, being spun by a washing machine. Here’s how the ratings stack up, with apologies to the late Ted Fujita and Bob Simpson, two of the true heroes of atmospheric science with regard to the number of lives their research ultimately saved.

And so, here we have it:

Delicates. An accidentally misleading statement by a person operating outside their area of expertise. Little harm, little foul. One spin cycle.

Slightly Soiled.  Over-the-top rhetoric, such as the common meme that some obnoxious weather element is new, thanks to anthropogenic global warming, when it’s in fact as old as the earth. An example would be the president’s science advisor John Holdren’s claim that the “polar vortex,” a circumpolar westerly wind that separates polar cold from tropical warmth, is a man-made phenomenon. It waves and wiggles all over the place, sometimes over your head, thanks to the fact that the atmosphere behaves like a fluid, complete with waves, eddies, and stalls. It’s been around since the earth first acquired an atmosphere and rotation, somewhere around the beginning of the Book of Genesis. Two spin cycles.

Normal Wash. Using government authority to create public panic regarding climate change, particularly those omitting benefits, in an effort to advance policy. For example, the 2014 National Climate Assessment. Three spin cycles.

Heavy Duty. Government regulations or treaties claiming to save the planet from certain destruction, but which actually accomplish nothing. Can also apply to important UN climate confabs, such as Copenhagen 2009 (or, quite likely, the upcoming 2015 Paris Summit), that are predicted to result in a massive, sweeping, and world-saving new treaty, followed by self-congratulatory back-patting. Four spin cycles.

Permanent Press. Purposefully misleading commentary on science which will hinder actual scientific debate and credibility for generations to come, especially those with negative policy outcomes. Linking extreme weather events to climate change, the perpetually impending demise of the polar bears, the Federal government attempting to convince you to sell your beachfront property before it’s submerged. Five spin cycles.

 

INAUGURAL SPIN CYCLE AWARD 

DOES MERCURY FROM POWER PLANTS MAKE US STUPID?

In State of Michigan et al. v. Environmental Protection Agency, the EPA contends that the costs to reduce and then eliminate mercury emissions from power plants are justified because current emissions are lowering I.Q. scores. The result will be to eliminate all coal-fired generation of electricity, [double entendre ahead] currently around 40 percent of our total electric power.

You remember IQ (“Intelligence Quotient”) tests, right? Oh, well, maybe you don’t, because public schools can’t use them anymore. Whether they measure intelligence (whatever that is) or not, not all socioeconomic groups score the same, so they can’t be fair (whatever that means). But they do predict, within certain humongous error ranges, lifetime income—which isn’t fair, either.

Which means, according to the EPA, that power plant emissions of mercury are harming…whom?

So—we can’t make this stuff up—the EPA invented a population of 240,000 nonexistent women who fish day in and day out in order to feed themselves. We won’t get into the fact that, given the cost of, say, a can of mackerel, these folks are paying themselves far, far below the minimum wage. No, instead, they eat—or should we say gorge—up to 300 pounds of hand-caught freshwater fish per year. And then they go home and do the sort of things that lead to children, whose IQ scores are lowered thanks to the mercury in those fish.

Never mind that U.S. power plants emit less than 0.7 percent of the total mercury input to the atmosphere each year, or that the total U.S. contribution is a mere two percent, or that East Asia (mainly China) contributes around 36 percent.  Given that mercury can stay in the atmosphere for weeks before it is deposited on the surface, East Asia’s contribution to our mercury deposition is huge compared to what comes from our homegrown power plants.

The average IQ score is 100. The measurement error for practical purposes is +/- 5 points (one standard deviation). That means if you score 140, your true score is likely between 135 (“highly intelligent”) and 145 (“genius”), or about the average score of our readers.

Those hard facts weren’t enough to keep the EPA from confidently stating that the average IQ reduction in the hypothetical children of the hypothetical fish-obsessed women will be (drum roll!) 0.00209 IQ points. In other words, the average IQ of these sorry tots will read 99.997, with a real value of between 94.997 and 104.997.

Nowhere did the EPA say that avoiding such an IQ loss could impact future earnings, but it still proceeded to translate that 0.00209 IQ points into a benefit of up to $6,000,000 per year across 240,000 hypothetical kids.

One gets the impression that people who think they can find a needle of precisely 0.00209 IQ points in a haystack of 10.0000—two-hundredths of one percent of the error range—might not score too high on such a test. Of course, since they are most likely government bureaucrats making around $115K per year, that shows how good IQ tests are, after all.
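For the record, the arithmetic is easy to check. Here is a minimal sketch using only the figures quoted above (the variable names are ours):

```python
# Sanity-check the EPA figures quoted above (all inputs come from the text).
iq_loss = 0.00209         # claimed average IQ reduction per child
error_band = 10.0         # +/- 5 points of measurement error, 10 points total
children = 240_000        # hypothetical exposed children
annual_value = 6_000_000  # claimed annual benefit, in dollars

print(f"IQ loss as a share of the error band: {iq_loss / error_band:.4%}")
print(f"Implied benefit per child per year: ${annual_value / children:.2f}")
```

The claimed effect works out to about 0.02 percent of the test’s own error band, and the claimed benefit to about $25 per hypothetical child per year.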

For “thinking” that we can measure 0.00209 IQ points—and, for that, shutting down power plants that produce 40 percent of our juice—the inaugural recipient of the Spin Cycle award, the U.S. Environmental Protection Agency, gets five spin cycles, or Permanent Press.

The United States is effectively bankrupt. Economist Laurence Kotlikoff figures the United States faces unfunded liabilities in excess of $200 trillion. Only transforming or eliminating the entitlement programs behind those liabilities would save the republic.

The Left likes to paint conservatives as radical destroyers of the welfare state. Instead, some on the Right have made peace with expansive government.

Particularly notable is the movement of “reform conservatism,” or the so-called “reformicons” who, noted Reason’s Shikha Dalmia, “have ended up with a mix of old and new liberal ideas that thoroughly scale back the right’s long-running commitment to free markets and limited government.”

The point is not that attempts to improve the functioning of bloated, inefficient programs are bad. But they are inadequate. Yes, government costs too much. Government also does too much.

The worst “reform conservatism” idea is to manipulate the state to support a particular “conservative” vision. For instance, Dalmia points out that some reformicons want to use the state to strengthen institutions which they favor. 

Dalmia noted that Utah’s Sen. Mike Lee has criticized conservatives who “have abandoned words like ‘together,’ ‘compassion,’ and ‘community’.” Although he warned against overreliance on the state, he still wants to use it for his own ends.

Reformicon intellectuals and politicians argue for an expanded Earned Income Tax Credit for singles, increased deductions for dependents, and tax credits for parents who stay at home. Some reformicons want more taxes on the wealthy, new employee-oriented public transportation, and a preference for borrowing over deficit reduction.

Senators Lee and Marco Rubio have introduced the “Economic Growth and Family Fairness Tax Reform Plan.” It offers some corporate and individual tax reductions but raises the rates on most everyone by lowering tax thresholds. The bill also increases the child credit even for the well-to-do.

Alas, this differs little from liberal social engineering. As Dalmia put it:  “Broad-based, neutral tax cuts to stimulate growth are out, markets are optional tools, the welfare state is cool, redistributive social engineering is the way forward, and class warfare is in.”

Reformicons don’t so much disagree as argue that they can do better than liberals. For instance, Yuval Levin of National Affairs contended that his movement relies on “experimentation and evaluation [and] will keep those programs that work and dump those that fail.”

Politics drives reform conservatism. Henry Olsen of the Ethics and Public Policy Center made a fulsome pitch for conservatives to embrace social benefits for “their” voters. After all, “Many of those working-class voters are located precisely in the two places a Republican presidential candidate needs to carry to win the White House.”

Of course, no one should want policies that don’t work. But that doesn’t address the most important question: is the end itself justified? Efficient income redistribution doesn’t make the process morally right, only less wasteful.

And such measures can create new problems. For instance, author Amity Shlaes and Matthew Denhart of the Calvin Coolidge Presidential Foundation warned the Rubio-Lee plan would generate resentment by pitting individuals against families. It also would sacrifice opportunities to spur economic growth by emphasizing group privileges over rate reductions for all.

Big issues are at stake. The current economic system isn’t working for all. Rubio asked the right question: “How can we get to the point where we’re creating more middle-income and higher-income jobs, and how do we help people acquire the skills they need?”

As I point out in the Freeman:  “However, social engineering, even conservative social engineering, is not the answer. The starting point for job creation remains what it always has been, making it easier to create businesses and jobs.” For most issues, the principal answer will come outside of politics. As Sen. Lee recognized, “Collective action doesn’t only—or even usually—mean government action.”

Some reformicon ideas might make some conservatives appear more presentable to the public. But reform conservatism fails to provide an answer to the most important problems facing America. Government is not just inefficient. It is too big and does too much.

It’s been a nice few weeks for civil liberties in Montana.  On the heels of the nation’s most comprehensive restrictions on police militarization, Montana Governor Steve Bullock (D) has signed a bill reforming civil asset forfeiture in the state.

HB463 requires a criminal conviction before seized property can be forfeited, requires that seized property be shown by “clear and convincing evidence” to be connected to the criminal activity, and bolsters the defenses for innocent owners by shifting the burden of proof to the government.

The effort was spearheaded by State Representative Kelly McCarthy (D), who credited the work of the Institute for Justice and other civil liberties organizations for bringing the abuses of civil asset forfeiture to light.

McCarthy told the Daily Caller News Foundation:

“After looking into Montana laws and working with the Institute for Justice, we found that our laws provided no greater property rights protections than those states who were identified with rampant abuse (Texas, Kentucky, Pennsylvania, Virginia, etc.).

From that time I began meeting with stakeholders and working on the bill.”

Montana is now the second state in less than a month to heavily restrict state-level civil asset forfeiture, following New Mexico, though the Montana reforms are less robust than those New Mexico passed last month.

Unlike the New Mexico law, the Montana law does not restrict law enforcement agencies’ exploitation of federal forfeiture laws that maintain the lower burdens of proof and the civil proceedings that Montana now restricts at the state level. The bill also allows Montana law enforcement to keep the proceeds of their seizures, whereas the New Mexico law requires that such proceeds be deposited into the general fund, thus depriving police of any profit motive for initiating seizures.

That said, the Montana law represents substantial progress for a state that the Institute for Justice labeled “terrible” on civil asset forfeiture, and all those who worked for its passage should be commended for striking a blow in favor of due process and property rights.

That a traditionally red state like Montana with a Democratic governor and a traditionally blue state like New Mexico with a Republican governor have both passed substantial civil asset forfeiture reforms this year is a testament to the bipartisan consensus building around restricting this inherently abusive practice.

What happens when the population of K-12 students grows faster than the government is able to build school buildings? Las Vegas is finding out the hard way:

Las Vegas is back, baby. After getting slammed by the Great Recession, the city today is seeing rising home sales, solid job growth and a record number of visitors in 2014.

But the economic rebound has exacerbated the city’s severe school overcrowding and left school administrators, lawmakers and parents scrambling.

This elementary school was built to serve a maximum of 780 students. Today it serves 1,230 — and enrollment is growing.

Forbuss Elementary is hardly alone. The crowding is so bad here in the Clark County School District that 24 schools will soon run on year-round schedules.

Forbuss already is. One of five sections is always on break to make room. Scores of other schools are on staggered schedules. More than 21,000 Clark County students are taking some online classes, in large part because of space strains. Nearly 700 kids in the district take all of their classes online.

“It’s pretty rough some days. I’m in a small portable with 33 students,” says Sarah Sunnasy. She teaches fifth grade at Bertha Ronzone Elementary School, a high-poverty school that is nearly 90 percent over capacity. “We tend to run into each other a lot. Trying to meet individual needs when you have that many kids with such a wide range of ability levels is hard. We do the best we can with what we have,” she says.

At Forbuss Elementary there are 16 trailer classrooms — the school prefers the term “portables” — parked in the outdoor recess area, eating away at playground space.

There’s also a “portable” bathroom and portable lunchroom. “It’s warmer in the big school,” a little girl tells me. “These get cold in winter.”

“You have to make do,” says Principal Shawn Paquette. “You get creative.”

“Our school is so overcrowded, that, you know, everybody’s gotta pitch in,” says school support staffer Ruby Crabtree. “We don’t have enough people.”

The Nevada legislature recently approved funding to build new schools and renovate old ones, but as NPR notes, the “handful of new schools won’t be finished for at least two years.” In that time, the Las Vegas school district is expected to experience 1 percent enrollment growth, or about 3,000 to 4,000 students, so the district will need “at least two more elementary schools every year.”

Instead of herding children into crowded trailers (“portables”), Nevada should consider giving students and their families the option of attending private schools. As education policy guru Matthew Ladner has pointed out repeatedly, school choice programs can serve as a pressure-release valve in areas experiencing rapid growth, particularly where the elderly population is also growing, further straining public resources:

The 76,000,000 strong Baby Boom generation is already moving into retirement. Every day between now and the year 2030, 10,000 Americans reach retirement age. Every state will be much older than today, and the vast majority of states will have a larger portion of elderly than Florida has today – some much larger. 

As the Baby Boomers retire, many will also be sending their grandchildren off to school. The Census Bureau projects many states will face a simultaneous increase in school-aged and elderly populations. A fierce battle between advocates of public spending on health and public education looms. If economists have correctly described the relationship between age demography and economic growth, tax dollars may prove scarce, exacerbating the problem.

Let’s be clear about the improvement needed: in anticipation of the crisis ahead, we need a system of vastly improved learning outcomes at a lower overall cost per student. In other words we need to improve both the academic and cost effectiveness of our education delivery system.

Fortunately, we already know how to improve learning outcomes at a lower cost per student: school choice.

Last month, Nevada adopted a scholarship tax credit law, but sadly the available credits are so limited that the law will barely relieve any pressure at all. As I explained recently:

The total amount of tax credits available is limited to only $5 million in the first year, or about 0.14 percent of statewide district school expenditures. Following Arizona, Florida, and New Hampshire, Nevada lawmakers wisely included an “escalator clause” allowing the total amount of credits to grow by 10 percent each year. However, assuming an average scholarship of $5,000 (significantly lower than the law allows), there would only be sufficient funds for 1,000 students in the first year, which is the equivalent of about 0.2 percent of statewide district school enrollment. Even with the escalator clause, very few students will be able to receive scholarships without the legislature expanding the available credits.

This year, Nevada let the school choice camel get its nose whisker under the tent, but policymakers shouldn’t rely on the escalator clause alone for growth. Students crammed into overcrowded Las Vegas district schools need alternatives now; no child should have to stay in an overcrowded school simply because the district assigned her to it.

The BBC reports that Nancie Atwell of Maine has just won the million-dollar “Global Teacher Prize.” Congratulations Ms. Atwell! On the rare occasions such prizes are doled out, the reaction is universally celebratory. But is there really only one teacher in the world worth $1,000,000, and even then only once in a lifetime?

Here’s a radical thought: What if we organized education such that the top teachers could routinely make large sums of money “the old-fashioned way” (i.e., by earning it in a free and open marketplace)? In other fields, the people and institutions that best meet our needs attract more customers and thereby earn greater profits. Why have we structured our economy such that the best cell phone innovators can become rich, but not the best teachers? This seems not only deeply unfair but unwise as well.

Perhaps some people don’t believe it would be possible for educators to become wealthy in an open marketplace. Their negativity is contradicted by reality. In one of the few places where instruction is organized as a marketplace activity, Korea’s tutoring sector, one of the top tutors (Kim Ki-Hoon) has earned millions of dollars per year over the last decade. His secret: offering recorded lessons over the Internet at a reasonable price, and attracting over a hundred thousand students each year. His employment contract with his tutoring firm ensures that he receives a portion of the revenue he brings in–so even though his fees are reasonable, his earnings are large due to the vast number of students he reaches. And his success depends on his performance. In an interview with Amanda Ripley he observed: “The harder I work, the more I make…. I like that.” Is there any reason we shouldn’t like that, too?

As Ripley reports, this tutoring marketplace receives favorable reviews from students:

In a 2010 survey of 6,600 students at 116 high schools conducted by the Korean Educational Development Institute, Korean teenagers gave their hagwon [i.e., private tutoring] teachers higher scores across the board than their regular schoolteachers: Hagwon teachers were better prepared, more devoted to teaching and more respectful of students’ opinions, the teenagers said. Interestingly, the hagwon teachers rated best of all when it came to treating all students fairly, regardless of the students’ academic performance.

That is not to say that the Korean education system is without flaw. Indeed, the government-mandated college entrance testing system creates enormous pressure on students and skews families’ demands toward doing well on “the test,” rather than on fulfilling broader educational goals. This, of course, is not caused by the marketplace, but rather by the government mandate. The marketplace simply responds to families’ demands, whatever they happen to be. While many hagwons prepare students for the mandated college-entrance exam, there are also those teaching such things as swimming or calligraphy.

If we liberate educators, educational entrepreneurship will thrive. There are policies already in place in some states that could ensure universal access to such an educational marketplace.

In his groundbreaking work, Denationalisation of Money: the Argument Refined, F.A. Hayek proposed that open competition among private suppliers of irredeemable monies would favor the survival of those monies that earned a reputation for possessing a relatively stable purchasing power.

One of the main problems with Bitcoin has been its tremendous price instability: its volatility is about an order of magnitude greater than that of traditional financial assets, and this price instability is a serious deterrent to Bitcoin’s more widespread adoption as currency. So is there anything that can be done about this problem?

Let’s go back to basics. A key feature of the Bitcoin protocol is that the supply of bitcoins grows at a predetermined rate.[1] The Bitcoin price then depends on the demand for bitcoins: the higher the demand, the higher the price; the more volatile the demand, the more volatile the price. The fixed supply schedule also introduces a strong speculative element. To quote Robert Sams (2014: 1):

If a cryptocurrency system aims to be a general medium-of-exchange, deterministic coin supply is a bug rather than a feature… . Deterministic money supply combined with uncertain future money demand conspire to make the market price of a bitcoin a sort of prediction market [based] on its own future adoption.

To put it another way, the current price is indicative of expected future demand. Sams continues:

The problem is that high levels of volatility deter people from using coin as a medium of exchange [and] it might be conjectured that deterministic money supply rules are self-defeating.

One way to reduce such volatility is to introduce a feedback rule that adjusts supply in response to changes in demand. Such a rule could help reduce speculative demand and potentially lead to a cryptocurrency with a stable price.

Let’s consider a cryptocurrency that I shall call “coins,” which we can think of as a Bitcoin-type cryptocurrency but with an elastic supply schedule. Following Sams, if we are to stabilize its price, we want a supply rule that ensures that if the price rises (falls) by X% over some period, then the supply increases (decreases) by X% to return the price back toward its initial or target value. Suppose we measure a period as the length of time needed to validate n transactions blocks. For example, a period might be a day; if it takes approximately 10 minutes to validate each transactions block, as under the Bitcoin protocol, then the period would be the length of time needed to validate 144 transactions blocks. Sams posits the following supply rule:

(1a) Q_t = Q_{t-1} × (P_t / P_{t-1}),

(1b) ΔQ_t = Q_t − Q_{t-1}.

Here P_t is the coin price, Q_t is the coin supply at the end of period t, and ΔQ_t is the change in the coin supply over period t. There is a question as to how P_t is defined, but following Ferdinando Ametrano (2014a), let’s assume that P_t is defined in USD and that the target is P_t = $1. This assumed target provides a convenient starting point, and we can generalize it later to look at other price targets, such as those involving price indices. Indeed, we can also generalize it to targets specified in terms of other indices such as NGDP.
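To make the rule concrete, here is a minimal Python sketch of (1a) and (1b); the function name and the illustrative numbers are mine, not part of Sams’ proposal, and the price is simply taken as given:

    # A minimal sketch of supply rule (1a)-(1b): scale the supply by the ratio
    # of the observed price to last period's price. In practice the price
    # would have to come from some external market feed.
    def supply_update(prev_supply, prev_price, price):
        """Apply Q_t = Q_{t-1} * (P_t / P_{t-1}); return (Q_t, delta Q_t)."""
        new_supply = prev_supply * (price / prev_price)
        return new_supply, new_supply - prev_supply

    # Demand growth pushes the price 10% above the $1 target over one period,
    # so the rule mints 10% more coins to push the price back toward $1.
    q, dq = supply_update(prev_supply=1_000_000, prev_price=1.00, price=1.10)
    print(q, dq)  # about 1100000.0 and 100000.0: positive, so new coins are created

    # The price falls 10% below target, so the supply contracts by 10%.
    q, dq = supply_update(prev_supply=1_000_000, prev_price=1.00, price=0.90)
    print(q, dq)  # about 900000.0 and -100000.0: negative, so coins are withdrawn

Note that the rule only says how much the supply must change; it says nothing about whose coins change, which is the distribution question taken up next.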

Another issue is how the change in coin supply (ΔQ_t) is distributed. The point to note here is that there will be occasions when the coin supply needs to be reduced, and others when it needs to be raised, depending on whether the coin price has fallen or risen over the preceding period.

Ametrano proposes an elegant solution to this distribution problem, which he calls ‘Hayek Money.’ At the end of each period, the system should automatically reset the price back to the target value and simultaneously adjust the number of coins in each wallet by a factor of P_t/P_{t-1}. Instead of having k coins in a wallet that each increase or decrease in value by a factor of P_t/P_{t-1}, a wallet holder would thus have k × P_t/P_{t-1} coins in their wallet, but the value of each coin would be the same at the end of each period.
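As a rough illustration of the rebase mechanics (my own sketch, not Ametrano’s code; the wallet names and numbers are invented):

    # A minimal sketch of a 'Hayek Money' rebase: at the end of each period,
    # every wallet balance is scaled by P_t / P_{t-1}, after which each coin
    # is again quoted at the $1 target.
    def rebase_wallets(wallets, prev_price, price):
        factor = price / prev_price
        return {owner: balance * factor for owner, balance in wallets.items()}

    wallets = {"alice": 100.0, "bob": 250.0}
    # The price rose from $1.00 to $1.10 over the period: every wallet ends up
    # with 10% more coins, each again worth $1.00, so the dollar value of each
    # wallet is exactly what it was before the rebase.
    print(rebase_wallets(wallets, prev_price=1.00, price=1.10))
    # about {'alice': 110.0, 'bob': 275.0}, up to floating-point rounding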

This proposal would stabilize the coin price and achieve a stable unit of account. However, it would make no difference to the store-of-value performance of the currency: the value of the wallet would be just as volatile as it was before. To deal with this problem, both Ametrano (2014b) and Sams propose improvements based on an idea they call ‘Seigniorage Shares.’ These involve two types of claims on the system, coins and shares, with the latter used to support the price of the former via swaps of one for the other. Similar schemes have been proposed by Buterin (2014a),[2] Morini (2014),[3] and Iwamura et al. (2014), but I focus here on Seigniorage Shares as all these schemes are fairly similar.

The most straightforward version of Seigniorage Shares is that of Sams, and under my interpretation, this scheme would work as follows. If ΔQ_t is positive and new coins have to be created in the t-th period, Sams would have a coin auction[4] in which ΔQ_t coins would be created and swapped for shares, which would then be digitally destroyed by putting them into a burning blockchain wallet from which they could never be removed. Conversely, if ΔQ_t is negative, existing coins would be swapped for newly created shares, and the coins taken in would be digitally destroyed.
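Here is one way the swap logic might look in code (a sketch under my interpretation, using an assumed market price for shares in place of Sams’ auction; see footnote 4):

    # A minimal sketch of the Seigniorage Shares swaps described above. When
    # delta_coins > 0, new coins are issued in exchange for shares, which are
    # burned; when delta_coins < 0, new shares are issued in exchange for
    # coins, which are burned. The share price, quoted in coins, is a stand-in
    # for whatever the market would actually set.
    def seigniorage_swap(delta_coins, share_price_in_coins, coin_supply, share_supply):
        if delta_coins > 0:
            # Expansion: mint coins, retire shares of equal market value.
            share_supply -= delta_coins / share_price_in_coins
        else:
            # Contraction: retire coins, mint shares of equal market value.
            share_supply += -delta_coins / share_price_in_coins
        return coin_supply + delta_coins, share_supply

    # Expansion: delta Q_t = +100,000 coins, shares trading at 10 coins apiece.
    print(seigniorage_swap(100_000, 10.0, 1_000_000, 50_000))   # (1100000, 40000.0)
    # Contraction: delta Q_t = -110,000 coins.
    print(seigniorage_swap(-110_000, 10.0, 1_100_000, 40_000))  # (990000, 51000.0)

Either way, the coin supply moves by exactly ΔQ_t, while shareholders absorb the corresponding gain or loss.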

At the margin, and so long as there is no major shock, the system should work beautifully. After some periods, new coins would be created; after other periods, existing coins would be destroyed. But either way, at the end of each period, the Ametrano-style coin quantity adjustments would push the price of coins back to the target value of $1.

Rational expectations would then come into play to stabilize the price of coins during each period. If the price of coins were to go below $1 during any such period, it would be profitable to take a bullish position in coins, go long, and profit when the quantity adjustments at the end of the period pushed the price back up to $1. Conversely, if the price of coins were to go above $1 during that period, then it would be profitable to take a bear position and sell or short coins to reap a profit at the end of that period, when the quantity adjustments would push the price back down to $1.
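To attach illustrative numbers of my own: suppose the price dips to $0.95 during a period. A trader who buys 100 coins for $95 holds them while the end-of-period swaps retire other coins and push the price back to $1; the trader’s own 100 coins can then be sold for $100, a gain of a little over 5 percent. The prospect of such profits attracts buyers the moment the price slips, which is precisely what pulls the price back toward the peg before the adjustment even occurs.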

These self-fulfilling speculative forces, driven by rational expectations, would ensure that the price during each period would never deviate much from $1. They would also mean that the length of the period is not a critical parameter in the system. Doubling or halving the length of the period would make little difference to how the system would operate. One can also imagine that the period might be very short—even as short as the period needed to validate a single transactions block, which is less than a minute. In such a case, very frequent rebasings would ensure almost continuous stability of the coin price.

The take-home message here is that a well-designed cryptocurrency system can achieve its price-pegging target—provided that there is no major shock.

References

Ametrano, F.A. “Hayek Money: The Cryptocurrency Price Stability Solution.” August 19, 2014. (a)

Ametrano, F.A. “Price Stability Using Cryptocurrency Seigniorage Shares.” August 23, 2014. (b)

Buterin, V. “The Search for a Stable Cryptocurrency.” November 11, 2014. (a)

Buterin, V. “SchellingCoin: A Minimal-Trust Universal Data Feed.” March 28, 2014. (b)

Iwamura, M., Kitamura, Y., Matsumoto, T., and Saito, K. “Can We Stabilize the Price of a Cryptocurrency? Understanding the Design of Bitcoin and Its Potential to Compete with Central Bank Money.” October 25, 2014.

Morini, M. “Inv/Sav Wallets and the Role of Financial Intermediaries in a Digital Currency.” July 21, 2014.

Sams, R. “A Note on Cryptocurrency Stabilisation: Seigniorage Shares.” November 8, 2014.

[1] Strictly speaking, the supply of bitcoins is only deterministic when measured in block-time intervals. Measured in real time, there is a (typically) small randomness in how long it takes to validate each block. However, the impact of this randomness is negligible, especially over the longer term where the law of large numbers also comes into play.

[2] Buterin (2014a) examines three schemes that seek to stabilize the cryptocurrency price: BitAsset, the SchellingCoin (first proposed by Buterin (2014b)), and Seigniorage Shares. He concludes that each of these is vulnerable to fragility problems similar to those to be discussed in my next post.

[3] In the Morini system, participants would have a choice of Inv and Sav wallets, the former for investors in coins and the latter for savers who want coin-price security. The Sav wallets would be protected by the Inv wallets, and participants could choose a mix of the two to meet their risk-aversion preferences.

[4] In fact, Sams’ auction is needlessly complicated, and no auction is required at all. Since shares and coins would have well-defined market values under his system, it would suffice merely to have a rule to swap them as appropriate at going market prices, without any need to specify an auction mechanism.

[Cross-posted from Alt-M.org]

Free speech has been in the news a lot recently. And lately it seems that we’ve had an unusually vigorous crop of utility monsters: the sort of professional complainers whose feelings are all too easily bruised, and who therefore demand that the rights of others be curtailed.

In a climate like this, it’s important to distinguish the true heroes of free speech from the false ones. The latter are all too common. The key question to ask of public figures is simple: If you had all the power, how would you treat your opponents?

Meet Dutch politician Geert Wilders. He was a guest of honor at the recent Garland, Texas exhibition of cartoons of Mohammed, where two would-be terrorists armed with assault weapons were gunned down by a single heroic security guard armed only with a pistol. (Nice shooting, by the way.)

Wilders is now being hailed as a free-speech hero, at least in some circles. Unfortunately, he’s nothing of the kind. Besides criticizing Islam, Wilders has also repeatedly called for banning the Koran. The former is compatible with the principle of free speech. The latter is not.

A key move here is to distinguish the exercise of free speech from the principled defense of free speech. The two are not the same, as my colleague Adam Bates has ably pointed out.

Exercises of free speech can be completely one-sided. As an example, here’s me exercising my free speech: I happen to think Islam is a false religion. I have no belief whatsoever that Mohammed’s prophecies are true. They’re not even all that interesting. I mean, if you think the Bible is dull…well…have I got a book for you. I speak only for myself here, but I disagree with Islam. (And probably with your religion, too, because I’m a skeptic about all of them.) My saying so is an exercise of free speech. 

Defenses of free speech are different. Properly speaking, they must not be one-sided. A principled defense of free speech means giving your opponents in any particular issue the exact same rights that you would claim for yourself: If you would offend them with words, then they must be allowed to offend you with words, too. Say what you like about them, and they must be allowed to say what they like about you. 

No, we’re not all going to agree. And that’s actually the point: Given that agreement on so many issues is simply impossible in our modern, interconnected world, how shall we proceed? With violence and repression? Or with toleration, even for views that we find reprehensible? 

If you had all the power, how would you treat your opponents?

Mismanagement within the Department of Veterans Affairs (VA) is chronic. The agency mismanages its projects and its patients. Last year’s scandal at the Phoenix VA centered on allegations that veterans waited months for treatment while never being added to the official waiting lists. The VA Secretary resigned and the agency focused on changing course. New reports suggest that agency reforms still have a long way to go.

A congresswoman at a recent congressional hearing described the VA as having a “culture of retaliation and intimidation.” Employees who raise concerns about agency missteps are punished. The U.S. Office of Special Counsel (OSC), which manages federal employee whistleblower complaints, reported that it receives twice as many complaints from VA employees as from Pentagon employees, even though the Pentagon has double the staff. Forty percent of OSC claims in 2015 have come from VA employees, compared to 20 percent in 2009, 2010, and 2011.

During the hearing, a VA surgeon testified about the retaliation he faced following his attempts to highlight a coworker’s timecard fraud. From July 2014 until March 2015, his supervisors revoked his operating privileges, criticized him in front of other employees, and relocated his office to a dirty closet before demoting him from Chief of Staff.

Another physician was suspended from his job shortly after alerting supervisors to mishandled lab specimens. A week’s worth of samples were lost. Several months later, he reported another instance of specimen mishandling and his office was searched. He became a target of immense criticism.

In addition to these sorts of cases, Carolyn Lerner, head of OSC, told Congress that in some cases a whistleblower’s own VA medical records are illegally accessed in order to discredit them.  

One VA whistleblower claims that his VA medical records were accessed “by a dozen different people from October 28, 2014 to March 10, 2015.” Apparently, other employees were trying to retaliate against him because he attempted to flag the VA’s mishandling of suicidal patients at the Phoenix facility. His only treatment during that period was the purchase of a new pair of glasses, hardly a clinical reason for a dozen people to open his file.

These stories paint a dark picture of the VA system. A VA neurologist said, “the story of VA is a story of two different organizations; there is the VA that takes care of veterans, and there is the VA that takes care of itself.”

Congress and the VA should try to clean up these messes. Veterans’ health care needs improvement, and employees should be free to highlight these issues without the fear of retribution.

Is the problem with Baltimore’s district schools a lack of funds?

The Daily Show’s Jon Stewart argued as much during a recent interview with ABC’s George Stephanopoulos:

“If we are spending a trillion dollars to rebuild Afghanistan’s schools, we can’t, you know, put a little taste Baltimore’s way. It’s crazy.”

However, under even cursory scrutiny, Stewart’s claim falls apart like a Lego Super Star Destroyer dropped from ten feet. As economist Alex Tabarrok explained:

Let’s forget the off-the-cuff comparison to Afghanistan, however, and focus on a more relevant comparison. Is it true, as Stewart suggests, that Baltimore schools are underfunded relative to other American schools? The National Center for Education Statistics reports the following data on Baltimore City Public Schools and Fairfax County Public Schools, the latter considered among the best school districts in the entire country:

Baltimore schools spend 27% more than Fairfax County schools per student and a majority of the money comes not from the city but from the state and federal government. Thus, when it comes to education spending, Baltimore has not been ignored but is a recipient of significant federal and state aid.

Clearly, as Tabarrok shows, Baltimore’s schools are not lacking for funds. According to the most recent NCES data, the national average district school per-pupil expenditure was about $12,000 in 2010-11, which is about $12,500 in 2015 dollars.

However, one could object to Tabarrok’s comparison: perhaps it’s simply more expensive to educate low-income students in Baltimore than the generally well-off students in Fairfax County. To see if money really makes a difference, we would need an apples-to-apples comparison.

One way to test the “more money equals better results” assumption is to look at funding changes across different states and see whether there is any correlation between increased funding and improved results. In 2012, researchers from Harvard, Stanford, and the University of Munich released a report on international and state trends in student achievement that addressed this very question, finding that “Just about as many high-spending states showed relatively small gains as showed large ones…. And many states defied the theory [that spending drives performance] by showing gains even when they did not commit much in the way of additional resources.” They concluded:

It is true that on average, an additional $1,000 in per-pupil spending is associated with an annual gain in achievement of one-tenth of 1 percent of a standard deviation. But that trivial amount is of no statistical or substantive significance.

In other words, there’s no good reason to believe that Baltimore’s district schools would improve if the government followed Stewart’s advice and gave them a lot more money. In fact, the federal government already tried that. Due to stimulus funds, federal spending on Baltimore city schools increased from about $143 million in 2009 to a high of $265 million in 2011, before declining to about $150 million in 2014.

Source: Baltimore City Public Schools, Adopted Operating Budget, Fiscal Year 2014, page 12.

So how did Baltimore city school students perform on the state’s standardized test over that time period? About the same, and perhaps slightly worse:

Source: Maryland State Department of Education, 2014 Maryland Report Card.

Nearby Washington, D.C. already spends significantly more on its district schools. According to the most recent U.S. Census Bureau data, the D.C. district schools spent $1.2 billion in FY2012 [Table 1] on 44,618 students [Table 19], or about $26,660 per pupil. That’s down from the nearly $30,000 spent per pupil in FY2010, yet D.C.’s district schools still rank among the worst in the nation. By contrast, the D.C. Opportunity Scholarship Program spends less than one-third as much per pupil yet, according to a random-assignment study by the U.S. Department of Education, it produces slightly better academic results and a significantly higher graduation rate (82 percent for students offered a voucher, compared to 70 percent in the control group). Other gold-standard studies on school choice programs have found a positive impact on student achievement as well.

What Baltimore needs is not more money, but more choice.

Roger Milliken, head of the South Carolina textile firm Milliken & Co. for more than 50 years, was one of the most important benefactors of modern conservatism. He was active in the Goldwater campaign, and was a founder and funder of National Review and the Heritage Foundation. He dabbled in libertarianism, too. He was a board member of the Foundation for Economic Education and supported the legendary anarchist-libertarian speaker Robert LeFevre, sending his executives to LeFevre’s classes.

But he parted company with his free-market friends on one issue: free trade. Starting in the 1980s, when Americans started buying a lot of textile imports, he came to hate it. As the Wall Street Journal reports today,

Milliken & Co., one of the largest U.S. textile makers, has been on the front lines of nearly every recent battle to defeat free-trade legislation. It has financed activists, backed like-minded lawmakers and helped build a coalition of right and left-wing opponents of free trade….

“Roger Milliken was likely the largest single investor in the anti-trade movement for many years—as though no amount of money was too much,” said former Clinton administration U.S. Trade Representative Charlene Barshefsky, who battled with him and his allies….

Mr. Milliken, a Republican, invited anti-free-trade activists of all stripes to dinners on Capitol Hill. The coalition was secretive about their meetings, dubbing themselves the No-Name Coalition.

Several people who attended the dinners, which continued through the mid-2000s, recall how International Ladies’ Garment Workers Union lobbyist Evelyn Dubrow, a firebrand four years younger than the elderly Mr. Milliken, would greet the textile boss, who fought to keep unions out of his factories, with a kiss on the cheek.

“He had this uncanny convening power,” says Lori Wallach, an anti-free-trade activist who works for Public Citizen, a group that lobbies on consumer issues. “He could assemble people who would otherwise turn into salt if they were in the same room.”…

“He was just about the only genuinely big money that was active in funding trade-policy critics,” says Alan Tonelson, a former senior researcher at the educational arm of the U.S. Business and Industry Council, a group that opposed trade pacts.

But the world has changed, and so has Milliken & Co. Roger Milliken died in 2010, at age 95 still the chairman of the company his grandfather founded. His chosen successor, Joseph Salley, wants Milliken to be part of the global economy. He has ended the company’s support for protectionism and slashed its lobbying budget. And as the Journal reports, Milliken’s executives are urging Congress to support fast-track authority for President Obama.

American businesses are going global:

But as business becomes more international, American industries that once pushed for protection—apparel, automobiles, semiconductors and tires—now rarely do so. The U.S. Fashion Industry Association, an apparel trade group that wants to reduce tariffs, says that half the brands and retailers it surveyed last year used between six and 20 countries for production. Only two of the eight members of the main U.S. tire-industry trade group, the Rubber Manufacturers Association, even have their headquarters in the U.S….

“There’s a new generation of CEOs,” says Dartmouth College economic historian Douglas Irwin. “It’s part of their DNA that they operate in an international environment.”…

While Mr. Milliken saw China as a major threat to the industry (he said in 1999 he was “outraged, totally outraged” by Congress clearing the way for China’s entrance into the WTO), his successor sees the company’s future there. Milliken opened an industrial-carpet factory near Shanghai in 2007. It has a research-and-development center there and a laboratory stuffed with machinery where Chinese customers can check out the latest additive for strengthening or coloring synthetics.

Globalization is bringing billions of people into the world economy and into prosperity. Even in South Carolina.
