Policy Institutes

What happens when the population of K-12 students grows faster than the government is able to build school buildings? Las Vegas is finding out the hard way:

Las Vegas is back, baby. After getting slammed by the Great Recession, the city today is seeing rising home sales, solid job growth and a record number of visitors in 2014.

But the economic rebound has exacerbated the city’s severe school overcrowding and left school administrators, lawmakers and parents scrambling.

This elementary school was built to serve a maximum of 780 students. Today it serves 1,230 — and enrollment is growing.

Forbuss Elementary is hardly alone. The crowding is so bad here in the Clark County School District that 24 schools will soon run on year-round schedules.

Forbuss already is. One of five sections is always on break to make room. Scores of other schools are on staggered schedules. More than 21,000 Clark County students are taking some online classes, in large part because of space strains. Nearly 700 kids in the district take all of their classes online.

“It’s pretty rough some days. I’m in a small portable with 33 students,” says Sarah Sunnasy. She teaches fifth grade at Bertha Ronzone Elementary School, a high-poverty school that is nearly 90 percent over capacity. “We tend to run into each other a lot. Trying to meet individual needs when you have that many kids with such a wide range of ability levels is hard. We do the best we can with what we have,” she says.

At Forbuss Elementary there are 16 trailer classrooms — the school prefers the term “portables” — parked in the outdoor recess area, eating away at playground space.

There’s also a “portable” bathroom and portable lunchroom. “It’s warmer in the big school,” a little girl tells me. “These get cold in winter.”

“You have to make do,” says Principal Shawn Paquette. “You get creative.”

“Our school is so overcrowded, that, you know, everybody’s gotta pitch in,” says school support staffer Ruby Crabtree. “We don’t have enough people.”

The Nevada legislature recently approved funding to build new schools and renovate old ones, but as NPR notes, the “handful of new schools won’t be finished for at least two years.” In that time, the Las Vegas school district is expected to experience 1 percent enrollment growth, or about 3,000 to 4,000 students, so the district will need “at least two more elementary schools every year.”

Instead of herding children into crowded trailer “portables,” Nevada should consider giving students and their families the option of attending private schools. As education policy guru Matthew Ladner has pointed out repeatedly, school choice programs can serve as a pressure release valve in areas experiencing rapid growth, particularly where the elderly population is also growing, further straining public resources:

The 76,000,000-strong Baby Boom generation is already moving into retirement. Every day between now and the year 2030, 10,000 Americans reach retirement age. Every state will be much older than today, and the vast majority of states will have a larger portion of elderly than Florida has today – some much larger.

As the Baby Boomers retire, many will also be sending their grandchildren off to school. The Census Bureau projects many states will face a simultaneous increase in school-aged and elderly populations. A fierce battle between advocates of public spending on health and public education looms. If economists have correctly described the relationship between age demography and economic growth, tax dollars may prove scarce, exacerbating the problem.

Let’s be clear about the improvement needed: in anticipation of the crisis ahead, we need a system of vastly improved learning outcomes at a lower overall cost per student. In other words we need to improve both the academic and cost effectiveness of our education delivery system.

Fortunately, we already know how to improve learning outcomes at a lower cost per student: school choice.

Last month, Nevada adopted a scholarship tax credit law, but sadly the available credits are so limited that the law will barely relieve any pressure at all. As I explained recently:

The total amount of tax credits available is limited to only $5 million in the first year, or about 0.14 percent of statewide district school expenditures. Following Arizona, Florida, and New Hampshire, Nevada lawmakers wisely included an “escalator clause” allowing the total amount of credits to grow by 10 percent each year. However, assuming an average scholarship of $5,000 (significantly lower than the law allows), there would only be sufficient funds for 1,000 students in the first year, which is the equivalent of about 0.2 percent of statewide district school enrollment. Even with the escalator clause, very few students will be able to receive scholarships without the legislature expanding the available credits.

This year, Nevada let the school choice camel get its nose whisker under the tent, but policymakers shouldn’t rely on the escalator clause alone for growth. Students crammed into overcrowded district schools need alternatives now. Kids who happen to be assigned to an overcrowded Las Vegas district school shouldn’t have to stay in that school.

The BBC reports that Nancie Atwell of Maine has just won the million-dollar “Global Teacher Prize.” Congratulations, Ms. Atwell! On the rare occasions such prizes are doled out, the reaction is universally celebratory. But is there really only one teacher in the world worth $1,000,000, and even then only once in a lifetime?

Here’s a radical thought: What if we organized education such that the top teachers could routinely make large sums of money “the old-fashioned way” (i.e., by earning it in a free and open marketplace)? In other fields, the people and institutions that best meet our needs attract more customers and thereby earn greater profits. Why have we structured our economy such that the best cell phone innovators can become rich, but not the best teachers? This seems not only deeply unfair but unwise as well.

Perhaps some people don’t believe it would be possible for educators to become wealthy in an open marketplace. Their negativity is contradicted by reality. In one of the few places where instruction is organized as a marketplace activity, Korea’s tutoring sector, one of the top tutors (Kim Ki-Hoon) has earned millions of dollars per year over the last decade. His secret: offering recorded lessons over the Internet at a reasonable price, and attracting over a hundred thousand students each year. His employment contract with his tutoring firm ensures that he receives a portion of the revenue he brings in–so even though his fees are reasonable, his earnings are large due to the vast number of students he reaches. And his success depends on his performance. In an interview with Amanda Ripley he observed: “The harder I work, the more I make…. I like that.” Is there any reason we shouldn’t like that, too?

As Ripley reports, this tutoring marketplace receives favorable reviews from students:

In a 2010 survey of 6,600 students at 116 high schools conducted by the Korean Educational Development Institute, Korean teenagers gave their hagwon [i.e., private tutoring] teachers higher scores across the board than their regular schoolteachers: Hagwon teachers were better prepared, more devoted to teaching and more respectful of students’ opinions, the teenagers said. Interestingly, the hagwon teachers rated best of all when it came to treating all students fairly, regardless of the students’ academic performance.

That is not to say that the Korean education system is without flaw. Indeed, the government-mandated college entrance testing system creates enormous pressure on students and skews families’ demands toward doing well on “the test,” rather than on fulfilling broader educational goals. This, of course, is not caused by the marketplace, but rather by the government mandate. The marketplace simply responds to families’ demands, whatever they happen to be. While many hagwons prepare students for the mandated college-entrance exam, there are also those teaching such things as swimming or calligraphy.

If we liberate educators, educational entrepreneurship will thrive. There are policies already in place in some states that could ensure universal access to such an educational marketplace.

In his groundbreaking work, Denationalisation of Money: The Argument Refined, F.A. Hayek proposed that open competition among private suppliers of irredeemable monies would favor the survival of those monies that earned a reputation for possessing relatively stable purchasing power.

One of the main problems with Bitcoin has been its tremendous price instability: its volatility is about an order of magnitude greater than that of traditional financial assets, and this price instability is a serious deterrent to Bitcoin’s more widespread adoption as currency. So is there anything that can be done about this problem?

Let’s go back to basics. A key feature of the Bitcoin protocol is that the supply of bitcoins grows at a predetermined rate.[1] The Bitcoin price then depends on the demand for bitcoins: the higher the demand, the higher the price; the more volatile the demand, the more volatile the price. The fixed supply schedule also introduces a strong speculative element. To quote Robert Sams (2014: 1):

If a cryptocurrency system aims to be a general medium-of-exchange, deterministic coin supply is a bug rather than a feature… . Deterministic money supply combined with uncertain future money demand conspire to make the market price of a bitcoin a sort of prediction market [based] on its own future adoption.

To put it another way, the current price is indicative of expected future demand. Sams continues:

The problem is that high levels of volatility deter people from using coin as a medium of exchange [and] it might be conjectured that deterministic money supply rules are self-defeating.

One way to reduce such volatility is to introduce a feedback rule that adjusts supply in response to changes in demand. Such a rule could help reduce speculative demand and potentially lead to a cryptocurrency with a stable price.

Let’s consider a cryptocurrency that I shall call “coins,” which we can think of as a Bitcoin-type cryptocurrency but with an elastic supply schedule. Following Sams, if we are to stabilize its price, we want a supply rule that ensures that if the price rises (falls) by X% over some period, then the supply increases (decreases) by X% to return the price back toward its initial or target value. Suppose we measure a period as the length of time needed to validate n transactions blocks. For example, a period might be a day; if it takes approximately 10 minutes to validate each transactions block, as under the Bitcoin protocol, then the period would be the length of time needed to validate 144 transactions blocks. Sams posits the following supply rule:

(1a) Qt = Q(t-1) × (Pt / P(t-1)),

(1b) ΔQt = Qt - Q(t-1).

Here Pt is the coin price, Qt is the coin supply at the end of period t, and ∆Qt is the change in the coin supply over period t. There is a question as to how Pt is defined, but following Ferdinando Ametrano (2014a), let’s assume that Pt is defined in USD and that the target is Pt=$1. This assumed target provides a convenient starting point, and we can generalize it later to look at other price targets, such as those involving price indices. Indeed, we can also generalize it to targets specified in terms of other indices such as NGDP.
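
To make the supply rule concrete, here is a minimal Python sketch of the period-end calculation in equations (1a) and (1b). The function name and the example figures are mine, chosen purely for illustration; the rule itself is just the one stated above, with a $1 target.

```python
def supply_update(q_prev, p_prev, p_now):
    """Sams-style supply rule, equations (1a) and (1b).

    q_prev: coin supply at the end of the previous period
    p_prev: coin price (in USD) at the end of the previous period
    p_now:  coin price (in USD) at the end of the current period
    Returns (q_now, delta_q): the new supply and the change in supply.
    """
    q_now = q_prev * (p_now / p_prev)   # (1a) Qt = Q(t-1) × (Pt / P(t-1))
    delta_q = q_now - q_prev            # (1b) ΔQt = Qt - Q(t-1)
    return q_now, delta_q

# Example: demand pushes the price from the $1.00 target up to $1.25 over
# the period, so the supply expands by 25% to push the price back toward $1.
q, dq = supply_update(q_prev=1_000_000, p_prev=1.00, p_now=1.25)
print(q, dq)  # 1250000.0 250000.0
```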

Another issue is how the change in coin supply (∆Qt) is distributed. The point to note here is that there will be occasions when the coin supply needs to be reduced, and others when it needs to be raised, depending on whether the coin price has fallen or risen over the preceding period.

Ametrano proposes an elegant solution to this distribution problem, which he calls ‘Hayek Money.’ At the end of each period, the system should automatically reset the price back to the target value and simultaneously adjust the number of coins in each wallet by a factor of Pt/P(t-1). Instead of having k coins in a wallet that each increase or decrease in value by a factor of Pt/P(t-1), a wallet holder would thus have k×Pt/P(t-1) coins in their wallet, but the value of each coin would be the same at the end of each period.
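
Here is a rough Python sketch of the per-wallet rebasing Ametrano describes. The function and the wallet names are illustrative assumptions of mine rather than anything specified in the Hayek Money proposal; the point is simply that every balance is scaled by Pt/P(t-1), so each holder’s share of the total supply is unchanged while the price of a single coin is pushed back to the target.

```python
def rebase_wallets(wallets, p_prev, p_now):
    """Hayek Money-style rebase (a sketch, not a protocol specification).

    Multiplies every wallet balance by Pt / P(t-1). Holders keep the same
    fraction of the total coin supply, but the per-coin price is reset
    toward the target at the end of the period.
    """
    factor = p_now / p_prev
    return {owner: balance * factor for owner, balance in wallets.items()}

wallets = {"alice": 100.0, "bob": 50.0}
# The price rose from $1.00 to $1.25 over the period, so every balance
# grows by 25% while the per-coin price returns toward the $1 target.
print(rebase_wallets(wallets, 1.00, 1.25))  # {'alice': 125.0, 'bob': 62.5}
```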

This proposal would stabilize the coin price and achieve a stable unit of account. However, it would make no difference to the store of value performance of the currency: the value of the wallet would be just as volatile as it was before. To deal with this problem, both Ametrano (2014b) and Sams propose improvements based on an idea they call ‘Seigniorage Shares.’ These involve two types of claims on the system—coins and shares, with the latter used to support the price of the former via swaps of one for the other. Similar schemes have been proposed by Buterin (2014a),[2] Morini (2014),[3] and Iwamura et al. (2014), but I focus here on Seigniorage Shares as all these schemes are fairly similar.

The most straightforward version of Seigniorage Shares is that of Sams, and under my interpretation, this scheme would work as follows. If ∆Qt is positive and new coins have to be created in the t-th period, Sams would have a coin auction[4] in which ∆Qt coins would be created and swapped for shares, which would then be digitally destroyed by putting them into a burning blockchain wallet from which they could never be removed. Conversely, if ∆Qt is negative, existing coins would be swapped for newly created shares, and the coins taken in would be digitally destroyed.
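
Under this reading, the only decision to make at the end of each period is the direction of the coin-for-share swap, driven by the sign of ∆Qt. The sketch below is my own summary of that logic (the function name, the exchange-rate parameter, and the numbers are all hypothetical), not a description of an actual implementation.

```python
def seigniorage_swap(coin_supply, share_supply, delta_q, coins_per_share):
    """Period-end coin/share swap under a Seigniorage Shares-style scheme.

    delta_q > 0: mint delta_q new coins, swap them for shares at the going
                 rate, and retire (destroy) the shares received.
    delta_q < 0: mint new shares, swap them for |delta_q| coins, and retire
                 the coins received.
    coins_per_share stands in for the going market exchange rate, which in
    practice would be observed from the market rather than passed in.
    """
    if delta_q > 0:
        shares_retired = delta_q / coins_per_share
        return coin_supply + delta_q, share_supply - shares_retired
    if delta_q < 0:
        shares_minted = -delta_q / coins_per_share
        return coin_supply + delta_q, share_supply + shares_minted
    return coin_supply, share_supply

# Expansion: 250,000 new coins are swapped for shares, which are destroyed.
print(seigniorage_swap(1_000_000, 100_000, 250_000, coins_per_share=10))
# Contraction: 20,000 coins are bought back with newly minted shares.
print(seigniorage_swap(1_000_000, 100_000, -20_000, coins_per_share=10))
```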

At the margin, and so long as there is no major shock, the system should work beautifully. After some periods, new coins would be created; after other periods, existing coins would be destroyed. But either way, at the end of each period, the Ametrano-style coin quantity adjustments would push the price of coins back to the target value of $1.

Rational expectations would then come into play to stabilize the price of coins during each period. If the price of coins were to go below $1 during any such period, it would be profitable to take a bullish position in coins, go long, and profit when the quantity adjustments at the end of the period pushed the price back up to $1. Conversely, if the price of coins were to go above $1 during that period, then it would be profitable to take a bear position and sell or short coins to reap a profit at the end of that period, when the quantity adjustments would push the price back down to $1.

These self-fulfilling speculative forces, driven by rational expectations, would ensure that the price during each period would never deviate much from $1. They would also mean that the length of the period is not a critical parameter in the system. Doubling or halving the length of the period would make little difference to how the system would operate. One can also imagine that the period might be very short—even as short as the time needed to validate a single transactions block, which for a cryptocurrency with faster blocks than Bitcoin could be less than a minute. In such a case, very frequent rebasings would ensure almost continuous stability of the coin price.

The take-home message here is that a well-designed cryptocurrency system can achieve its price-pegging target—provided that there is no major shock.

References

Ametrano, F.A. “Hayek Money: The Cryptocurrency Price Stability Solution.” August 19, 2014. (a)

Ametrano, F.A. “Price Stability Using Cryptocurrency Seigniorage Shares.” August 23, 2014. (b)

Buterin, V. “The Search for a Stable Cryptocurrency.” November 11, 2014. (a)

Buterin, V. “SchellingCoin: A Minimal-Trust Universal Data Feed.” March 28, 2014. (b)

Iwamura, M., Kitamura, Y., Matsumoto, T., and Saito, K. “Can We Stabilize the Price of a Cryptocurrency? Understanding the Design of Bitcoin and Its Potential to Compete with Central Bank Money.” October 25, 2014.

Morini, M. “Inv/Sav Wallets and the Role of Financial Intermediaries in a Digital Currency.” July 21, 2014.

Sams, R. “A Note on Cryptocurrency Stabilisation: Seigniorage Shares.” November 8, 2014.

[1] Strictly speaking, the supply of bitcoins is only deterministic when measured in block-time intervals. Measured in real time, there is a (typically) small randomness in how long it takes to validate each block. However, the impact of this randomness is negligible, especially over the longer term where the law of large numbers also comes into play.

[2] Buterin (2014a) examines three schemes that seek to stabilize the cryptocurrency price: BitAsset, the SchellingCoin (first proposed by Buterin (2014b)) and Seigniorage Shares. He concludes that each of these is vulnerable to fragility problems similar to those to be discussed in my next post.

[3] In the Morini system, participants would have a choice of Inv and Sav wallets, the former for investors in coins and the latter for savers who want coin-price security. The Sav wallets would be protected by the Inv wallets, and participants could choose a mix of the two to meet their risk-aversion preferences.

[4] In fact, Sams’ auction is more complicated than it needs to be, and indeed unnecessary. Since shares and coins would have well-defined market values under his system, it would suffice merely to have a rule to swap them as appropriate at going market prices, without any need to specify an auction mechanism.

[Cross-posted from Alt-M.org]

Free speech has been in the news a lot recently. And lately it seems that we’ve had an unusually vigorous crop of utility monsters: the sort of professional complainers whose feelings are all too easily bruised, and who therefore demand that the rights of others be curtailed.

In a climate like this, it’s important to distinguish the true heroes of free speech from the false ones. The latter are all too common. The key question to ask of public figures is simple: If you had all the power, how would you treat your opponents?

Meet Dutch politician Geert Wilders. He was a guest of honor at the recent Garland, Texas exhibition of cartoons of Mohammed, where two would-be terrorists armed with assault weapons were gunned down by a single heroic security guard armed only with a pistol. (Nice shooting, by the way.)

Wilders is now being hailed as a free-speech hero, at least in some circles. Unfortunately, he’s nothing of the kind. Besides criticizing Islam, Wilders has also repeatedly called for banning the Koran. The former is compatible with the principle of free speech. The latter is not.

A key move here is to distinguish the exercise of free speech from the principled defense of free speech. The two are not the same, as my colleague Adam Bates has ably pointed out.

Exercises of free speech can be completely one-sided. As an example, here’s me exercising my free speech: I happen to think Islam is a false religion. I have no belief whatsoever that Mohammed’s prophecies are true. They’re not even all that interesting. I mean, if you think the Bible is dull…well…have I got a book for you. I speak only for myself here, but I disagree with Islam. (And probably with your religion, too, because I’m a skeptic about all of them.) My saying so is an exercise of free speech. 

Defenses of free speech are different. Properly speaking, they must not be one-sided. A principled defense of free speech means giving your opponents in any particular issue the exact same rights that you would claim for yourself: If you would offend them with words, then they must be allowed to offend you with words, too. Say what you like about them, and they must be allowed to say what they like about you. 

No, we’re not all going to agree. And that’s actually the point: Given that agreement on so many issues is simply impossible in our modern, interconnected world, how shall we proceed? With violence and repression? Or with toleration, even for views that we find reprehensible? 

If you had all the power, how would you treat your opponents?

Mismanagement within the Department of Veterans Affairs (VA) is chronic. The agency mismanages its projects and its patients. Last year’s scandal at the Phoenix VA centered on allegations that veterans waited months for treatment while never being added to the official waiting lists. The VA Secretary resigned and the agency focused on changing course. New reports suggest that agency reforms still have a long way to go.

A congresswoman at a recent congressional hearing described the VA as having a “culture of retaliation and intimidation.” Employees who raise concerns about agency missteps are punished. The U.S. Office of Special Counsel (OSC), which manages federal employee whistleblower complaints, reported that it receives twice as many complaints from VA employees as from Pentagon employees, even though the Pentagon has double the staff. Forty percent of OSC claims in 2015 have come from VA employees, compared to 20 percent in 2009, 2010, and 2011.

During the hearing, a VA surgeon testified about the retaliation he faced following his attempts to highlight a coworker’s timecard fraud. From July 2014 until March 2015, his supervisors revoked his operating privileges, criticized him in front of other employees, and relocated his office to a dirty closet before demoting him from Chief of Staff.

Another physician was suspended from his job shortly after alerting supervisors to mishandled lab specimens. A week’s worth of samples were lost. Several months later, he reported another instance of specimen mishandling and his office was searched. He became a target of immense criticism.

In addition to these sorts of cases, Carolyn Lerner, head of OSC, told Congress that in some cases a whistleblower’s own VA medical records are illegally accessed in order to discredit them.  

One VA whistleblower claims that his VA medical records were accessed “by a dozen different people from October 28, 2014 to March 10, 2015.” Apparently, other employees were trying to retaliate against him because he attempted to flag the VA’s mishandling of suicidal patients at the Phoenix facility. The only treatment he received during that period was the purchase of a new pair of glasses.

These stories paint a dark picture of the VA system. A VA neurologist said, “the story of VA is a story of two different organizations; there is the VA that takes care of veterans, and there is the VA that takes care of itself.”

Congress and the VA should try to clean up these messes. Veterans’ health care needs improvement, and employees should be free to highlight these issues without the fear of retribution.

Is the problem with Baltimore’s district schools a lack of funds?

The Daily Show’s Jon Stewart argued as much during a recent interview with ABC’s George Stephanopoulos:

“If we are spending a trillion dollars to rebuild Afghanistan’s schools, we can’t, you know, put a little taste Baltimore’s way. It’s crazy.”

However, under even cursory scrutiny, Stewart’s claim falls apart like a Lego Super Star Destroyer dropped from ten feet. As economist Alex Tabarrok explained:

Let’s forget the off-the-cuff comparison to Afghanistan, however, and focus on a more relevant comparison. Is it true, as Stewart suggests, that Baltimore schools are underfunded relative to other American schools? The National Center for Education Statistics reports the following data on Baltimore City Public Schools and Fairfax County Public Schools, the latter considered among the best school districts in the entire country:

Baltimore schools spend 27% more than Fairfax County schools per student and a majority of the money comes not from the city but from the state and federal government. Thus, when it comes to education spending, Baltimore has not been ignored but is a recipient of significant federal and state aid.

Clearly, as Tabarrok shows, Baltimore’s schools are not lacking for funds. According to the most recent NCES data, the national average district school per-pupil expenditure was about $12,000 in 2010-11, which is about $12,500 in 2015 dollars.

However, one could object to Tabarrok’s comparison: perhaps it’s simply more expensive to educate low-income students in Baltimore than the generally well-off students in Fairfax County. To see if money really makes a difference, we would need an apples-to-apples comparison.

One way to test the “more money equals better results” assumption is to look at funding changes across different states and see whether there is any correlation between increased funding and improved results. In 2012, researchers from Harvard, Stanford, and the University of Munich released a report on international and state trends in student achievement that addressed this very question, finding that “Just about as many high-spending states showed relatively small gains as showed large ones…. And many states defied the theory [that spending drives performance] by showing gains even when they did not commit much in the way of additional resources.” They concluded:

It is true that on average, an additional $1,000 in per-pupil spending is associated with an annual gain in achievement of one-tenth of 1 percent of a standard deviation. But that trivial amount is of no statistical or substantive significance.


In other words, there’s no good reason to believe that Baltimore’s district schools would improve if the government followed Stewart’s advice and gave them a lot more money. In fact, the federal government already tried that. Due to stimulus funds, federal spending on Baltimore city schools increased from about $143 million in 2009 to a high of $265 million in 2011, before declining to about $150 million in 2014.

Source: Baltimore City Public Schools, Adopted Operating Budget, Fiscal Year 2014, page 12.

So how did Baltimore city school students perform on the state’s standardized test over that time period? About the same, and perhaps slightly worse:

Source: Maryland State Department of Education, 2014 Maryland Report Card.

Nearby Washington, D.C. already spends significantly more on its district schools. According to the most recent U.S. Census Bureau data, the D.C. district schools spent $1.2 billion in FY2012 [Table 1] on 44,618 students [Table 19], or about $26,660 per pupil. That’s down from the nearly $30,000 spent per pupil in FY2010, yet D.C.’s district schools still rank among the worst in the nation. By contrast, the D.C. Opportunity Scholarship Program spends less than one third as much per pupil yet, according to a random-assignment study by the U.S. Department of Education, it produces slightly better academic results and a significantly higher graduation rate (82 percent for students offered a voucher, compared to 70 percent in the control group). Other gold standard studies on school choice programs have found a positive impact on student achievement as well.

What Baltimore needs is not more money, but more choice.

Roger Milliken, head of the South Carolina textile firm Milliken & Co. for more than 50 years, was one of the most important benefactors of modern conservatism. He was active in the Goldwater campaign, and was a founder and funder of National Review and the Heritage Foundation. He dabbled in libertarianism, too. He was a board member of the Foundation for Economic Education and supported the legendary anarchist-libertarian speaker Robert LeFevre, sending his executives to LeFevre’s classes.

But he parted company with his free-market friends on one issue: free trade. Starting in the 1980s, when Americans began buying a lot of textile imports, he hated it. As the Wall Street Journal reports today,

Milliken & Co., one of the largest U.S. textile makers, has been on the front lines of nearly every recent battle to defeat free-trade legislation. It has financed activists, backed like-minded lawmakers and helped build a coalition of right and left-wing opponents of free trade….

“Roger Milliken was likely the largest single investor in the anti-trade movement for many years—as though no amount of money was too much,” said former Clinton administration U.S. Trade Representative Charlene Barshefsky, who battled with him and his allies….

Mr. Milliken, a Republican, invited anti-free-trade activists of all stripes to dinners on Capitol Hill. The coalition was secretive about their meetings, dubbing themselves the No-Name Coalition.

Several people who attended the dinners, which continued through the mid-2000s, recall how International Ladies’ Garment Workers Union lobbyist Evelyn Dubrow, a firebrand four years younger than the elderly Mr. Milliken, would greet the textile boss, who fought to keep unions out of his factories, with a kiss on the cheek.

“He had this uncanny convening power,” says Lori Wallach, an anti-free-trade activist who works for Public Citizen, a group that lobbies on consumer issues. “He could assemble people who would otherwise turn into salt if they were in the same room.”…

“He was just about the only genuinely big money that was active in funding trade-policy critics,” says Alan Tonelson, a former senior researcher at the educational arm of the U.S. Business and Industry Council, a group that opposed trade pacts.

But the world has changed, and so has Milliken & Co. Roger Milliken died in 2010, at age 95 still the chairman of the company his grandfather founded. His chosen successor, Joseph Salley, wants Milliken to be part of the global economy. He has ended the company’s support for protectionism and slashed its lobbying budget. And as the Journal reports, Milliken’s executives are urging Congress to support fast-track authority for President Obama.

American businesses are going global:

But as business becomes more international, American industries that once pushed for protection—apparel, automobiles, semiconductors and tires—now rarely do so. The U.S. Fashion Industry Association, an apparel trade group that wants to reduce tariffs, says that half the brands and retailers it surveyed last year used between six and 20 countries for production. Only two of the eight members of the main U.S. tire-industry trade group, the Rubber Manufacturers Association, even have their headquarters in the U.S….

“There’s a new generation of CEOs,” says Dartmouth College economic historian Douglas Irwin. “It’s part of their DNA that they operate in an international environment.”…

While Mr. Milliken saw China as a major threat to the industry—he said in 1999 he was “outraged, totally outraged” by Congress clearing the way for China’s entrance into the WTO—his successor sees the company’s future there. Milliken opened an industrial-carpet factory near Shanghai in 2007. It has a research-and-development center there and a laboratory stuffed with machinery where Chinese customers can check out the latest additive for strengthening or coloring synthetics.

Globalization is bringing billions of people into the world economy and into prosperity. Even in South Carolina.

The OECD has just released a report offering “its perspective” on Sweden’s academic decline. Its perspective is too narrow. In launching the new report, OECD education head Andreas Schleicher declared that “It was in the early 2000s that the Swedish school system somehow seems to have lost its soul.” The OECD administers the international PISA test, which began in the year 2000.

Certainly Sweden’s academic performance has fallen since the early 2000s, but its decline was substantially faster in the preceding decade. PISA cannot shed light on this, but TIMSS—an alternative international test—can, having been introduced several years earlier. On the 8th grade mathematics portion of TIMSS, Sweden’s rate of decline between 1995 and 2003 was over five points per year. Between 2003 and 2011 it was less than two points per year. Still regrettable, but less grievously so.

Why is this timing important? Because Sweden introduced a nationwide public/private school choice program in 1992 and many critics blame that program for Sweden’s decline. This charge is hopelessly anachronistic. In 2003, at the end of the worst phase of the nation’s academic decline, public schools still enrolled 96% of students. Hence it must have been declining public school performance that brought down the national average. A 4% private sector could have had little effect.

What then can explain the country’s disappointing results?  Gabriel Sahlgren has some intriguing suggestions in a recent piece analyzing trends in Finland, Sweden, and Norway. For instance:

Something extreme clearly happened in Sweden in the mid-to-late 1990s, most probably due to the 1994 national curriculum that emphasised pupil-led methods, which decreased teacher-led instruction. [emphasis added]

If you happened to miss it last week, go catch Bill Keller’s extraordinary Marshall Project interview with David Simon, former Baltimore Sun reporter, creator of the crime drama “The Wire,” and longtime Drug War critic. A few highlights:

I guess there’s an awful lot to understand and I’m not sure I understand all of it. The part that seems systemic and connected is that the drug war — which Baltimore waged as aggressively as any American city — was transforming in terms of police/community relations, in terms of trust, particularly between the black community and the police department. Probable cause was destroyed by the drug war. …

Probable cause from a Baltimore police officer has always been a tenuous thing. It’s a tenuous thing anywhere, but in Baltimore, in these high crime, heavily policed areas, it was even worse. When I came on, there were jokes about, “You know what probable cause is on Edmondson Avenue? You roll by in your radio car and the guy looks at you for two seconds too long.” Probable cause was whatever you thought you could safely lie about when you got into district court.

Then at some point when cocaine hit and the city lost control of a lot of corners and the violence was ratcheted up, there was a real panic on the part of the government. And they basically decided that even that loose idea of what the Fourth Amendment was supposed to mean on a street level, even that was too much. Now all bets were off. Now you didn’t even need probable cause. The city council actually passed an ordinance that declared a certain amount of real estate to be drug-free zones. They literally declared maybe a quarter to a third of inner city Baltimore off-limits to its residents, and said that if you were loitering in those areas you were subject to arrest and search. Think about that for a moment: It was a permission for the police to become truly random and arbitrary and to clear streets any way they damn well wanted.

Former mayor (and later governor and presidential candidate) Martin O’Malley instituted a mass arrest policy made possible by the ready availability of humbles:

A humble is a cheap, inconsequential arrest that nonetheless gives the guy a night or two in jail before he sees a court commissioner. You can arrest people on “failure to obey,” it’s a humble. Loitering is a humble. These things were used by police officers going back to the ‘60s in Baltimore. It’s the ultimate recourse for a cop who doesn’t like somebody who’s looking at him the wrong way. And yet, back in the day, there was, I think, more of a code to it. If you were on a corner, you knew certain things would catch you a humble.  

“The drug war gives everybody permission to do anything.” One way Simon noticed things changing was that his own film crew members kept getting picked up:

…anybody who was slow to clear the sidewalk or who stayed seated on their front stoop for too long when an officer tried to roust them. Schoolteachers, Johns Hopkins employees, film crew people, kids, retirees, everybody went to the city jail. If you think I’m exaggerating look it up.

Under pressure from O’Malley to portray a crime reduction miracle, the BPD cooked its books to undercount serious crimes like rape and armed robbery while also going back to inflate crime numbers in previous years so as to simulate a bigger drop for which to take credit. Even as the arrest mill hummed, clearance rates for offenses like murder and aggravated assault were plummeting, the prolonged footwork needed to solve these crimes affording ambitious cops relatively few opportunities for overtime or advancement.

Meanwhile, the informal but understood street policing “code” was decaying. Under the old code, for example, “the rough ride [in the back of the van] was reserved for the guys who fought the police,” which Freddie Gray did not do, witnesses say.  The Baltimore Sun’s investigation of police misconduct payouts is frightening not so much because it shows patterns of abuse but because of its lack of patterns: “anyone and everyone” can wind up brutalized.

Policing in Baltimore may actually have bounced back from its low point, if Simon is correct, not only because newer police administrators are trying to refocus on serious crime rather than arrest numbers, but – crucially – because the public is now able to film the police: “The smartphone with its small, digital camera, is a revolution in civil liberties.” 

There is much more, in rich detail: which insults cops will informally shrug off, and which they won’t; why replacing white with African-American officers didn’t fix things; how the nightmare ends (“end the drug war”: it would help even if D.A.s just stopped paying cops overtime for penny-ante drug arrests.)  Read the whole thing.


When banks are in distress, it is important to assess how easily the bank’s capital cushion can absorb potential losses from troubled assets. To do this, I performed an analysis using Texas Ratios for Greece’s four largest banks, which control 88% of total assets in the banking system.

We use a little-known but very useful formula to determine the health of the Big Four: the Texas Ratio. It was used during the U.S. Savings and Loan Crisis, which was centered in Texas. The Texas Ratio is the book value of all non-performing assets divided by equity capital plus loan loss reserves. Only tangible equity capital is included in the denominator. Intangible capital — like goodwill — is excluded.

Despite the already worrisome numbers, the actual situation is far worse than even I had initially deduced. A deeper analysis of the numbers reveals that Greece’s largest banks include deferred tax assets as part of total equity in their financial statements. Deferred tax assets are created when banks are allowed to declare their losses at a later time, thereby reducing future tax liabilities. This is problematic because these deferred tax assets are really just “phantom assets” in the sense that these credits cannot be used (read: they are worthless) if the Greek banks continue to operate at a pretax loss.
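
To illustrate how the adjustment works, here is a small Python sketch. The figures are invented for the example (they are not the actual Greek bank numbers); the only point is how stripping deferred tax assets out of equity raises the Texas Ratio.

```python
def texas_ratio(non_performing_assets, tangible_equity, loan_loss_reserves):
    """Texas Ratio: non-performing assets / (tangible equity + loan loss reserves).
    A ratio above 100% is the conventional danger threshold."""
    return non_performing_assets / (tangible_equity + loan_loss_reserves)

# Illustrative figures only (in EUR billions) -- not actual Greek bank data.
npa, equity, reserves = 30.0, 9.0, 12.0
deferred_tax_assets = 4.5   # "phantom" capital booked as part of equity

headline = texas_ratio(npa, equity, reserves)
adjusted = texas_ratio(npa, equity - deferred_tax_assets, reserves)
print(f"headline Texas Ratio:     {headline:.0%}")  # 143%
print(f"DTA-adjusted Texas Ratio: {adjusted:.0%}")  # 182%
```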

Similar to its neighbors — Portugal, Spain and Italy — Greece provides significant state support to its banks by offering credits for losses that can be deducted against future taxable profits. For the four largest banks, this type of support made up 38-61% of total equity (see accompanying chart).

Adjusting the Texas Ratio to account for the phantom assets yields much higher ratios. These indicate significantly higher risk of bank failures, barring a capital injection (see the accompanying chart).

The federal government runs more than 2,300 subsidy programs. One of the problems created by the armada of hand-outs is that many programs work at cross-purposes.

Government information programs urge women to breastfeed. This website says, “the cells, hormones, and antibodies in breastmilk protect babies from illness. This protection is unique and changes to meet your baby’s needs.” Breastfeeding, the government says, may protect babies against asthma, leukemia, obesity, ear infections, eczema, diarrhea, vomiting, lower respiratory infections, necrotizing enterocolitis, sudden infant death syndrome, and diabetes. 

The alternative to breastfeeding is baby formula. Some moms need to use formula, but you would think given the superiority of breastmilk that the government would not want to encourage formula. But that is exactly what the government does with the Women, Infants, and Children (WIC) program. According to the Wall Street Journal, the “largest single expense” in the $6 billion program is subsidies for formula. If you subsidize something, you get more of it. And, presumably, more formula means less breastmilk.

The government and probably every pediatrician tell moms to breastfeed if they can, yet the government provides huge subsidies for the alternative. “Huge” seems to be the correct word. The WSJ says that WIC provides benefits to the moms of half of all babies in the nation, and the program “accounts for well over half of all infant formula sold in the U.S.” That is remarkable.

Obviously then, ending WIC subsidies for formula would be a good way to trim the bloated federal budget. Another way to trim the budget would be to cut off people on WIC who earn more than the federal income limits, which is the focus of the WSJ article.

So WIC would be a good target for reforms by Republicans, who often rail against bloated spending and promise to eliminate deficits. Alas, rather than a take-charge reform agenda on WIC from the GOP, the WSJ captures just a quiet whimper:

“The focus will remain on preserving the intent of these programs, which is to ensure low-income children—and, in this case, mothers and infants in need—receive supplemental assistance to help protect against inadequate nutrition,” said Senate Agriculture Committee Chairman Pat Roberts (R., Kan.), who has a lead role in renewing the WIC law.

In March, we detailed reforms announced by Attorney General Eric Holder to federal asset forfeiture under the Bank Secrecy Act’s “structuring” law.  Those changes mirror an earlier policy shift by the Internal Revenue Service.  Unfortunately for some, the changes were not made retroactive, meaning that people whose property was seized before the announcements, in ways that would violate the new policies, did not automatically have their property returned.

Lyndon McLellan, the owner of a North Carolina convenience store, has not been charged with a crime.  He has, however, had his entire business account, totaling $107,702.66, seized by the federal government.  As Mr. McLellan attempts to recover his money, he is now being represented by the Institute for Justice, which issued this release:

“This case demonstrates that the federal government’s recent reforms are riddled with loopholes and exceptions and fundamentally fail to protect Americans’ basic rights,” said Institute for Justice Attorney Robert Everett Johnson, who represents Lyndon. “No American should have his property taken by the government without first being convicted of a crime.”

In February 2015, during a hearing before the U.S. House of Representatives Ways & Means Oversight Subcommittee, North Carolina Congressman George Holding told IRS Commissioner John Koskinen that he had reviewed Lyndon’s case—without specifically naming it—and that there was no allegation of the kind of illegal activity required by the IRS’s new policy. The IRS Commissioner responded, “If that case exists, then it’s not following the policy.”

The government’s response to the notoriety Mr. McLellan’s case has received was nothing short of threatening.  After the hearing, Assistant U.S. Attorney Steven West wrote to Mr. McLellan’s attorney:

Whoever made [the case file] public may serve their own interest but will not help this particular case. Your client needs to resolve this or litigate it. But publicity about it doesn’t help. It just ratchets up feelings in the agency. My offer is to return 50% of the money. 

What “feelings in the agency” could possibly be “ratchet[ed] up” by highlighting a case in which the owner is accused of no wrongdoing while both the Department of Justice and the Internal Revenue Service have announced reforms to prevent these seizures from occurring?

Perhaps the government is sensitive to the avalanche of negative press that civil asset forfeiture has received over the past several years (thanks to the tireless efforts of organizations like the Institute for Justice and the ACLU).  Perhaps the government feels that the game is nearly up, after dozens of publicized cases of civil asset forfeiture abuse.

Cases like this show that the executive branch, now under a new Attorney General who has her own controversial civil forfeiture history, cannot be trusted to stay its own hand.  State and federal legislators must take the initiative, as some already have, if this abusive practice is going to end.

When Prime Minister Shinzo Abe visited Washington he brought plans for a more expansive international role for his country. But the military burden of defending Japan will continue to fall disproportionately on America.

As occupying power, the U.S. imposed the “peace constitution” on Tokyo, with Article Nine banning possession of a military. As the Cold War developed, however, Washington recognized that a rearmed Japan could play an important security role.

However, Japan’s governments hid behind that provision to cap military outlays and limit the Self-Defense Forces’ role, ensuring continued American protection. That approach also suited Tokyo’s neighbors, which had suffered under Imperial Japan’s brutal occupation.

In recent years Japanese sentiment has shifted toward a more vigorous role out of fear of North Korea and China. This changing environment generated new bilateral defense “guidelines.”

Yet the focus is on Japanese, not American, security. In essence, the new standards affirm what should have been obvious all along—Japan will help America defend Japan. In contrast, there is nothing about Tokyo supporting U.S. defense other than as part of “cooperation for regional and global peace and security.”

This approach was evident in Prime Minister Abe’s speech to Congress, when he emphasized that Tokyo’s responsibility is to “fortify the U.S.-Japan alliance.” He said Japan would “take yet more responsibility for the peace and stability in the world,” but as examples mostly cited humanitarian and peace-keeping operations.

Worse, Japan’s military outlays were essentially flat over the last decade while Washington and, more ominously for Japan, the People’s Republic of China dramatically increased military expenditures. The U.S. is expected to fill the widening gap.

Obviously Tokyo sees its job as non-combat, relatively costless and riskless social work that will enhance its international reputation. Even Tokyo’s potential new “security” duties appear designed to avoid combat—cyber warfare, reconnaissance, mine-sweeping, logistics.

As I point out in Forbes, “Washington’s job is to do anything bloody or messy. That is, deter and fight wars with other militaries, a task which the prime minister ignored. Indeed, the U.S. is expected to do even more to defend Japan, deploying new military equipment, for instance.”

While America has an obvious interest in Japan’s continued independence, no one imagines a Chinese attempt to conquer Tokyo. Rather, the most likely trigger for conflict today is the Senkaku Islands, a half dozen valueless pieces of rock. Abe so far has preferred confrontation to compromise—a stance reinforced by Washington’s guarantee.

Abe’s historical revisionism further inflames regional tensions. Abe addressed the historical controversy in his speech to Congress but more remains to be done.

U.S. officials appear to have forgotten the purpose of alliances. Abe was eloquent in stating why Japan enjoyed being allied with America. It isn’t evident what the U.S. receives in return.

After World War II the U.S. sensibly shielded allied states from totalitarian assault as they recovered. That policy succeeded decades ago. Now Washington should cede responsibility for defending its populous and prosperous allies.

America should remain a watchful and wary friend, prepared to act from afar against potentially hostile hegemonic threats. In the meantime Washington should let other states manage day-to-day disputes and controversies.

The U.S. should not tell Tokyo what to do. Rather, Washington should explain what it will not do. No promise of war on Japan’s behalf, no forward military deployment, no guarantee for Japanese commerce at sea, no Pentagon backing for contested territorial claims.

This would force the Japanese people to debate their security needs, set priorities, and pay the cost. Moreover, Tokyo would have added incentive to improve its relationships with neighboring states.

After 70 years the U.S. should stop playing globocop, especially in regions where powerful, democratic friends such as Japan can do so much more to defend themselves and their neighborhoods. This would be the best way to enhance security and stability not only of the Asia-Pacific but also of America, which is Washington’s highest responsibility.

Last week, the Department of Justice (DOJ) announced a $20 million police body camera pilot funding scheme to assist law enforcement agencies in developing body camera programs. In the wake of the killings of Michael Brown, Walter Scott, and Freddie Gray there has been renewed debate on police accountability. Unsurprisingly, body cameras feature heavily in this debate. Yet, despite the benefits of police body cameras, we ought to be wary of federal top-down body camera funding programs, which have been supported across the political spectrum.

The $20 million program is part of a three-year $75 million police body camera initiative, which was announced by the Obama administration shortly after the news that Darren Wilson, the officer who shot and killed Michael Brown in Ferguson, Missouri, would not be indicted. It is undoubtedly the case that if Wilson had been wearing a body camera there would be fewer questions about the events leading up to and including his killing of Brown. And, while there are questions about the extent to which police body cameras prompt some “civilizing effect” on police, the footage certainly provides welcome additional evidence in investigations relating to police misconduct, thereby improving transparency and accountability.

Democratic presidential candidate Hillary Clinton agrees that body cameras improve transparency and accountability. In a speech on criminal justice last week she said that she wants to extend President Obama’s body camera funding program: “We should make sure every police department in the country has body cameras to record interactions between officers on patrol and suspects.” Clinton did not provide any details about her proposed body camera program, but it certainly sounds like it would be more expensive than Obama’s.

On the other side of the political spectrum a more detailed police body camera proposal emerged. In March, Republican presidential candidate Sen. Rand Paul (R-KY) co-sponsored a body camera bill with Sen. Brian Schatz (D-HI) that would establish a federal pilot funding program for police body cameras. I wrote last month about some of the worrying aspects of the bill, such as the requirement that the entities requesting body camera funding publish privacy policies “in accordance with the open records laws, if any, of the State.” This means that Paul and Schatz’s bill could provide financial assistance to departments that are not subject to policies conducive to improved law enforcement transparency and accountability.

Given that law enforcement agencies can propose bad body camera policies and that body cameras can impose a fiscal burden on states, it is not hard to see why federal funding for police body cameras might be appealing. But it is important to keep in mind that while the DOJ does require that “a strong BWC policy framework” be in place before body cameras are purchased through the recently announced program, what constitutes a “strong BWC policy framework” is not made clear. The DOJ document which outlines eligibility for the grants does state that law enforcement agencies will have to develop or build on a policy which includes the “Implementation of appropriate privacy policies that at a minimum addresses BWC program issues involving legal liabilities of release of information, civil rights, domestic violence, juveniles, and victims’ groups.” However, the document includes few specific details about what these policies must include in order to be deemed to have addressed those issues.

There are numerous policy concerns associated with police body cameras that must be adequately addressed if they are to improve transparency and accountability. A good body camera policy will outline (among other things) when a police body camera must be on, what footage can be requested, how much of that footage can be released to the public, how long the footage is stored, what the punishment will be when officers fail to turn their cameras on, what information will be redacted from footage when it is released, and whether police will be able to view footage before speaking to investigators.

It might be the case that the Bureau of Justice Assistance, which will administer the grants, will require the best police body camera policies so far proposed. But the fact that implementation of “appropriate privacy policies” is a condition for funding means that some law enforcement agencies may adopt privacy policies in order to receive funding rather than because they provide the best privacy protections.

If the DOJ is going to take part in the ongoing debate on police body camera policy it shouldn’t provide a financial incentive for the adoption of its policies. When discussing the best policies for a relatively new technology such as body cameras we ought to consider suggestions from a variety of sources, but none of these suggestions should be accompanied by financial assistance, which could adversely influence the consideration of policy. 

The federal government operates the air traffic control (ATC) system as an old-fashioned bureaucracy, even though ATC is a high-tech business. It’s as if the government took over Apple Computer and tried to design breakthrough products. The government would surely screw it up, which is the situation today with ATC run by the Federal Aviation Administration (FAA).

The Washington Post reports:

A day after the Federal Aviation Administration celebrated the latest success in its $40 billion modernization of the air-traffic control system, the agency was hit Friday by the most scathing criticism to date for the pace of its efforts.

The FAA has frustrated Congress and been subject to frequent critical reports as it struggles to roll out the massive and complex system called NextGen, but the thorough condemnation in a study released Friday by the National Academies was unprecedented.

Mincing no words, the panel of 10 academic experts brought together by the academy’s National Research Council (NRC) said the FAA was not delivering the system that had been promised and should “reset expectations” about what it is delivering to the public and the airlines that use the system.

The “success” the WaPo initially refers to is a component of NextGen that was four years behind schedule and millions of dollars over budget. That is success for government work, I suppose.

The NRC’s findings come on the heels of other critical reports and years of FAA failings. The failings have become so routine—and the potential benefits of improved ATC so large— that even moderate politicians, corporate heads, and bureaucratic insiders now support major reforms:

“We will never get there on the current path,” Rep. Bill Shuster (R-Pa.), chairman of the House Transportation Committee, said two months ago at a roundtable discussion on Capitol Hill. “We’ve spent $6 billion on NextGen, but the airlines have seen few benefits.”

American Airlines chief executive Doug Parker added, “FAA’s modernization efforts have been plagued with delays.”

And David Grizzle, former head of the FAA’s air-traffic control division, said taking that division out of FAA hands “is the only means to create a stable” future for the development of NextGen.

The reform we need is ATC privatization. Following the leads of Canada and Britain, we should move the entire ATC system to a private and self-supporting nonprofit corporation. The corporation would cover its costs by generating revenues from customers—the airlines—which would make it more responsible for delivering results.

Here is an interesting finding from the NRC report:  “Airlines are not motivated to spend money on equipment and training for NextGen.” Apparently, the airlines do not trust the government to do its part, and so progress gets stalled because companies cannot be sure their investments will pay off. So an advantage of privatization would be to create a more trustworthy ATC partner for the users of the system.

ATC privatization should be an opportunity for Democrats and Republicans to forge a bipartisan legislative success. In Canada, the successful ATC privatization was enacted by a Liberal government and supported by the subsequent Conservative government. So let’s use the Canadian system as a model, and move ahead with ATC reform and modernization.

Many state legislatures are proposing to expand E-Verify – a federal government-run electronic system that allows or forces employers to check the identity of new hires against a government database.  In a perfect world, E-Verify tells employers whether the new employee can legally be hired.  In our world, E-Verify is a notoriously error-prone and unreliable system.

E-Verify mandates vary considerably across states.  Currently, Alabama, Arizona, Mississippi, and South Carolina have across-the-board mandates for all employers.  The state governments of Georgia, Utah, and North Carolina force all businesses with at least 10, 15, and 25 employees, respectively, to use E-Verify.  Florida, Indiana, Missouri, Nebraska, Oklahoma, Pennsylvania, and Texas mandate E-Verify for public employees and state contractors, while Idaho and Virginia mandate it for public employees. The remaining states either have no state-wide mandates or, in the case of California, limit how E-Verify can be used by employers.

Despite E-Verify’s wide use in the states and its many problems, some state legislatures are considering forcing it on every employer within their respective states.

In late April, the North Carolina House of Representatives passed a bill (HB 318) by a vote of 80-39 to lower the threshold for mandated E-Verify to businesses with five or more employees.  HB 318 is now moving on to the North Carolina Senate, where it could pass.  Nevada’s AB 172 originally included an E-Verify mandate that the bill’s author removed during the amendment process.  Nebraska’s LB611 would have mandated E-Verify for all employers in the state, but it has stalled since a hostile hearing in February.

E-Verify imposes a large economic cost on American workers and employers, does little to halt unlawful immigration because it fails to turn off the “jobs magnet,” and is an expansionary threat to American liberties.  Those harms are great while the benefits are uncertain – at best.  At a minimum, state legislatures should thoroughly examine the costs and supposed benefits of E-Verify before expanding or enacting mandates.

Scott Platton helped to write this blog post.

When U.S. Congressman Robert C. “Bobby” Scott (D-VA) and U.S. Senator Patty Murray (D-WA) introduced the Raise the Wage Act on April 30, they promised that their bill would “raise wages for nearly 38 million American workers.” Their bill would also phase out the subminimum tipped wage and index the minimum wage to median wage growth.

With rhetorical flourish, Sen. Murray said, “Raising the minimum wage to $12 by 2020 is a key component to helping more families make ends meet, expanding economic security, and growing our economy from the middle out, not the top down.”

The fact sheet that accompanied the bill claims that passing the Raise the Wage Act would reduce poverty and benefit low-wage workers, especially minorities. Indeed, it is taken as given that the Act “would give 37 percent of African American workers a raise”—by the mere stroke of a legislative pen. It is also assumed that “putting more money into the pockets of low-wage workers stimulates consumer demand and strengthens the economy for all Americans.”

The reality is that whenever wages are artificially pushed above competitive market levels jobs will be destroyed, unemployment will increase for lower-skilled workers, and those effects will be stronger in the long run than in the short run.  The least productive workers will be harmed the most as employers substitute new techniques that require fewer low-skilled workers.  There will be less full-time employment for those workers and their benefits will be cut over time.  That is the logic of the market price system.

To deny that logic and, hence, to ignore the law of demand, is to destroy the opportunities that would have otherwise existed.  The minimum wage law increases the price of labor without doing anything to increase a worker’s skills or experience or other traits that would allow that worker to remain employed and earn higher wages in the future.  Moreover, if that worker loses her job because she is priced out of the labor market, her income is zero.

Some workers, particularly those with slightly higher skills, more experience, or better work habits, may retain their jobs at the higher minimum wage, but other workers will lose their jobs or won’t be able to find jobs.  If workers lose their jobs, it is against the minimum wage law to offer to work at the old wage rate—or for employers to hire at that wage rate.  Hence, the minimum wage law restricts freedom of contract and prevents many workers from climbing the income ladder.

Contrary to what proponents of the minimum wage promise, an increase in the minimum wage cannot benefit all workers or all Americans.  Workers who lose their jobs or become unemployable at the higher minimum wage will have lower—not higher—real incomes.  They will have less, not more, spending power.

Linking the minimum wage to the growth of the median wage is a sure way to permanently block some lower-skilled workers out of the market and keep them in poverty.

Proponents of the Raise the Wage Act justify the higher minimum by arguing that inflation has eroded the purchasing power of the nominal minimum wage.  The fact sheet states that “the real value of the minimum wage … is down by more than 30 percent from its peak value in the late 1960s.”  That comparison is largely irrelevant.  What matters is whether the nominal minimum wage is above the nominal market wage rate for low-skilled workers.

No one knows ex ante what the market-clearing wage for low-skilled workers should be, but we do know that the higher the minimum wage is above the prevailing market wage, the greater the number of jobs that will be lost, the higher the rate of unemployment for that category of labor, and the slower the job growth.  We also know that the minimum wage is an ineffective means to reduce poverty, and that an increase in the minimum wage would benefit mostly non-poor families.  

There is strong empirical evidence in support of these effects.  For a summary of some of the many studies showing the negative effects of minimum wage laws, especially in the longer run, see “Minimum Wages: A Poor Way to Reduce Poverty” (Joseph J. Sabia), “The Minimum Wage and the Great Recession: Evidence of Effects on the Employment and Income Trajectories of Low-Skilled Workers” (Jeffrey Clemens and Michael Wither), and “The Minimum Wage Delusion and the Death of Common Sense” (James A. Dorn).

Black teen unemployment is abysmal, especially for males.  Raising the minimum wage will make it more so.  Poverty is best reduced by gaining education and experience, not by mandating higher federal minimum wages.  There is no free lunch.  Congress can’t promise workers a pay raise without adverse consequences for those who are priced out of the market.  A more accurate name for the Raise the Wage Act would be the “Job Destruction Act,” under which Congress makes it illegal to hire low-skilled workers at a rate less than the legal minimum wage, even if they are willing to work at the free-market wage rate.

The Raise the Wage Act is a feel-good bill, not an effective way to reduce poverty or create jobs for low-skilled workers.  Removing barriers to competition and entrepreneurship, lowering taxes on labor and capital, improving educational opportunities, and reducing inner-city crime would do more to help the poor than rigging the price of labor and making promises that can’t be kept.

The Tax Foundation released its inaugural “International Tax Competitiveness Index” (ITCI) on September 15, 2014. The United States was ranked an abysmal 32nd out of the 34 OECD member countries for 2014. (See accompanying Table 1.) European welfare states such as Norway, Sweden, and Denmark, despite their large social welfare systems, still manage to impose less burdensome tax systems on local businesses than the U.S. does. The U.S. is even ranked below Italy, a country with such a pervasive tax evasion problem that the head of its Agency of Revenue (roughly equivalent to the Internal Revenue Service in the United States) recently joked that Italians don’t pay taxes because they are Catholic and hence used to “gaining absolution.” In fact, according to the ranking, only France and Portugal have the dubious honor of operating less competitive tax systems than the United States.

The ITCI measures “the extent to which a country’s tax system adheres to two important principles of tax policy: competitiveness and neutrality.” The competitiveness of a tax system can be measured by the overall tax rates faced by domestic businesses operating within the country. In the words of the Tax Foundation, a tax rate that is too high “drives investment elsewhere, leading to slower economic growth.” Tax competitiveness is measured from 40 variables across five categories: consumption taxes, individual taxes, corporate income taxes, property taxes, and the treatment of foreign earnings. Tax neutrality, the other principle taken into account when composing the ITCI, refers to a “tax code that seeks to raise the most revenue with the fewest economic distortions.” This means a neutral tax code treats all firms and industries equally, with no tax breaks for any specific business activity. A neutral tax system would also limit the rates of, among other things, capital gains and dividend taxes, which encourage consumption at the expense of savings and investment.
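To make that structure concrete, here is a minimal sketch in Python of how a composite score could be assembled from category subscores. The category names follow the description above, but the equal weighting and the numerical subscores are hypothetical illustrations, not the Tax Foundation’s actual methodology (which aggregates roughly 40 variables with its own weighting scheme).

```python
# Toy illustration of a composite tax-competitiveness score.
# Category names follow the ITCI description above; the subscores and the
# equal weighting are hypothetical -- the real index uses its own weights.

CATEGORIES = [
    "consumption_taxes",
    "individual_taxes",
    "corporate_income_taxes",
    "property_taxes",
    "treatment_of_foreign_earnings",
]

def composite_score(subscores: dict) -> float:
    """Average the five category subscores (0-100 scale) into one number."""
    return sum(subscores[c] for c in CATEGORIES) / len(CATEGORIES)

# Hypothetical subscores for two countries, for illustration only.
example = {
    "Estonia":       dict(zip(CATEGORIES, [95, 90, 100, 85, 100])),
    "United States": dict(zip(CATEGORIES, [60, 65, 40, 55, 45])),
}

for country, scores in example.items():
    print(f"{country}: composite score {composite_score(scores):.1f}")
```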

Even the two countries that have less competitive tax regimes than the U.S. – France and Portugal – have lower corporate tax rates than the U.S., at 34.4% and 31.5%, respectively. The average U.S. corporate rate across states, on the other hand, is 39.1%. This is the highest rate in the OECD, which has an average corporate tax rate of 24.8% across its 34 member countries. According to a report by KPMG, if the United Arab Emirates’ severance tax on oil companies were excluded, the U.S. average corporate tax rate would be the world’s highest.

Table 1.

The poor showing of the U.S. resulted from other countries recognizing the need to improve their competitive positions in an increasingly globalized world. Indeed, the only OECD member countries not to have cut their corporate tax rates since the onset of the new millennium are Chile, Norway, and, yes, the United States. The high U.S. corporate tax rate raises the cost of doing business not only in the U.S. but also overseas. The U.S., along with just five other OECD countries, imposes a “global tax” on profits earned overseas by domestically owned businesses. In contrast, Estonia, ranked 1st in the ITCI, does not tax profits earned internationally. Since profits earned overseas by U.S.-domiciled companies are already subject to taxes in the countries where they are earned, there is a clear incentive for American companies to try to avoid double taxation. Indeed, many of the largest American multinational corporations have established corporate centers overseas, where tax codes are less stringent, to avoid this additional tax.

The ITCI also reported a myriad of other reasons for the low ranking of the U.S., including poorly structured property taxes and onerously high income taxes on individuals. One major reason why the U.S. lags so far behind most of the industrialized world is simply the lack of serious tax code reforms since the Tax Reform Act of 1986.

The annual Doing Business report published by the World Bank has an even more expansive analysis that determines the tax competitiveness in 189 economies, and also provides an equally sobering look at the heavy taxes faced by business in the United States. (See accompanying Table 2.) One of the metrics it incorporates into the assessment is the “total tax rate.” The Doing Business report defines the total tax rate as “the taxes and mandatory contributions that a medium-size company must pay in the second year of operation as well as measures of the administrative burden of paying taxes and contributions.”

According to the rankings in the most recent Doing Business 2015 report (which reported total tax rates for the calendar year 2013), Macedonia had the lowest total tax rate in the world at 7.4% and was followed closely by Vanuatu at 8.5%. The United States, as in previous years, appears near the bottom of the list, at 126th out of 189, with a total tax rate of 43.8%.

Table 2.

The fact that both the ITCI and the Doing Business report, whose methodologies and calculations were developed independently of one another, rank the United States very low shows that tax rates in this country are non-neutral and uncompetitive no matter how they are measured. The message is clear and simple: taxes on corporations increase costs and decrease margins, lead to higher prices on goods, and ultimately hurt consumers and the development of any country.

As proposed in “Policy Priorities for the 114th Congress,” published by the Cato Institute, to increase the incentives of domestic firms to go into business and become competitive globally, the U.S. would have to drastically reduce its corporate tax rate. 

The Wall Street Journal just offered two articles in one day touting Robert Shiller’s cyclically adjusted price/earnings ratio (CAPE).  One of them, “Smart Moves in a Pricey Stock Market” by Jonathan Clements, concludes that “U.S. shares arguably have been overpriced for much of the past 25 years.” Identical warnings keep appearing, year after year, despite being endlessly wrong.

The Shiller CAPE assumes the P/E ratio must revert to some heroic 1881-2014 average of 16.6 (or, in Clements’ account, a 1946-1990 average of 15).  That assumption is completely inconsistent with the so-called “Fed model” observation that the inverted P/E ratio (the E/P ratio, or earnings yield) normally tracks the 10-year bond yield surprisingly closely.  From 1970 to 2014, the average E/P ratio was 6.62 percent and the average 10-year bond yield was 6.77 percent.

When I first introduced this “Fed model” relationship to Wall Street consulting clients in “The Stock Market Like Bonds” (March 1991), I suggested bond yields were about to fall because a falling E/P commonly preceded falling bond yields. And when the E/P turned up in 1993, bond yields obligingly jumped in 1994.

Since 2010, the E/P ratio has been unusually high relative to bond yields, which means the P/E ratio has been unusually low.  The gap between the earnings yield and the bond yield rose from 2.8 percentage points in 2010 to a peak of 4.4 in 2012.  Recycling my 1991 analysis, the wide 2012 gap suggested the stock market thought bond yields would rise, as they did, climbing from 1.8% in 2012 to 2.35% in 2013 and 2.54% in 2014.

On May 1, the trailing P/E ratio for the S&P 500 was 20.61, which translates into an E/P ratio of 4.85% (1 divided by 20.61). That is still high relative to a 10-year bond yield of 2.12%.  If the P/E fell to 15, as Shiller fans always predict, the E/P ratio would be 6.7%, which would indeed get us close to the Shiller “buy” signal of 6.47% in 1990.  But the 10-year bond yield in 1990 was 8.4%.  And the P/E ratio was so depressed because Texas crude jumped from $16 in late June 1990 to nearly $40 after Iraq invaded Kuwait. Oil price spikes always end in recession, as happened in 2008.
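For readers who want to trace the arithmetic, here is a minimal sketch in Python using only the figures quoted above; the gap is simply the earnings yield minus the 10-year bond yield.

```python
# Earnings-yield ("Fed model") arithmetic using the figures quoted in the text.

def earnings_yield(pe_ratio: float) -> float:
    """Return the E/P ratio in percent: the inverse of the P/E ratio."""
    return 100.0 / pe_ratio

trailing_pe = 20.61   # S&P 500 trailing P/E on May 1 (from the text)
bond_yield = 2.12     # 10-year Treasury yield in percent (from the text)

ep = earnings_yield(trailing_pe)   # roughly 4.85%
gap = ep - bond_yield              # roughly 2.7 percentage points

print(f"E/P = {ep:.2f}%, gap over the 10-year yield = {gap:.2f} points")
print(f"E/P if the P/E reverted to 15: {earnings_yield(15):.2f}%")  # ~6.67%
```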

Today’s wide 2.7 point gap between the high E/P ratio and low bond yield will not be closed by shoving the P/E ratio back down to Mr. Shiller’s idyllic level of the 1990 recession.  It is far more likely that the gap will be narrowed by bond yields rising. 

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

As Pope Francis focused this week on the moral issues of climate change (while largely ignoring the bigger moral issues that accompany fossil fuel restrictions), he pretty much took it as a given that climate change is “a scientific reality” requiring “decisive mitigation.” Concurrently, scientific events unfolding during the week were revealing a different story.

First and foremost, Roy Spencer, John Christy and William Braswell of the University of Alabama-Huntsville (UAH)—developers and curators of the original satellite-derived compilation of the temperature history of the earth’s atmosphere—released a new and improved version of their iconic data set. Bottom line: the temperature trend in the lower atmosphere from the start of the data (1979) through the present came in as 0.114°C/decade (compared with 0.14°C in the previous data version). The new warming trend is less than half what climate models run with increasing atmospheric carbon dioxide emissions project to have occurred.

While the discrepancy between real world observations and climate model projections of temperature rise in the lower atmosphere has been recognized for a number of years, the question has remained as to whether the “problem” lies within the climate models or the observations. With this new data release, the trend in the UAH data now matches very closely with the trend through an independent compilation of the satellite-temperature observations maintained by a team of researchers at Remote Sensing Systems (RSS). The convergence of the observed data sets is an indication the climate models are the odd man out.

As with most long-term, real-world observations, the data are covered in warts. The challenge posed to Spencer et al. was how to splice together remotely sensed data collected from a variety of instruments carried aboard a variety of satellites in unstable orbits—and produce a product robust enough for use in climate studies. The details as to how they did it are explained as clearly as possible in this post over at Spencer’s website (although still quite a technical post). The post provides good insight as to why raw data sets need to be “adjusted”—a lesson that should be kept in mind when considering the surface temperature compilations as well. In most cases, using raw data “as is” is an inherently improper thing to do, and the types of adjustments that are applied may vary based upon the objective.

Here is a summary of the new data set and what was involved in producing it:

Version 6 of the UAH MSU/AMSU global satellite temperature data set is by far the most extensive revision of the procedures and computer code we have ever produced in over 25 years of global temperature monitoring. The two most significant changes from an end-user perspective are (1) a decrease in the global-average lower tropospheric (LT) temperature trend from +0.140 C/decade to +0.114 C/decade (Dec. ’78 through Mar. ’15); and (2) the geographic distribution of the LT trends, including higher spatial resolution. We describe the major changes in processing strategy, including a new method for monthly gridpoint averaging; a new multi-channel (rather than multi-angle) method for computing the lower tropospheric (LT) temperature product; and a new empirical method for diurnal drift correction… The 0.026 C/decade reduction in the global LT trend is due to lesser sensitivity of the new LT to land surface skin temperature (est. 0.010 C/decade), with the remainder of the reduction (0.016 C/decade) due to the new diurnal drift adjustment, the more robust method of LT calculation, and other changes in processing procedures.

Figure 1 shows a comparison of the data using the new procedures with that derived from the old procedures. Notice that in the new dataset, the temperature anomalies since about 2003 are less than those from the previous version. This has the overall effect of reducing the trend when computed over the entirety of the record.
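As a rough illustration of how a trend like +0.114°C/decade is extracted from a monthly anomaly series, here is a minimal sketch that fits a straight line by ordinary least squares to synthetic data. The series below is invented for illustration; the actual UAH processing (inter-satellite calibration, diurnal drift correction, and so on) is far more involved than a simple linear fit.

```python
# Minimal sketch: estimating a linear trend (deg C per decade) from a monthly
# temperature-anomaly series with ordinary least squares. The anomalies here
# are synthetic noise around an assumed trend -- this is NOT the UAH data.
import numpy as np

rng = np.random.default_rng(0)
n_months = 436                      # Dec. 1978 through Mar. 2015, roughly
years = np.arange(n_months) / 12.0  # elapsed time in years

assumed_trend_per_year = 0.0114     # 0.114 C/decade, for illustration only
anomalies = assumed_trend_per_year * years + rng.normal(0.0, 0.15, n_months)

slope_per_year, intercept = np.polyfit(years, anomalies, 1)
print(f"fitted trend: {slope_per_year * 10:.3f} C/decade")
```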


Figure 1. Monthly global-average temperature anomalies for the lower troposphere from Jan. 1979 through March 2015 for both the old and new versions of LT. (Source: www.drroyspencer.com)

While this new version, admittedly, is not perfect, Spencer, Christy, and Braswell see it as an improvement over the old version. Note that this is not the official release, but rather a version the authors have released for researchers to examine and see if they can find anything that looks irregular that may raise questions as to the procedures employed. Spencer et al. expect a scientific paper on the new data version to be published sometime in 2016.

But unless something major comes up, the new satellite data are further evidence the earth is not warming as expected.  That means that, before rushing into “moral obligations” to attempt to alter the climate’s future course by restricting energy production, we perhaps ought to spend more time trying to better understand what it is we should be expecting in the first place.

One of the things we are told by the more alarmist crowd that we should expect from our fossil fuel burning is a large and rapid sea level rise, primarily a result of a melting of the ice sheets that rest atop Greenland and Antarctica. All too frequently we see news stories telling tales of how the melting in these locations is “worse than we expected.” Some soothsayers even attack the United Nations’ Intergovernmental Panel on Climate Change (IPCC) for being too conservative (of all things) when it comes to projecting future sea level rise. While the IPCC projects a sea level rise of about 18–20 inches from its mid-range emissions scenario over the course of this century, a vocal minority clamor that the rise will be upwards of 3 feet and quite possibly (or probably) greater. All the while, the sea level rise over the past quarter-century has been about 3 inches.

But as recent observations do little to dissuade the hardcore believers, perhaps model results (which they are seemingly more comfortable with) will be more convincing.

A new study available this week in the journal Geophysical Research Letters is described by author Miren Vizcaino and colleagues as “a first step towards fully-coupled higher resolution simulations with more advanced physics”—basically, a detailed ice sheet model coupled with a global climate model.

They ran this model combination with the standard IPCC emissions scenarios to assess Greenland’s contribution to future sea level rise. Here’s what they found:

The [Greenland ice sheet] volume change at year 2100 with respect to year 2000 is equivalent to 27 mm (RCP 2.6), 34 mm (RCP 4.5) and 58 mm (RCP 8.5) of global mean SLR.

Translating millimeters (mm) into inches gives this answer: a projected 21st-century sea level rise of 1.1 in. (for the low emissions scenario, RCP 2.6), 1.3 in. (for the low/mid scenario, RCP 4.5), and 2.3 in. (for the IPCC’s high-end emissions scenario, RCP 8.5). Some disaster.
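The unit conversion behind those numbers is simple; here is a minimal sketch using the millimeter figures quoted from the study (1 inch = 25.4 mm).

```python
# Convert the projected Greenland contributions to sea level rise (quoted
# above in millimeters) into inches: 1 inch = 25.4 mm.
MM_PER_INCH = 25.4

slr_mm = {"RCP 2.6": 27, "RCP 4.5": 34, "RCP 8.5": 58}

for scenario, mm in slr_mm.items():
    print(f"{scenario}: {mm} mm = {mm / MM_PER_INCH:.1f} in by 2100")
```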

As with any study, the authors attach some caveats:

The study presented here must be regarded as a necessary first step towards more advanced coupling of ice sheet and climate models at higher resolution, for instance with improved surface-atmosphere coupling (e.g., explicit representation of snow albedo evolution), less simplified ice sheet flow dynamics, and the inclusion of ocean forcing to Greenland outlet glaciers.

Even if they are off by 3–4 times, Greenland ice loss doesn’t seem to be much of a threat. Seems like it’s time to close the book on this imagined scare scenario.

And while imagination runs wild when it comes to linking carbon dioxide emissions to calamitous climate changes and extreme weather events (or even war and earthquakes),  imagination runs dry when it comes to explaining non-events (except when non-events string together to produce some sort of negative outcome [e.g., drought]).

Case in point: a new study looking into the record-long absence of major hurricane (category 3 or higher) strikes on the U.S. mainland, an absence that now exceeds nine years (the last major hurricane to hit the U.S. was Hurricane Wilma in late October 2005). The authors of the study, Timothy Hall of NASA’s Goddard Institute for Space Studies and Kelly Hereid of ACE Tempest Reinsurance, concluded that while a streak this long is rare, their results suggest “there is nothing unusual underlying the current hurricane drought. There’s no extraordinary lack of hurricane activity.” Basically, they concluded that it is “a case of good luck” rather than “any shift in hurricane climate.”

That is all well and good, and almost certainly the case. Of course, the same was true a decade ago when the United States was hit by seven major hurricanes over the course of two hurricane seasons (2004 and 2005)—an occurrence that spawned several prominent papers and endless discussion pointing the finger squarely at anthropogenic climate change. And the same is true for every hurricane that hits the United States, although this doesn’t stop someone, somewhere, from speculating to the media that the storm’s occurrence was “consistent with” expectations from a changing climate.

What struck us as odd about the Hall and Hereid paper is the lack of speculation as to how the ongoing record “drought” of major hurricane landfalls in the United States could be tied in with anthropogenic climate change. You can rest assured—and history will confirm—that if we had been experiencing a record run of hurricane landfalls, researchers would be falling all over themselves to draw a connection to human-caused global warming.

But the lack of anything bad happening? No way anyone wants to suggest that is “consistent with” expectations. According to Hall and Hereid:

A hurricane-climate shift protecting the US during active years, even while ravaging nearby Caribbean nations, would require creativity to formulate. We conclude instead that the admittedly unusual 9-year US Cat3+ landfall drought is a matter of luck. [emphasis added]

Right! A good string of weather is “a matter of luck” while bad weather is “consistent with” climate change.

It’s not as if it is very hard, or (despite the authors’ claim) requires much “creativity,” to come up with ways to construe a lack of major hurricane strikes on U.S. soil as “consistent with” anthropogenic climate change. In fact, there is plenty of material in the scientific literature that could be used to construct an argument that, under global warming, the United States should experience fewer hurricane landfalls. For a rundown, see p. 30 of our comments on the government’s National Assessment on Climate Change, or check out our piece titled “Global Savings: Billion-Dollar Weather Events Averted by Global Warming.”

It is not a lack of material but a lack of desire that keeps folks from drawing a potential link between human-caused climate change and good things occurring in the world.

References:

Hall, T., and K. Hereid. 2015. “The Frequency and Duration of US Hurricane Droughts.” Geophysical Research Letters, doi:10.1002/2015GL063652

Vizcaino, M., et al. 2015. “Coupled Simulations of Greenland Ice Sheet and Climate Change up to AD 2300.” Geophysical Research Letters, doi:10.1002/2014GL061142
