Feed aggregator

The OECD has just released a report offering “its perspective” on Sweden’s academic decline. Its perspective is too narrow. In launching the new report, OECD education head Andreas Schleicher declared that “It was in the early 2000s that the Swedish school system somehow seems to have lost its soul.” The OECD administers the international PISA test, which began in the year 2000.

Certainly Sweden’s academic performance has fallen since the early 2000s, but its decline was substantially faster in the preceding decade. PISA cannot shed light on this, but TIMSS—an alternative international test—can, having been introduced several years earlier. On the 8th grade mathematics portion of TIMSS, Sweden’s rate of decline between 1995 and 2003 was over five points per year. Between 2003 and 2011 it was less than two points per year. Still regrettable, but less grievously so.

Why is this timing important? Because Sweden introduced a nationwide public/private school choice program in 1992 and many critics blame that program for Sweden’s decline. This charge is hopelessly anachronistic. In 2003, at the end of the worst phase of the nation’s academic decline, public schools still enrolled 96% of students. Hence it must have been declining public school performance that brought down the national average. A 4% private sector could have had little effect.

What then can explain the country’s disappointing results?  Gabriel Sahlgren has some intriguing suggestions in a recent piece analyzing trends in Finland, Sweden, and Norway. For instance:

Something extreme clearly happened in Sweden in the mid-to-late 1990s, most probably due to the 1994 national curriculum that emphasised pupil-led methods, which decreased teacher-led instruction. [emphasis added]

If you happened to miss it last week, go catch Bill Keller’s extraordinary Marshall Project interview with David Simon, former Baltimore Sun reporter, creator of the crime drama “The Wire,” and longtime Drug War critic. A few highlights:

I guess there’s an awful lot to understand and I’m not sure I understand all of it. The part that seems systemic and connected is that the drug war — which Baltimore waged as aggressively as any American city — was transforming in terms of police/community relations, in terms of trust, particularly between the black community and the police department. Probable cause was destroyed by the drug war. …

Probable cause from a Baltimore police officer has always been a tenuous thing. It’s a tenuous thing anywhere, but in Baltimore, in these high crime, heavily policed areas, it was even worse. When I came on, there were jokes about, “You know what probable cause is on Edmondson Avenue? You roll by in your radio car and the guy looks at you for two seconds too long.” Probable cause was whatever you thought you could safely lie about when you got into district court.

Then at some point when cocaine hit and the city lost control of a lot of corners and the violence was ratcheted up, there was a real panic on the part of the government. And they basically decided that even that loose idea of what the Fourth Amendment was supposed to mean on a street level, even that was too much. Now all bets were off. Now you didn’t even need probable cause. The city council actually passed an ordinance that declared a certain amount of real estate to be drug-free zones. They literally declared maybe a quarter to a third of inner city Baltimore off-limits to its residents, and said that if you were loitering in those areas you were subject to arrest and search. Think about that for a moment: It was a permission for the police to become truly random and arbitrary and to clear streets any way they damn well wanted.

Former mayor (and later governor and presidential candidate) Martin O’Malley instituted a mass arrest policy made possible by the ready availability of humbles:

A humble is a cheap, inconsequential arrest that nonetheless gives the guy a night or two in jail before he sees a court commissioner. You can arrest people on “failure to obey,” it’s a humble. Loitering is a humble. These things were used by police officers going back to the ‘60s in Baltimore. It’s the ultimate recourse for a cop who doesn’t like somebody who’s looking at him the wrong way. And yet, back in the day, there was, I think, more of a code to it. If you were on a corner, you knew certain things would catch you a humble.  

“The drug war gives everybody permission to do anything.” One way Simon noticed things changing was that his own film crew members kept getting picked up:

…anybody who was slow to clear the sidewalk or who stayed seated on their front stoop for too long when an officer tried to roust them. Schoolteachers, Johns Hopkins employees, film crew people, kids, retirees, everybody went to the city jail. If you think I’m exaggerating look it up.

Under pressure from O’Malley to portray a crime reduction miracle, the BPD cooked its books to undercount serious crimes like rape and armed robbery while also going back to inflate crime numbers in previous years so as to simulate a bigger drop for which to take credit. Even as the arrest mill hummed, clearance rates for offenses like murder and aggravated assault were plummeting, the prolonged footwork needed to solve these crimes affording ambitious cops relatively few opportunities for overtime or advancement.

Meanwhile, the informal but understood street policing “code” was decaying. Under the old code, for example, “the rough ride [in the back of the van] was reserved for the guys who fought the police,” which Freddie Gray did not do, witnesses say.  The Baltimore Sun’s investigation of police misconduct payouts is frightening not so much because it shows patterns of abuse but because of its lack of patterns: “anyone and everyone” can wind up brutalized.

Policing in Baltimore may actually have bounced back from its low point, if Simon is correct, not only because newer police administrators are trying to refocus on serious crime rather than arrest numbers, but – crucially – because the public is now able to film the police: “The smartphone with its small, digital camera, is a revolution in civil liberties.” 

There is much more, in rich detail: which insults cops will informally shrug off, and which they won’t; why replacing white with African-American officers didn’t fix things; how the nightmare ends (“end the drug war”: it would help even if D.A.s just stopped paying cops overtime for penny-ante drug arrests.)  Read the whole thing.

 

When banks are in distress, it is important to assess how easily the bank’s capital cushion can absorb potential losses from troubled assets. To do this, I performed an analysis using Texas Ratios for Greece’s four largest banks, which control 88% of total assets in the banking system.

We use a little-known but very useful formula to determine the health of the Big Four. It is called the Texas Ratio. It was used during the U.S. Savings and Loan Crisis, which was centered in Texas. The Texas Ratio is the book value of all non-performing assets divided by equity capital plus loan loss reserves. Only tangible equity capital is included in the denominator. Intangible capital — like goodwill — is excluded.
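For readers who want to see the formula in action, here is a minimal sketch in Python. The figures are illustrative placeholders, not the actual balance-sheet numbers of any Greek bank.

```python
# Minimal sketch of the Texas Ratio calculation described above.
# The inputs are illustrative placeholders, not the Greek banks' actual figures.

def texas_ratio(non_performing_assets, tangible_equity, loan_loss_reserves):
    """Book value of non-performing assets divided by tangible equity
    capital plus loan loss reserves (intangibles such as goodwill excluded)."""
    return non_performing_assets / (tangible_equity + loan_loss_reserves)

# Hypothetical bank: 30 bn euros of non-performing assets against
# 25 bn of tangible equity and 10 bn of loan loss reserves.
print(f"Texas Ratio: {texas_ratio(30.0, 25.0, 10.0):.0%}")  # ~86%
```

A ratio approaching or exceeding 100% signals that troubled assets could overwhelm the bank’s loss-absorbing cushion.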

Despite the already worrisome numbers, the actual situation is far worse than even I had initially deduced. A deeper analysis of the numbers reveals that Greece’s largest banks include deferred tax assets as part of total equity in their financial statements. Deferred tax assets are created when banks are allowed to declare their losses at a later time, thereby reducing tax liabilities. This is problematic because these deferred tax assets are really just “phantom assets,” in the sense that the credits cannot be used (read: they are worthless) if the Greek banks continue to operate at a pretax loss.

Similar to its neighbors — Portugal, Spain, and Italy — Greece provides significant state support to its banks by offering credits for losses that can be deducted against future taxable profits. For the four largest banks, this type of support made up 38-61% of total equity (see accompanying chart).

Adjusting the Texas Ratio to account for the phantom assets yields much higher ratios. These indicate significantly higher risk of bank failures, barring a capital injection (see the accompanying chart).
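To make the deferred-tax-asset adjustment concrete, the sketch above can be extended as follows; the split between real equity and deferred tax assets is again hypothetical, chosen only to show the mechanics.

```python
# Extension of the sketch above: strip deferred tax assets ("phantom assets")
# out of reported equity before recomputing the ratio. Figures are hypothetical.

def adjusted_texas_ratio(non_performing_assets, reported_equity,
                         deferred_tax_assets, loan_loss_reserves):
    equity_ex_dta = reported_equity - deferred_tax_assets
    return non_performing_assets / (equity_ex_dta + loan_loss_reserves)

# If half of the hypothetical bank's 25 bn of reported equity were deferred tax assets:
print(f"Adjusted Texas Ratio: {adjusted_texas_ratio(30.0, 25.0, 12.5, 10.0):.0%}")  # ~133%
```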

The federal government runs more than 2,300 subsidy programs. One of the problems created by this armada of handouts is that many programs work at cross-purposes.

Government information programs urge women to breastfeed. This website says, “the cells, hormones, and antibodies in breastmilk protect babies from illness. This protection is unique and changes to meet your baby’s needs.” Breastfeeding, the government says, may protect babies against asthma, leukemia, obesity, ear infections, eczema, diarrhea, vomiting, lower respiratory infections, necrotizing enterocolitis, sudden infant death syndrome, and diabetes. 

The alternative to breastfeeding is baby formula. Some moms need to use formula, but you would think, given the superiority of breastmilk, that the government would not want to encourage formula. But that is exactly what the government does with the Women, Infants, and Children (WIC) program. According to the Wall Street Journal, the “largest single expense” in the $6 billion program is subsidies for formula. If you subsidize something, you get more of it. And, presumably, more formula means less breastmilk.

The government and probably every pediatrician tell moms to breastfeed if they can, yet the government provides huge subsidies for the alternative. “Huge” seems to be the correct word. The WSJ says that WIC provides benefits to the moms of half of all babies in the nation, and the program “accounts for well over half of all infant formula sold in the U.S.” That is remarkable.

Obviously then, ending WIC subsidies for formula would be a good way to trim the bloated federal budget. Another way to trim the budget would be to cut off people on WIC who earn more than the federal income limits, which is the focus of the WSJ article.

So WIC would be a good target for reforms by Republicans, who often rail against bloated spending and promise to eliminate deficits. Alas, rather than a take-charge reform agenda on WIC from the GOP, the WSJ captures just a quiet whimper:

“The focus will remain on preserving the intent of these programs, which is to ensure low-income children—and, in this case, mothers and infants in need—receive supplemental assistance to help protect against inadequate nutrition,” said Senate Agriculture Committee Chairman Pat Roberts (R., Kan.), who has a lead role in renewing the WIC law.

In March, we detailed reforms announced by Attorney General Eric Holder to federal asset forfeitures under the Bank Secrecy Act’s “structuring” law.  Those changes mirror an earlier policy shift by the Internal Revenue Service.  Unfortunately for some, those changes were not made retroactive, meaning that people whose property had been seized before the announcements, in ways that would violate the new policies, did not automatically have their property returned.

Lyndon McLellan, the owner of a North Carolina convenience store, has not been charged with a crime.  He has, however, had his entire business account, totaling $107,702.66, seized by the federal government.  As Mr. McLellan attempts to recover his money, he is now being represented by the Institute for Justice, which issued this release:

“This case demonstrates that the federal government’s recent reforms are riddled with loopholes and exceptions and fundamentally fail to protect Americans’ basic rights,” said Institute for Justice Attorney Robert Everett Johnson, who represents Lyndon. “No American should have his property taken by the government without first being convicted of a crime.”

In February 2015, during a hearing before the U.S. House of Representatives Ways & Means Oversight Subcommittee, North Carolina Congressman George Holding told IRS Commissioner John Koskinen that he had reviewed Lyndon’s case—without specifically naming it—and that there was no allegation of the kind of illegal activity required by the IRS’s new policy. The IRS Commissioner responded, “If that case exists, then it’s not following the policy.”

The government’s response to the notoriety Mr. McLellan’s case has received was nothing short of threatening.  After the hearing, Assistant U.S. Attorney Steven West wrote to Mr. McLellan’s attorney:

Whoever made [the case file] public may serve their own interest but will not help this particular case. Your client needs to resolve this or litigate it. But publicity about it doesn’t help. It just ratchets up feelings in the agency. My offer is to return 50% of the money. 

What “feelings in the agency” could possibly be “ratchet[ed] up” by highlighting a case in which the owner is accused of no wrongdoing while both the Department of Justice and the Internal Revenue Service have announced reforms to prevent these seizures from occurring?

Perhaps the government is sensitive to the avalanche of negative press that civil asset forfeiture has received over the past several years (thanks to the tireless efforts of organizations like the Institute for Justice and the ACLU).  Perhaps the government feels that the game is nearly up, after dozens of publicized cases of civil asset forfeiture abuse.

Cases like this show that the executive branch, now under a new Attorney General who has her own controversial civil forfeiture history, cannot be trusted to stay its own hand.  State and federal legislators must take the initiative, as some already have, if this abusive practice is going to end.

When Prime Minister Shinzo Abe visited Washington he brought plans for a more expansive international role for his country. But the military burden of defending Japan will continue to fall disproportionately on America.

As occupying power, the U.S. imposed the “peace constitution” on Tokyo, with Article Nine banning possession of a military. As the Cold War developed, however, Washington recognized that a rearmed Japan could play an important security role.

However, Japan’s governments hid behind that provision to cap military outlays and limit the Self-Defense Forces’ role, relying on American protection. That approach also suited Tokyo’s neighbors, which had suffered under Imperial Japan’s brutal occupation.

In recent years Japanese sentiment has shifted toward a more vigorous role out of fear of North Korea and China. This changing environment generated new bilateral defense “guidelines.”

Yet the focus is Japanese, not American security. In essence, the new standards affirm what should have been obvious all along—Japan will help America defend Japan. In contrast, there is nothing about Tokyo supporting U.S. defense other than as part of “cooperation for regional and global peace and security.”

This approach was evident in Prime Minister Abe’s speech to Congress, when he emphasized that Tokyo’s responsibility is to “fortify the U.S.-Japan alliance.” He said Japan would “take yet more responsibility for the peace and stability in the world,” but as examples mostly cited humanitarian and peacekeeping operations.

Worse, Japan’s military outlays were essentially flat over the last decade while Washington, and more ominously for Japan, the People’s Republic of China, dramatically increased military expenditures. The U.S. is expected to fill the widening gap.

Obviously Tokyo sees its job as non-combat, relatively costless and riskless social work that will enhance its international reputation. Even Tokyo’s potential new “security” duties appear designed to avoid combat—cyber warfare, reconnaissance, mine-sweeping, logistics.

As I point out in Forbes, “Washington’s job is to do anything bloody or messy. That is, deter and fight wars with other militaries, a task which the prime minister ignored. Indeed, the U.S. is expected to do even more to defend Japan, deploying new military equipment, for instance.”

While America has an obvious interest in Japan’s continued independence, no one imagines a Chinese attempt to conquer Tokyo. Rather, the most likely trigger for conflict today is the Senkaku Islands, a half dozen valueless pieces of rock. Abe so far has preferred confrontation to compromise—a stance reinforced by Washington’s guarantee.

Abe’s historical revisionism further inflames regional tensions. Abe addressed the historical controversy in his speech to Congress but more remains to be done.

U.S. officials appear to have forgotten the purpose of alliances. Abe was eloquent in stating why Japan enjoyed being allied with America. It isn’t evident what the U.S. receives in return.

After World War II the U.S. sensibly shielded allied states from totalitarian assault as they recovered. That policy succeeded decades ago. Now Washington should cede responsibility for defending its populous and prosperous allies.

America should remain a watchful and wary friend, prepared to act from afar against potentially hostile hegemonic threats. In the meantime Washington should let other states manage day-to-day disputes and controversies.

The U.S. should not tell Tokyo what to do. Rather, Washington should explain what it will not do. No promise of war on Japan’s behalf, no forward military deployment, no guarantee for Japanese commerce at sea, no Pentagon backing for contested territorial claims.

This would force the Japanese people to debate their security needs, set priorities, and pay the cost. Moreover, Tokyo would have added incentive to improve its relationships with neighboring states.

After 70 years the U.S. should stop playing globocop, especially in regions where powerful, democratic friends such as Japan can do so much more to defend themselves and their neighborhoods. This would be the best way to enhance security and stability not only of the Asia-Pacific but also of America, which is Washington’s highest responsibility.

Last week, the Department of Justice (DOJ) announced a $20 million police body camera pilot funding scheme to assist law enforcement agencies in developing body camera programs. In the wake of the killings of Michael Brown, Walter Scott, and Freddie Gray there has been renewed debate on police accountability. Unsurprisingly, body cameras feature heavily in this debate. Yet, despite the benefits of police body cameras, we ought to be wary of federal top-down body camera funding programs, which have been supported across the political spectrum.

The $20 million program is part of a three-year $75 million police body camera initiative, which was announced by the Obama administration shortly after the news that Darren Wilson, the officer who shot and killed Michael Brown in Ferguson, Missouri, would not be indicted. It is undoubtedly the case that if Wilson had been wearing a body camera, there would be fewer questions about the events leading up to and including his killing of Brown. And, while there are questions about the extent to which police body cameras prompt some “civilizing effect” on police, the footage certainly provides welcome additional evidence in investigations relating to police misconduct, thereby improving transparency and accountability.

Democratic presidential candidate Hillary Clinton agrees that body cameras improve transparency and accountability. In a speech on criminal justice last week she said that she wants to extend President Obama’s body camera funding program: “We should make sure every police department in the country has body cameras to record interactions between officers on patrol and suspects.” Clinton did not provide any details about her proposed body camera program, but it certainly sounds like it would be more expensive than Obama’s.

On the other side of the political spectrum a more detailed police body camera proposal emerged. In March, Republican presidential candidate Sen. Rand Paul (R-KY) co-sponsored a body camera bill with Sen. Brian Schatz (D-HI) that would establish a federal pilot funding program for police body cameras. I wrote last month about some of the worrying aspects of the bill, such as the requirement that the entities requesting body camera funding publish privacy policies “in accordance with the open records laws, if any, of the State.” This means that Paul and Schatz’s bill could provide financial assistance to departments that are not subject to policies conducive to improved law enforcement transparency and accountability.

Given that law enforcement agencies can propose bad body camera policies and that body cameras can impose a fiscal burden on states it is not hard to see why federal funding for police body cameras might be appealing. But it is important to keep in mind that while the DOJ does require that “a strong BWC policy framework” be in place before body cameras are purchased through the recently announced program, what constitutes a “strong BWC policy framework” is not made clear. The DOJ document which outlines eligibility for the grants does state that law enforcement agencies will have to develop or build on a policy which includes the “Implementation of appropriate privacy policies that at a minimum addresses BWC program issues involving legal liabilities of release of information, civil rights, domestic violence, juveniles, and victims’ groups.” However, the document includes few specific details about what policies will have to include in order to be deemed to have addressed these issues.  

There are numerous policy concerns associated with police body cameras that must be adequately addressed if they are to improve transparency and accountability. A good body camera policy will outline (among other things) when a police body camera must be on, what footage can be requested, how much of that footage can be released to the public, how long the footage is stored, what the punishment will be when officers fail to turn their cameras on, what information will be redacted from footage when it is released, and whether police will be able to view footage before speaking to investigators.

It might be the case that the Bureau of Justice Assistance, which will administer the grants, will require the best police body camera policies so far proposed. But the fact that implementation of “appropriate privacy policies” is a condition for funding means that some law enforcement agencies may adopt privacy policies in order to receive funding rather than because they provide the best privacy protections.

If the DOJ is going to take part in the ongoing debate on police body camera policy it shouldn’t provide a financial incentive for the adoption of its policies. When discussing the best policies for a relatively new technology such as body cameras we ought to consider suggestions from a variety of sources, but none of these suggestions should be accompanied by financial assistance, which could adversely influence the consideration of policy. 

The federal government operates the air traffic control (ATC) system as an old-fashioned bureaucracy, even though ATC is a high-tech business. It’s as if the government took over Apple Computer and tried to design breakthrough products. The government would surely screw it up, which is the situation today with ATC run by the Federal Aviation Administration (FAA).

The Washington Post reports:

A day after the Federal Aviation Administration celebrated the latest success in its $40 billion modernization of the air-traffic control system, the agency was hit Friday by the most scathing criticism to date for the pace of its efforts.

The FAA has frustrated Congress and been subject to frequent critical reports as it struggles to roll out the massive and complex system called NextGen, but the thorough condemnation in a study released Friday by the National Academies was unprecedented.

Mincing no words, the panel of 10 academic experts brought together by the academy’s National Research Council (NRC) said the FAA was not delivering the system that had been promised and should “reset expectations” about what it is delivering to the public and the airlines that use the system.

The “success” the WaPo initially refers to is a component of NextGen that was four years behind schedule and millions of dollars over budget. That is success for government work, I suppose.

The NRC’s findings come on the heels of other critical reports and years of FAA failings. The failings have become so routine—and the potential benefits of improved ATC so large— that even moderate politicians, corporate heads, and bureaucratic insiders now support major reforms:

“We will never get there on the current path,” Rep. Bill Shuster (R-Pa.), chairman of the House Transportation Committee, said two months ago at a roundtable discussion on Capitol Hill. “We’ve spent $6 billion on NextGen, but the airlines have seen few benefits.”

American Airlines chief executive Doug Parker added, “FAA’s modernization efforts have been plagued with delays.”

And David Grizzle, former head of the FAA’s air-traffic control division, said taking that division out of FAA hands “is the only means to create a stable” future for the development of NextGen.

The reform we need is ATC privatization. Following the leads of Canada and Britain, we should move the entire ATC system to a private and self-supporting nonprofit corporation. The corporation would cover its costs by generating revenues from customers—the airlines—which would make it more responsible for delivering results.

Here is an interesting finding from the NRC report:  “Airlines are not motivated to spend money on equipment and training for NextGen.” Apparently, the airlines do not trust the government to do its part, and so progress gets stalled because companies cannot be sure their investments will pay off. So an advantage of privatization would be to create a more trustworthy ATC partner for the users of the system.

ATC privatization should be an opportunity for Democrats and Republicans to forge a bipartisan legislative success. In Canada, the successful ATC privatization was enacted by a Liberal government and supported by the subsequent Conservative government. So let’s use the Canadian system as a model, and move ahead with ATC reform and modernization.

Many state legislatures are proposing to expand E-Verify – a federal government-run electronic system that allows or forces employers to check the identity of new hires against a government database.  In a perfect world, E-Verify tells employers whether the new employee can legally be hired.  In our world, E-Verify is a notoriously error-prone and unreliable system.

E-Verify mandates vary considerably across states.  Currently, Alabama, Arizona, Mississippi and South Carolina have across-the-board mandates for all employers.  The state governments of Georgia, Utah, and North Carolina force all businesses with at least 10, 15, and 25 employees, respectively, to use E-Verify.  Florida, Indiana, Missouri, Nebraska, Oklahoma, Pennsylvania and Texas mandate E-Verify for public employees and state contractors, while Idaho and Virginia mandate E-Verify for public employees. The remaining states either have no state-wide mandates or, in the case of California, limit how E-Verify can be used by employers.

Despite E-Verify’s wide use in the states and its problems, some state legislatures are considering forcing it on every employer within their respective states.

In late April, North Carolina’s House of Representatives passed a bill (HB 318) 80-39 to lower the threshold for mandated E-Verify to businesses with five or more employees.  HB 318 is now moving on to the North Carolina Senate, where it could pass.  Nevada’s AB 172 originally included an E-Verify mandate that the bill’s author removed during the amendment process. Nebraska’s LB611 would have mandated E-Verify for all employers in the state.  LB611 has stalled since a hostile hearing in February.

E-Verify imposes a large economic cost on American workers and employers, does little to halt unlawful immigration because it fails to turn off the “jobs magnet,” and is an expansionary threat to American liberties.  Those harms are great while the benefits are uncertain – at best.  At a minimum, state legislatures should thoroughly examine the costs and supposed benefits of E-Verify before expanding or enacting mandates.

Scott Platton helped to write this blog post.

When U.S. Congressman Robert C. “Bobby” Scott (D-VA) and U.S. Senator Patty Murray (D-WA) introduced the Raise the Wage Act on April 30, they promised that their bill would “raise wages for nearly 38 million American workers.” Their bill would also phase out the subminimum tipped wage and index the minimum wage to median wage growth.

With rhetorical flourish, Sen. Murray said, “Raising the minimum wage to $12 by 2020 is a key component to helping more families make ends meet, expanding economic security, and growing our economy from the middle out, not the top down.”

The fact sheet that accompanied the bill claims that passing the Raise the Wage Act would reduce poverty and benefit low-wage workers, especially minorities. Indeed, it is taken as given that the Act “would give 37 percent of African American workers a raise”—by the mere stroke of a legislative pen. It is also assumed that “putting more money into the pockets of low-wage workers stimulates consumer demand and strengthens the economy for all Americans.”

The reality is that whenever wages are artificially pushed above competitive market levels jobs will be destroyed, unemployment will increase for lower-skilled workers, and those effects will be stronger in the long run than in the short run.  The least productive workers will be harmed the most as employers substitute new techniques that require fewer low-skilled workers.  There will be less full-time employment for those workers and their benefits will be cut over time.  That is the logic of the market price system.

To deny that logic and, hence, to ignore the law of demand, is to destroy the opportunities that would have otherwise existed.  The minimum wage law increases the price of labor without doing anything to increase a worker’s skills or experience or other traits that would allow that worker to remain employed and earn higher wages in the future.  Moreover, if that worker loses her job because she is priced out of the labor market, her income is zero.

Some workers, particularly those with slightly higher skills, more experience, or better work habits, may retain their jobs at the higher minimum wage, but other workers will lose their jobs or won’t be able to find jobs.  If workers lose their jobs, it is against the minimum wage law to offer to work at the old wage rate—or for employers to hire at that wage rate.  Hence, the minimum wage law restricts freedom of contract and prevents many workers from climbing the income ladder.

Contrary to what proponents of the minimum wage promise, an increase in the minimum wage cannot benefit all workers or all Americans.  Workers who lose their jobs or become unemployable at the higher minimum wage will have lower—not higher—real incomes.  They will have less, not more, spending power.

Linking the minimum wage to the growth of the median wage is a sure way to permanently block some lower-skilled workers out of the market and keep them in poverty.

Proponents of the Raise the Wage Act justify the higher minimum by arguing that inflation has eroded the purchasing power of the nominal minimum wage.  The fact sheet states that “the real value of the minimum wage … is down by more than 30 percent from its peak value in the late 1960s.”  That comparison is largely irrelevant.  What matters is whether the nominal minimum wage is above the nominal market wage rate for low-skilled workers.

No one knows ex ante what the market-clearing wage for low-skilled workers should be, but we do know that the higher the minimum wage is above the prevailing market wage, the greater the number of jobs that will be lost, the higher the rate of unemployment for that category of labor, and the slower the job growth.  We also know that the minimum wage is an ineffective means to reduce poverty, and that an increase in the minimum wage would benefit mostly non-poor families.  

Many studies provide strong empirical evidence in support of these effects.  For a summary of some of the many studies that show the negative effects of minimum wage laws, especially in the longer run, see “Minimum Wages: A Poor Way to Reduce Poverty” (Joseph J. Sabia), “The Minimum Wage and the Great Recession: Evidence of Effects on the Employment and Income Trajectories of Low-Skilled Workers” (Jeffrey Clemens and Michael Wither), and “The Minimum Wage Delusion and the Death of Common Sense” (James A. Dorn).

Black teen unemployment is abysmal, especially for males.  Raising the minimum wage will make it more so.  Poverty is best reduced by gaining education and experience, not by mandating higher federal minimum wages.  There is no free lunch.  Congress can’t promise workers a pay raise without adverse consequences for those who are priced out of the market.  A more accurate name for the Raise the Wage Act would be the “Job Destruction Act”—under which Congress makes it illegal to hire low-skilled workers at a rate less than the legal minimum wage—even if they are willing to work at the free-market wage rate.

The Raise the Wage Act is a feel-good bill, not an effective way to reduce poverty or create jobs for low-skilled workers.  Removing barriers to competition and entrepreneurship, lowering taxes on labor and capital, improving educational opportunities, and lowering inner-city crime would do more to help the poor than rigging the price of labor and making promises that can’t be kept.

The Tax Foundation released its inaugural “International Tax Competitiveness Index” (ITCI) on September 15th, 2014. The United States was ranked an abysmal 32nd out of the 34 OECD member countries for the year 2014. (See accompanying Table 1.) The European welfare states such as Norway, Sweden and Denmark, with their large social welfare systems, still managed to have less burdensome tax systems on local businesses than the U.S. The U.S. is even ranked below Italy, the country that has had such a pervasive problem with tax evasion that the head of its Agency of Revenue (roughly equivalent to the Internal Revenue Service in the United States) recently joked that Italians don’t pay taxes because they are Catholic and hence are used to “gaining absolution.” In fact, according to the ranking, only France and Portugal have the dubious honor of operating less competitive tax systems than the United States.

The ITCI measures “the extent to which a country’s tax system adheres to two important principles of tax policy: competitiveness and neutrality.” The competitiveness of a tax system can be measured by the overall tax rates faced by domestic businesses operating within the country. In the words of the Tax Foundation, when tax rates are too high, this “drives investment elsewhere, leading to slower economic growth.” Tax competitiveness is measured from 40 different variables across five different categories: consumption taxes, individual taxes, corporate income taxes, property taxes, and the treatment of foreign earnings. Tax neutrality, the other principle taken into account when composing the ITCI, refers to a “tax code that seeks to raise the most revenue with the fewest economic distortions.” This would mean that tax systems are fair and equally targeted towards all firms and industries, with no tax breaks for any specific business activity. A neutral tax system would also limit the rates of, among others, capital gains and dividend taxes, all of which encourage consumption at the expense of savings and investment.

Even the two countries that have less competitive tax regimes than the U.S. – France and Portugal – have lower corporate tax rates than the U.S., at 34.4% and 31.5%, respectively. The U.S. corporate rate on average across states, on the other hand, is at 39.1%. This is the highest rate in the OECD, which has an average corporate tax rate of 24.8% across the 34 member countries. According to a report by KPMG, if the United Arab Emirates’ severance tax on oil companies was ignored, the U.S. average corporate tax rate would be the world’s highest.

Table 1.

The poor showing of the U.S. resulted from other countries recognizing the need to improve their competitive position in an increasingly globalized world. Indeed, the only OECD member countries not to have cut their corporate tax rates since the onset of the new millennium are Chile, Norway, and, yes, the United States. The high U.S. corporate tax rate not only raises the cost of doing business in the U.S., but also overseas. The U.S., along with just five other OECD countries, imposes a “global tax” on profits earned overseas by domestically-owned businesses. In contrast, Estonia, ranked 1st in the ITCI, does not tax any profit earned internationally. Since profits earned overseas by U.S.-domiciled companies are already subject to taxes in the countries where they are earned, there is a clear incentive for American companies to try to avoid double taxation. Indeed, many of the largest American multinational corporations have established corporate centers overseas, where tax codes are less stringent, to avoid this additional tax.

The ITCI also reported a myriad of other reasons for the low ranking of the U.S., including poorly structured property taxes and onerously high income taxes on individuals. One major reason why the U.S. lags so far behind most of the industrialized world is simply the lack of serious tax code reforms since the Tax Reform Act of 1986.

The annual Doing Business report published by the World Bank has an even more expansive analysis that determines the tax competitiveness in 189 economies, and also provides an equally sobering look at the heavy taxes faced by business in the United States. (See accompanying Table 2.) One of the metrics it incorporates into the assessment is the “total tax rate.” The Doing Business report defines the total tax rate as “the taxes and mandatory contributions that a medium-size company must pay in the second year of operation as well as measures of the administrative burden of paying taxes and contributions.”

According to the rankings in the most recent Doing Business 2015 report (which reported total tax rates for the calendar year 2013), Macedonia had the lowest total tax rate in the world at 7.4% and was followed closely by Vanuatu at 8.5%. The United States, as in previous years, appears near the bottom of the list, at 126th out of 189, with a total tax rate of 43.8%.

Table 2.

The fact that both the ITCI and Doing Business report, whose methodologies and calculations were conducted independent of one another, rank the United States very low shows that the tax rates in this country are non-neutral and uncompetitive, no matter how they are measured. The message is clear, and very simple: taxes on corporations increase costs and decrease margins, and lead to price increases on goods and ultimately hurt the consumer and the development of any country.

As proposed in “Policy Priorities for the 114th Congress,” published by the Cato Institute, to increase the incentives of domestic firms to go into business and become competitive globally, the U.S. would have to drastically reduce its corporate tax rate. 

The Wall Street Journal just offered two articles in one day touting Robert Shiller’s cyclically adjusted price/earnings ratio (CAPE).  One of them, “Smart Moves in a Pricey Stock Market” by Jonathan Clements, concludes that “U.S. shares arguably have been overpriced for much of the past 25 years.” Identical warnings keep appearing, year after year, despite being endlessly wrong.

The Shiller CAPE assumes the P/E ratio must revert to some heroic 1881-2014 average of 16.6 (or, in Clements’ account, a 1946-1990 average of 15).  That assumption is completely inconsistent with the so-called “Fed model” observation that the inverted P/E ratio (the E/P ratio, or earnings yield) normally tracks the 10-year bond yield surprisingly closely.  From 1970 to 2014, the average E/P ratio was 6.62 percent and the average 10-year bond yield was 6.77 percent.

When I first introduced this “Fed Model” relationship to Wall Street consulting clients in “The Stock Market Like Bonds,” March 1991, I suggested bond yields were about to fall because a falling E/P commonly preceded falling bond yields. And when the E/P turned up in 1993, bond yields obligingly jumped in 1994.

Since 2010, the E/P ratio has been unusually high relative to bond yields, which means the P/E ratio has been unusually low.  The gap between the earnings yield and bond yield rose from 2.8 percentage points in 2010 to a peak of 4.4 in 2012.  Recycling my 1991 analysis, the wide 2012 gap suggested the stock market thought bond yields would rise, as they did, from 1.8% in 2012 to 2.35% in 2013 and 2.54% in 2014.

On May 1, the trailing P/E ratio for the S&P 500 was 20.61, which translates into an E/P ratio of 4.85 percent (1 divided by 20.61). That is still high relative to a 10-year bond yield of 2.12%.  If the P/E fell to 15, as Shiller fans always predict, the E/P ratio would be 6.7, which would indeed get us close to the Shiller “buy” signal of 6.47 in 1990.  But the 10-year bond yield in 1990 was 8.4%.  And the P/E ratio was so depressed because Texas crude jumped from $16 in late June 1990 to nearly $40 after Iraq invaded Kuwait. Oil price spikes always end in recession, including 2008.
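For anyone who wants to replay the arithmetic, here is a back-of-the-envelope version in Python using only the figures quoted above; it is a sanity check, not a valuation model.

```python
# Back-of-the-envelope check of the figures quoted above.
trailing_pe = 20.61        # trailing P/E for the S&P 500 on May 1
bond_yield = 2.12          # 10-year Treasury yield, in percent

earnings_yield = 100.0 / trailing_pe        # E/P ratio, in percent (~4.85)
gap = earnings_yield - bond_yield           # "Fed model" gap (~2.7 points)

print(f"Earnings yield: {earnings_yield:.2f}%  Gap: {gap:.1f} points")

# If the P/E reverted to 15, as CAPE proponents expect:
print(f"Earnings yield at P/E of 15: {100.0 / 15:.1f}%")  # ~6.7%
```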

Today’s wide 2.7-point gap between the high E/P ratio and low bond yield will not be closed by shoving the P/E ratio back down to Mr. Shiller’s idyllic level of the 1990 recession.  It is far more likely that the gap will be narrowed by bond yields rising.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

As Pope Francis focused this week on the moral issues of climate change (while largely ignoring the bigger moral issues that accompany fossil fuel restrictions), he pretty much took it as a given that climate change is “a scientific reality” that requires “decisive mitigation.” Concurrently, unfolding scientific events during the week were revealing a different story.

First and foremost, Roy Spencer, John Christy and William Braswell of the University of Alabama-Huntsville (UAH)—developers and curators of the original satellite-derived compilation of the temperature history of the earth’s atmosphere—released a new and improved version of their iconic data set. Bottom line: the temperature trend in the lower atmosphere from the start of the data (1979) through the present came in at 0.114°C/decade (compared with 0.14°C/decade in the previous data version). The new warming trend is less than half of what climate models run with increasing atmospheric carbon dioxide emissions project to have occurred.
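For those curious how a figure like 0.114°C/decade is produced, the usual approach is an ordinary least-squares fit through the monthly anomalies. Here is a minimal sketch using synthetic data rather than the actual UAH record.

```python
# Minimal sketch of computing a linear trend in deg C per decade from a monthly
# anomaly series. The data below are synthetic, not the actual UAH v6 record.
import numpy as np

rng = np.random.default_rng(0)
n_months = (2015 - 1979) * 12 + 3                 # Jan 1979 through Mar 2015
t_years = np.arange(n_months) / 12.0              # time in years since the start
anomalies = 0.0114 * t_years + rng.normal(0.0, 0.1, n_months)  # toy series

slope_per_year, _ = np.polyfit(t_years, anomalies, 1)   # ordinary least squares
print(f"Trend: {slope_per_year * 10:.3f} deg C/decade")  # ~0.11
```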

While the discrepancy between real world observations and climate model projections of temperature rise in the lower atmosphere has been recognized for a number of years, the question has remained as to whether the “problem” lies within the climate models or the observations. With this new data release, the trend in the UAH data now matches very closely with the trend through an independent compilation of the satellite-temperature observations maintained by a team of researchers at Remote Sensing Systems (RSS). The convergence of the observed data sets is an indication the climate models are the odd man out.

As with most long-term, real-world observations, the data are covered in warts. The challenge posed to Spencer et al. was how to splice together remotely sensed data collected from a variety of instruments carried aboard a variety of satellites in unstable orbits—and produce a product robust enough for use in climate studies. The details as to how they did it are explained as clearly as possible in this post over at Spencer’s website (although still quite a technical post). The post provides good insight as to why raw data sets need to be “adjusted”—a lesson that should be kept in mind when considering the surface temperature compilations as well. In most cases, using raw data “as is” is an inherently improper thing to do, and the types of adjustments that are applied may vary based upon the objective.

Here is a summary of the new data set and what was involved in producing it:

Version 6 of the UAH MSU/AMSU global satellite temperature data set is by far the most extensive revision of the procedures and computer code we have ever produced in over 25 years of global temperature monitoring. The two most significant changes from an end-user perspective are (1) a decrease in the global-average lower tropospheric (LT) temperature trend from +0.140 C/decade to +0.114 C/decade (Dec. ’78 through Mar. ’15); and (2) the geographic distribution of the LT trends, including higher spatial resolution. We describe the major changes in processing strategy, including a new method for monthly gridpoint averaging; a new multi-channel (rather than multi-angle) method for computing the lower tropospheric (LT) temperature product; and a new empirical method for diurnal drift correction… The 0.026 C/decade reduction in the global LT trend is due to lesser sensitivity of the new LT to land surface skin temperature (est. 0.010 C/decade), with the remainder of the reduction (0.016 C/decade) due to the new diurnal drift adjustment, the more robust method of LT calculation, and other changes in processing procedures.

Figure 1 shows a comparison of the data using the new procedures with that derived from the old procedures. Notice that in the new dataset, the temperature anomalies since about 2003 are less than those from the previous version. This has the overall effect of reducing the trend when computed over the entirety of the record.

 

Figure 1. Monthly global-average temperature anomalies for the lower troposphere from Jan. 1979 through March 2015 for both the old and new versions of LT. (Source: www.drroyspencer.com)

While this new version, admittedly, is not perfect, Spencer, Christy, and Braswell see it as an improvement over the old version. Note that this is not the official release, but rather a version the authors have released for researchers to examine and see if they can find anything that looks irregular that may raise questions as to the procedures employed. Spencer et al. expect a scientific paper on the new data version to be published sometime in 2016.

But unless something major comes up, the new satellite data are further evidence the earth is not warming as expected.  That means that, before rushing into “moral obligations” to attempt to alter the climate’s future course by restricting energy production, we perhaps ought to spend more time trying to better understand what it is we should be expecting in the first place.

One of the things we are told by the more alarmist crowd that we should expect from our fossil fuel burning is a large and rapid sea level rise, primarily a result of a melting of the ice sheets that rest atop Greenland and Antarctica. All too frequently we see news stories telling tales of how the melting in these locations is “worse than we expected.” Some soothsayers even attack the United Nations’ Intergovernmental Panel on Climate Change (IPCC) for being too conservative (of all things) when it comes to projecting future sea level rise. While the IPCC projects a sea level rise of about 18–20 inches from its mid-range emissions scenario over the course of this century, a vocal minority clamor that the rise will be upwards of 3 feet and quite possibly (or probably) greater. All the while, the sea level rise over the past quarter-century has been about 3 inches.

But as recent observations do little to dissuade the hardcore believers, perhaps model results (which they are seemingly more comfortable with) will be more convincing.

A new study available this week in the journal Geophysical Research Letters is described by author Miren Vizcaino and colleagues as “a first step towards fully-coupled higher resolution simulations with more advanced physics”—basically, a detailed ice sheet model coupled with a global climate model.

They ran this model combination with the standard IPCC emissions scenarios to assess Greenland’s contribution to future sea level rise. Here’s what they found:

The [Greenland ice sheet] volume change at year 2100 with respect to year 2000 is equivalent to 27 mm (RCP 2.6), 34 mm (RCP 4.5) and 58 mm (RCP 8.5) of global mean SLR.

Translating millimeters (mm) into inches gives this answer: a projected 21st century sea level rise of 1.1 in. (for the low emissions scenario; RCP 2.6), 1.3 in. (for the low/mid scenario; RCP 4.5), and 2.3 in. (for the IPCC’s high-end emissions scenario; RCP 8.5). Some disaster.
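The unit conversion is easy to verify; a few lines of Python, with the scenario labels taken from the quote above:

```python
# Converting the quoted Greenland contributions from millimeters to inches.
MM_PER_INCH = 25.4
for scenario, mm in [("RCP 2.6", 27), ("RCP 4.5", 34), ("RCP 8.5", 58)]:
    print(f"{scenario}: {mm} mm = {mm / MM_PER_INCH:.1f} in")
# RCP 2.6: 1.1 in; RCP 4.5: 1.3 in; RCP 8.5: 2.3 in
```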

As with any study, the authors attach some caveats:

The study presented here must be regarded as a necessary first step towards more advanced coupling of ice sheet and climate models at higher resolution, for instance with improved surface-atmosphere coupling (e.g., explicit representation of snow albedo evolution), less simplified ice sheet flow dynamics, and the inclusion of ocean forcing to Greenland outlet glaciers.

Even if they are off by 3–4 times, Greenland ice loss doesn’t seem to be much of a threat. Seems like it’s time to close the book on this imagined scare scenario.

And while imagination runs wild when it comes to linking carbon dioxide emissions to calamitous climate changes and extreme weather events (or even war and earthquakes),  imagination runs dry when it comes to explaining non-events (except when non-events string together to produce some sort of negative outcome [e.g., drought]).

Case in point, a new study looking into the record-long absence of major hurricane (category 3 or higher) strikes on the U.S. mainland—an absence that exceeds nine years (the last major hurricane to hit the U.S. was Hurricane Wilma in late-October 2005). The authors of the study, Timothy Hall of NASA’s Goddard Institute for Space Studies and Kelly Hereid from ACE Tempest Reinsurance, concluded that while a streak this long is rare, their results suggest “there is nothing unusual underlying the current hurricane drought. There’s no extraordinary lack of hurricane activity.” Basically they concluded that it’s “a case of good luck” rather than “any shift in hurricane climate.”

That is all well and good, and almost certainly the case. Of course, the same was true a decade ago when the United States was hit by seven major hurricanes over the course of two hurricane seasons (2004 and 2005)—an occurrence that spawned several prominent papers and endless discussion pointing the finger squarely at anthropogenic climate change. And the same is true for every hurricane that hits the United States, although this doesn’t stop someone, somewhere, from speculating to the media that the storm’s occurrence was “consistent with” expectations from a changing climate.

What struck us as odd about the Hall and Hereid paper is the lack of speculation as to how the ongoing record “drought” of major hurricane landfalls in the United States could be tied in with anthropogenic climate change. You can rest assured—and history will confirm—that if we had been experiencing a record run of hurricane landfalls, researchers would be falling all over themselves to draw a connection to human-caused global warming.

But the lack of anything bad happening? No way anyone wants to suggest that is “consistent with” expectations. According to Hall and Hereid:

A hurricane-climate shift protecting the US during active years, even while ravaging nearby Caribbean nations, would require creativity to formulate. We conclude instead that the admittedly unusual 9-year US Cat3+ landfall drought is a matter of luck. [emphasis added]

Right! A good string of weather is “a matter of luck” while bad weather is “consistent with” climate change.

It’s not like it’s very hard, or (despite the authors’ claim) requires much “creativity,” to come up with ways to construe a lack of major hurricane strikes on U.S. soil as “consistent with” anthropogenic climate change. In fact, there is plenty of material in the scientific literature that could be used to construct an argument that under global warming, the United States should experience fewer hurricane landfalls. For a rundown, see p. 30 of our comments on the government’s National Assessment on Climate Change, or check out our piece titled, “Global Savings: Billion-Dollar Weather Events Averted by Global Warming.”

It is not a lack of material, but rather a lack of desire, that keeps folks from drawing a potential link between human-caused climate change and good things occurring in the world.

References:

Hall, T., and K. Hereid. 2015. “The Frequency and Duration of US Hurricane Droughts.” Geophysical Research Letters, doi:10.1002/2015GL063652

Vizcaino, M. et al. 2015. “Coupled Simulations of Greenland Ice Sheet and Climate Change up to AD 2300.” Geophysical Research Letters, doi:10.1002/2014GL061142

While the Obama administration has focused on tax increases over the years, Canada has focused on tax cuts. The new Canadian budget, out a couple weeks ago, summarized some of the progress that they have made.

The budget says,

The government’s low-tax plan is also giving businesses strong incentives to invest in Canada. This helps the economy grow, spurs job creation, and raises Canada’s standard of living.

That is a refreshing attitude. While the U.S. government’s approach has been to penalize businesses and treat them as a cash box to be raided, Canada’s approach has been to reduce tax burdens and spur growth to the benefit of everybody.

A chart in the new budget—reproduced below the jump—shows that Canada now has the lowest marginal effective tax rate on business investment among major economies. It also shows that the U.S. tax rate of 34.7 percent is almost twice the Canadian rate of 17.5 percent.

These “effective” tax rates take into account stated or statutory rates, plus various tax base factors such as depreciation schedules. Skeptics of corporate tax rate cuts in this country often say that while the United States has a high statutory tax rate of 40 percent, we have so many loopholes that our effective rate is low. The new Canadian estimates show that is not true: the United States has both a high statutory rate (which spawns tax avoidance) and a high effective rate (which kills investment).

For the solution to this problem, see here.

For the people of China, there’s good news and bad news.

The good news, as illustrated by the chart below, is that economic freedom has increased dramatically since 1980. This liberalization has lifted hundreds of millions from abject poverty.

 

The bad news is that China still has a long way to go if it wants to become a rich, market-oriented nation. Notwithstanding big gains since 1980, it still ranks in the lower-third of nations for economic freedom.

Yes, there’s been impressive growth, but it started from a very low level. As a result, per-capita economic output is still just a fraction of American levels.

So let’s examine what’s needed to boost Chinese prosperity.

If you look at the Fraser Institute’s Economic Freedom of the World, there are five major policy categories. As you can see from the table below, China’s weakest category is “size of government.” I’ve circled the most relevant data point.

China could, and should, boost its overall ranking by improving its size-of-government score, which would mean reducing the burden of government spending and lowering tax rates.

With this in mind, I was very interested to see that the International Monetary Fund just published a study entitled, “China: How Can Revenue Reforms Contribute to Inclusive and Sustainable Growth.”

Did this mean the IMF is recommending pro-growth tax reform? After reading the following sentence, I was hopeful:

We highlight tax policies that can facilitate economic transition to high income status, promote fiscal sustainability and make growth more inclusive.

After all, surely you make the “transition to high income status” through low tax rates rather than high tax rates, right?

Moreover, the study acknowledged that China’s tax burden already is fairly substantial:

Tax revenue has accounted for about 22 percent of GDP in 2013… The overall tax burden is similar to the tax-to-GDP ratio for other Asian economies such as Australia, Japan, and Korea.

So what did the IMF recommend? A flat tax? Elimination of certain taxes? Reductions in double taxation? Lowering the overall tax burden?

Hardly.

The bureaucrats want China to become more like France and Greece.

I’m not joking. The IMF study actually wants people to believe that making the income tax more punitive will somehow boost prosperity.

Increasing the de facto progressivity of the individual income tax would promote more inclusive growth.

Amazingly, the IMF wants more “progressivity” even though the folks in the top 20 percent are the only ones who pay any income tax under the current system.

Around 80 percent of urban wage earners are not subject to the individual income tax because of the high basic personal allowance.

But a more punitive income tax is just the beginning. The IMF wants further tax hikes.

Broadening the base and unifying rates would increase VAT revenue considerably. … Tax based on fossil fuel carbon emission rates can be introduced. … The current levies on local air pollutants such as [sulfur dioxide] and [nitrogen oxides] emissions and small particulates could be significantly increased.

What’s especially discouraging is that the IMF explicitly wants a higher tax burden to finance an increase in the burden of government spending.

According to the proposed reform scenario, China could potentially aim to increase public expenditures by around 1 percent of GDP for education, 2‒3 percent of GDP for health care, and another 3–4 percent of GDP to fully finance the basic old-age pension and to gradually meet the legacy costs of current obligations. These would add up to additional social expenditures of around 7‒8 percent of GDP by 2030… The size of additional social spending is large but affordable as part of a package of fiscal reforms.

The study even explicitly says China should become more like the failed European welfare states that dominate the OECD:

Compared to OECD economies, China has considerable scope to increase the redistributive role of fiscal policy. … These revenue reforms serve as a key part of a package of reforms to boost social spending.

You won’t be surprised to learn, by the way, that the study contains zero evidence (because there isn’t any) to back up the assertion that a more punitive tax system will lead to more growth. Likewise, there’s zero evidence (because there isn’t any) to support the claim that a higher burden of government spending will boost prosperity.

No wonder the IMF is sometimes referred to as the Dr. Kevorkian of the global economy.

P.S.: If you want to learn lessons from East Asia, look at the strong performance of Hong Kong, Taiwan, Singapore, and South Korea, all of which provide very impressive examples of sustained growth enabled by small government and free markets.

P.P.S.: I was greatly amused when the head of China’s sovereign wealth fund mocked the Europeans for destructive welfare state policies.

P.P.P.S.: Click here if you want some morbid humor about China’s pseudo-communist regime.

P.P.P.P.S.: Though I give China credit for trimming at least one of the special privileges provided to government bureaucrats.

At a hearing this week on mobile device security, law enforcement representatives argued that technology companies should weaken encryption, such as by installing back doors, so that the government can have easier access to communications. They even chastised companies like Apple and Google for moving to provide consumers better privacy protections.

As an Ars Technica report put it, “Lawmakers weren’t having it.” One lawmaker’s response stands out: that of Rep. Ted Lieu (D-CA), one of the few members of Congress with a computer science degree. He also “gets” the structure of power. Lieu articulated why the Fourth Amendment deliberately limits government agents’ access to information, and how the National Security Agency’s overreaching domestic spying has undercut the interests of law enforcement.

Give a listen to Lieu as he chastises the position taken by a district attorney from Suffolk County, MA:

The latest Commerce Department report and FOMC press release have, as usual, led to a flood of commentary concerning the various economic indicators that the Fed committee must have mulled over in reaching its decision to put off somewhat longer its plan to start selling off assets it accumulated during the course of several rounds of Quantitative Easing.

Those indicators also inspire me to put in my own two cents concerning things that should, and things that should not, bear on the FOMC’s monetary policy decisions. My thoughts, I hasten to say, pay no heed to the Fed’s dual mandate, which is itself deeply flawed. But then again, since that mandate allows the FOMC all sorts of leeway in making its decisions, I doubt that it would prevent that body from following my advice, assuming it had the least desire to do so.

I have a simple–some may call it quaint–way of deciding whether some information supplies reason for the Fed to either sell off or buy more assets. Here it is: does the information offer reliable evidence of either a shortage or a surplus of nominal money balances?

Why this criterion? Because of two more quaint ideas. The first is that, notwithstanding the contorted arguments that contemporary monetary economists resort to in order to avoid admitting it, monetary policy is fundamentally a matter of altering the nominal quantity of monetary balances of various sorts available to banks and the public, starting with the quantity of base money. The latter quantity is, in any case, the thing that the FOMC decides to expand, or to contract, by its deliberations, whether it expresses its decision in terms of “Quantitative Easing” or in terms of some interest rate target.

The last quaint idea is that, just as a dose of vitamin D can do a world of good to someone suffering from rickets, while too much can prove toxic, monetary expansion, though the best solution to problems that have their roots in a shortage of money, is the wrong medicine otherwise. Call it crazy if you like, but that’s my belief and I’m sticking to it.

So, those indicators. Real GDP growth, first of all, slowed to a miserable 0.2% during the last quarter, less than one-tenth of the previous quarter’s figure and just one twenty-fifth of the quarter before that.

A bad number indeed. But considered alone the number supplies no grounds for Fed easing, for the simple reason that it doesn’t tell us why real GDP growth is so low. If it’s low because of a money shortage, more money is called for; if it’s low for other reasons, it isn’t. More information, please.

Here’s some: it was a rough winter, labor disputes closed some West Coast ports, and oil has been dirt cheap. But what have such things got to do with monetary policy? Can a dose of monetary medicine make up for a winter storm, or a strike? Is cheap oil a reason for tightening money, so as to keep general prices from sagging, or one for loosening it to provide relief to domestic energy companies or to counter “weakness in the global economy”? Hard to tell, isn’t it? But that, I say, is because when one gets down to brass tacks such changes in the real economic circumstances ought in themselves to be none of the Fed’s concern.

What’s more, and though the claim may strike many readers as paradoxical, the same can be said about changes in the CPI. Although inflation is still below the Fed’s target, core CPI inflation has been inching up, and the 5-year forward expected inflation rate has been hovering just above the Fed’s 2-percent target for some months now. Surely that means that Fed policy is itself on track, right? Well, no, because these numbers could reflect either a revival of aggregate spending, which would indeed carry such an implication, or, despite the oil glut, a reduction in aggregate supply, which would not.

If all these bits of information shouldn’t shape the FOMC’s actions, what should? The statistics that come closest to serving as reliable guides to whether monetary policy is too tight, too loose, or at least roughly on track, are those that concern neither real developments nor prices but spending. When money balances are abundant, that fact is reflected in increased spending, because when people and businesses find themselves flush with money balances, they tend to dispose of those exceeding their needs by using them to buy goods or securities. If, on the other hand, people and businesses find themselves wanting larger money balances, they try to build them up by spending or investing less.
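
For readers who like to see the mechanics, here is a minimal sketch of the logic just described, cast in terms of the equation of exchange (all numbers are illustrative, not actual data): nominal spending equals the money stock times its velocity, so an unsatisfied demand for larger money balances shows up as lower velocity and, unless the money stock expands to offset it, lower spending.

```python
# A minimal sketch of the monetary-disequilibrium logic in the text, using the
# equation of exchange: nominal spending = money stock * velocity.
# All numbers below are illustrative, not actual data.

def nominal_spending(money_stock: float, velocity: float) -> float:
    """Nominal GDP (total spending) implied by the equation of exchange."""
    return money_stock * velocity

baseline = nominal_spending(money_stock=100.0, velocity=1.8)

# If the public wants larger money balances, velocity falls; unless the money
# stock rises to offset the change, spending must fall -- the "shortage of money" case.
shortage = nominal_spending(money_stock=100.0, velocity=1.6)
offset   = nominal_spending(money_stock=112.5, velocity=1.6)  # expansion that keeps spending stable

print(baseline, shortage, offset)  # 180.0 160.0 180.0
```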

Slice it any way you like, Q1 spending was down. Way down. The annualized growth rate of consumer spending, which was 4.4% during the last quarter of 2014, or not far from its Great Moderation average, fell to just 1.9%, while business spending dropped from 4.7% to 3.4%. But the annualized growth rate of NGDP, a much broader measure of spending, experienced the sharpest decline, to just one-tenth of one percent, bringing the full-year forecast down from 3.8% to 3.6%. Some of this decline can be written off to winter doldrums, and hence as transient. But much of it can’t.
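
A note on the arithmetic: the “annualized” growth rates quoted here compound a single quarter’s change over four quarters, which is why a quarter in which nominal GDP is essentially flat translates into a rate of about one-tenth of one percent. The sketch below uses illustrative levels, not the actual data.

```python
# How a quarter-over-quarter change becomes an "annualized" growth rate.
# The GDP levels below are illustrative, not actual figures.

def annualized_growth(level_prev: float, level_now: float) -> float:
    """Annualized percent growth implied by one quarter's change."""
    return ((level_now / level_prev) ** 4 - 1.0) * 100.0

# A quarter in which nominal GDP barely moves:
print(round(annualized_growth(17700.0, 17704.4), 2))  # roughly 0.1 percent at an annual rate
```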

In short, the only facts that deserve to be considered approximate indicators of whether monetary policy has been too tight, too loose, or on track suggest that it’s too tight. The others–whether they refer to the weather, or output, or dollar exchange rates, or the CPI and its variants, or stevedores’ discontent–are so many red herrings, and ought therefore to be considered perfectly irrelevant. Whenever FOMC members or any other monetary policymakers refer to such irrelevancies, they must do so either because the press expects them to, or because they are confusing things that the Fed should try to do something about with ones that shouldn’t be any of its business.

By saying all this, do I mean to say that, if the FOMC would just keep a sharp eye on spending, ignoring everything else, we would have sound monetary policy? Not for a minute. The reason, in part, is that spending statistics are themselves imperfect guides to the state of monetary policy, for too many reasons to go into here. More importantly, so long as the policymakers aren’t obliged to conduct policy according to a single, unambiguous target, their decisions will remain shrouded by uncertainty that is itself a big drag on prosperity.

But there’s more to it than that. Even if the Fed were somehow legally committed to target NGDP, or some other broad spending measure, from now on, and even if the measure were itself reliable, it wouldn’t solve our monetary troubles. And that’s because the monetary system itself is dysfunctional, and severely so. If it weren’t, it wouldn’t take more than $4.5 trillion in Fed assets to keep spending going at a reasonable clip. The defects are partly traceable to policies–including some of the Fed’s own–that discourage banks from making certain kinds of worthwhile loans, while encouraging them to hold massive excess reserves.

It’s owing to the crippled state of our monetary system, and not to any ambiguity in relevant indicators, that I myself have grave doubts concerning the gains to be expected from further Fed easing, or even from implementing a strict NGDP targeting rule, under present conditions. For if the experience of the last several years is any guide, it may require still more massive additions to the Fed’s balance sheet to achieve even very modest improvements in spending; and an NGDP-based monetary rule that would serve as a license for the Fed to become a still greater behemoth would not be my idea of an improvement upon the status quo.

You see, unlike some economists, although I’m happy to allow that an increase in the Fed’s nominal size, which is roughly equivalent to a like increase in the monetary base, is neutral in the long run, I don’t accept the doctrine of the neutrality of increases in the Fed’s relative size. I believe that Fed-based financial intermediation is a lousy substitute for private sector intermediation, and that as it takes over, economic growth suffers. The takeover is, in other words, financially repressive.

Which means that the level of spending is, after all, not the only relevant indicator of whether the Fed is or isn’t going in the right direction. Another is the real size of the Fed’s balance sheet relative to that of the economy as a whole, which measures the extent to which our central bank is commandeering savings that might otherwise be more productively employed. Other things equal, the smaller that ratio, the better.
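
As a rough back-of-the-envelope illustration of that ratio (the $4.5 trillion figure comes from above; the nominal GDP figure is only an approximation for 2015), the calculation looks something like this:

```python
# The Fed's balance sheet relative to the size of the economy.
# fed_assets is the $4.5 trillion figure cited in the text; the GDP figure
# is an approximation used purely for illustration.

fed_assets_trillions = 4.5
nominal_gdp_trillions = 17.7  # rough 2015 figure, for illustration only

print(f"{fed_assets_trillions / nominal_gdp_trillions:.0%}")  # roughly 25% of GDP
```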

And there, folks, is the rub. If you want to know the real dilemma facing the FOMC, forget about the CPI, oil prices, and last quarter’s weather. Here’s the real McCoy: NGDP growth is too low. But the Fed is too darn big.

Last September Kevin Dowd authored a dandy Policy Analysis called “Math Gone Mad: Regulatory Risk Modeling by the Federal Reserve.” In it Kevin pointed to the dangers inherent in the Federal Reserve’s “stress tests,” and the mathematical risk models on which those tests are often based, as devices for determining whether banks are holding enough capital or not.

Recently my Cato colleague Jeff Miron, who edits Cato’s Research Briefs in Economic Policy, alerted us to a new working paper, entitled “The Limits of Model-Based Regulation,” that independently reaches conclusions very similar to Kevin’s. The study, by Markus Behn, Rainer Haselmann, and Vikrant Vig, is summarized in this month’s Research Brief.

The authors conclude that, instead of limiting credit risk by linking bank capital more tightly to the riskiness of banks’ asset holdings, model-based regulation has actually increased credit risk. At the same time, because the model-based approach is relatively costly, large banks are much more likely to resort to it than smaller ones. Consequently, those banks have been able to expand their lending–and their risky lending especially–at the expense of their smaller rivals. In short, big banks gain, small banks lose, and we all are somewhat less safe than we might be otherwise.
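
To see why the modeling choice matters, here is a stylized sketch (my own illustration, not the paper’s methodology): under risk-weighted capital rules, required capital is roughly the product of an exposure, its risk weight, and the minimum capital ratio, so a bank whose internal model assigns a lower risk weight can carry the same loans with considerably less capital.

```python
# A stylized sketch of risk-weighted capital requirements; the risk weights
# below are hypothetical, and only the 8% minimum ratio is a standard Basel figure.

def required_capital(exposure: float, risk_weight: float, min_ratio: float = 0.08) -> float:
    """Capital a bank must hold against an exposure under risk-weighted rules."""
    return exposure * risk_weight * min_ratio

standardized = required_capital(100.0, risk_weight=1.00)  # flat 100% weight on the loan
model_based  = required_capital(100.0, risk_weight=0.40)  # the bank's own model says 40%

print(standardized, model_based)  # 8.0 vs. 3.2 units of capital per 100 of loans
```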

Here is a link to the full working paper.

[Cross-posted from Alt-M.org]

Here’s a headline from today’s Washington Post: “Sexism in science: Peer editor tells female researchers their study needs a male author.” Peer review is the usually-anonymous process by which articles submitted to academic journals are reviewed for quality and relevance to determine whether or not they will be published. Over the past several years, numerous scandals have emerged, made possible by the anonymity at the heart of that process.

The justification for anonymity is that it is supposed to allow reviewers to write more freely than if they were forced to place their names on their reviews. But scientists are increasingly admitting, and the public is increasingly noticing, that the process is… imperfect. As the Guardian newspaper wrote last summer about a leading journal, Nature:

Nature […] has had to retract two papers it published in January after mistakes were spotted in the figures, some of the methods descriptions were found to be plagiarised and early attempts to replicate the work failed. This is the second time in recent weeks that the God-like omniscience that non-scientists often attribute to scientific journals was found to be exaggerated.

In the 1990s I sat on the peer review board of an academic journal and over the years I have occasionally submitted to and been published by such journals. Peer reviews vary wildly in depth and quality. Some reviewers appear to have only skimmed the submitted paper, while others have clearly read it carefully. Some reviewers understand the submissions fully, others don’t. Some double-check numbers and sources. Others don’t. It’s plausible that this variability (particularly on the weak end) is a side effect of reviewers’ anonymity. I have seen terse, badly argued reviews to which I doubt the reviewer would have voluntarily attached his or her name. Personally, I try never to write anything as a peer reviewer that I would not happily sign my name to.

Six years ago, that inspired an idea: it occurred to me to found a journal, called Litmus, that would consist of signed peer reviews of already published papers, together with authors’ responses when possible. My impression is that this would produce reviews of much higher average quality, and would reveal to readers the extent of disagreement among scholars on the issues discussed, alternative evidence, and so on.

Alas, it would also be potentially dangerous for young scholars to contribute to such a journal, were they to rub a potential employer the wrong way. In the end, I was unable to interest enough top-notch scholars to flesh out a sufficiently large editorial board. One professor declined, saying:

This strikes me as an interesting idea, but one that is sufficiently outside of what is normal that you might have quite a difficult time getting a consensus that would lead to participation high enough to sustain the journal.  Some people would probably feel that signed reviews were not of the same quality as blind ones.  Others would feel that signed reviews required formality so much beyond that of blind reviews (which at their best are candid and informal but accurate) that they would be unwilling to participate for lack of time.  I am not saying that it is a bad idea, but I think that you’re in for an uphill battle to get the idea off the ground.

Eventually I abandoned the project. But as the failure of the status quo in journal peer reviewing becomes more evident, perhaps someone will rekindle the idea. Conventional journals would have to be on their toes if they knew there was a chance their articles would be held publicly under a microscope by other reviewers.

Well… there goes our trip to Baltimore. We’d been hoping to see the annual Kinetic Sculpture Race, but I see it’s been postponed sine die.

If you’re inclined, now is your chance to laugh. Get it out early.

Here’s a problem in describing how cities work: Any example I might pick to symbolize the decay of Baltimore can always be ridiculed: Weep, weep my friends for that lousy corporate CVS, the one that nobody really liked anyway!

See how easy that was?

The one direct effect I have experienced from the recent riots is that I and my daughter will possibly not be seeing a giant pink taffeta poodle pedaled down the streets of Baltimore by a bunch of probably inebriated art students. I’m unlikely to suffer any of the riots’ more troubling effects, like having to walk an extra half mile to get my asthma medication. Or like getting my car torched.

(And yes: Leading with the pink taffeta poodle might just be the definition of white privilege, but at least I’m, you know, aware of it.)

Cities are hard to explain. They’re made up of millions of tiny little things, and of the networks of trust and expectation that exist among them. Any one of those things – a CVS, a giant pink taffeta poodle, a population of inebriated art students – does not make a city. Almost any one of them can be laughed at, or just dismissed as trivial, in isolation. But good, functional cities are networks. They’re not isolated nodes. A city isn’t the big taffeta poodle, but it might be the expectation that there will be something fun, and free, to do in the streets on some warm spring afternoon. For which we can thank the art students.

And other expectations too: After we see the giant pink taffeta poodle, and when my daughter gets stung by a bee, there’s the CVS, and after that, when we decide we want dinner, we have several choices at hand. And if we want a room for the night, there it is. And if we want to relocate to Baltimore, we might just be able to find decent housing and jobs.

I think we can all agree that that’s what a city should look like. But how does it come into being?

I suspect that some significant trust has to be there first. Without it, few will venture to try new things. Restaurants won’t open. Parades won’t be held. Families won’t move in. Few will try adding new threads to the network. And when the old threads wear out, they will not be replaced.

For a very long time the networks of trust and expectation in the city of Baltimore have been fraying. But it’s not because of the rioting, which is only a symptom, if an advanced one, of an underlying condition. The well-documented culture of police brutality in Baltimore has meant that one of the bigger threads in the network – the ability to turn to police when you or your property are threatened – cannot be depended on. And when that thread goes, so go many others.

It’s long been known in Baltimore that the police can’t be counted on to perform their core functions, particularly in the poorer neighborhoods: In such places the police either can’t or won’t reliably protect persons and property from attack. Not without levels of collateral damage that any reasonable person would deplore. And when you don’t have security, you can forget all about community.

That’s part of why, paradoxically, the poor need property rights even more than the rich: What the poor possess is definitionally small. As a result, it’s all too easy to take everything that they have. Including their sense of dignity. Including their ability to trust. And, finally, including their sense of community, which has to start (and I do feel a bit pedantic saying it) with the understanding that community leaders and enforcers aren’t just out to squeeze them for cash. That the leaders and enforcers don’t see them merely as yet another home to be searched, another gun to seize, another dog to shoot, and another marijuana conviction waiting to happen.

The poor need security not just in their own property, but also in that of others. And these others aren’t necessarily poor. It’s a good thing whenever the owner of a grocery store franchise feels confident enough to get started in a neighborhood that maybe wasn’t so well-off, and that maybe lacked good choices beforehand. But that won’t happen without a measure of trust, and when the community has good reason not to trust, well, outsiders probably won’t trust either.

Contrast all this to the property rights of the rich. Paradoxically, the rich often barely need formal protections of their rights at all. Their property just isn’t threatened all that much, whether by the police or by anyone else. And when the property rights of the rich do get threatened, the rich can fight back. Definitionally, they have many more resources at hand, including non-financial ones: The rich have political influence, private security choices, and just… moving. The would-be owner of a grocery store franchise isn’t compelled to open in any particular neighborhood, or even to go into business at all. His money can always sit safely in a bank.

The rich also aren’t living so precariously: Even if all else fails, and if a rich person’s car does get torched, he can just buy another car. Yes, that’s bad, but it’s not going to ruin him. The same can’t necessarily be said of a poor person, for whom a car might be her single most valuable possession.

So while I’m complaining about the loss of a silly (but fun) kinetic sculpture race, let’s all remember just who depends the most on the networks of trust and expectation that can either live, or die, in our cities. Let’s also remember that those networks depend on protecting the all too fragile property rights of the poor.
