Feed aggregator

Matthew Feeney

Those who have argued for the deregulation of the taxi industry will be familiar with the claim that taxi deregulation was tried in the U.S. and that the results were so undesirable that regulation was reintroduced. In a recent Washington Post article about ridesharing and taxi regulation, Catherine Rampell states that prices rose in deregulated taxi markets and that current calls for deregulation are only the latest turn in a familiar cycle. However, future taxi deregulation will be different from past deregulation schemes thanks to relatively new technology that allows passengers to overcome the knowledge problems that led to price increases in deregulated taxi markets.

Rampell’s article includes some interesting historical insights. Regulations and licensing laws for passenger transport vehicles are nothing new. In the 17th century, Charles I tried to limit the number of horse-drawn carriages in London by issuing an order, which was ignored. During the Great Depression, some unemployed Americans found a source of income in the unlicensed taxi industry. By the 1990s, much of the American taxi industry had been subjected to re-regulation following a wave of deregulation that began in roughly two dozen cities in the 1960s.

Today, there are calls for the taxi industry to be deregulated amid the growth of ridesharing companies such as Uber, Lyft, and Sidecar. Some argue that taxis cannot fairly compete with ridesharing companies because they are hampered by outdated regulations, and that if taxis were deregulated they would be better suited to compete with rideshare companies. Rampell warns against deregulation, saying that we have “Been there, done that.”

While it is the case that the taxi industry in a number of American cities was re-regulated after a period of deregulation, many of the pricing problems cited as justification for taxi re-regulation are not applicable today thanks to technological advances.

In her article, Rampell links to a 1996 paper on taxi regulation written by Paul Dempsey, a law professor at McGill. The paper highlights an interesting problem that taxi customers face: a lack of good information.

Most taxi customers take the first taxi that appears. As Dempsey points out, it is rarely worthwhile for taxi customers to conduct a price or service comparison in a deregulated market:

…consumers buying taxi service in a deregulated market often have little comparative pricing or service information, for the opportunity costs of acquiring it are high.

Taxi consumers do not have perfect information, so it is almost always worth taking the first taxi that appears. As Dempsey notes (citing work by economist Chanoch Shreiber), in the absence of fare regulation the price of a taxi ride tends to increase:

… because a prospective passenger who values his or her time will not likely turn down the first available cab on the basis of price, this will have an “upward pressure on the price.” A consumer hailing a cab from a sidewalk has an incentive to take the first taxi encountered, because both the waiting time for the next cab and its price are unknown. Paradoxically, in an open entry regime, prices tend to rise.

Although taxi prices did go up after the deregulation Dempsey discusses, we should not expect taxi deregulation in the future to have the same outcome.

Keep in mind that Dempsey’s paper came out in 1996, before smartphones allowed for companies like Uber and Lyft to emerge as strong taxi competitors.

Part of the appeal of ridesharing is that the apps used by Lyft and Uber customers allow users to overcome the knowledge problems highlighted by Dempsey. Uber and Lyft users can see the location of drivers, and Uber users can estimate a fare before their ride begins. Today, unlike in 1996, a taxi company could develop an app that allows users to be better informed about fares and the availability of taxi drivers.
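
To make the comparison-shopping point concrete, here is a minimal sketch in Python of the kind of upfront fare estimate such an app could display before a ride begins. The formula and all rates are invented for illustration; they are not Uber’s or any taxi company’s actual pricing.

    # Hypothetical upfront fare estimate of the kind a taxi or rideshare
    # app could display before a ride begins. All rates are invented for
    # illustration; real apps use their own proprietary pricing.
    def estimate_fare(distance_miles, duration_minutes,
                      base_fare=2.50, per_mile=1.75,
                      per_minute=0.30, minimum_fare=7.00):
        """Estimated fare: base charge plus distance and time charges."""
        fare = base_fare + per_mile * distance_miles + per_minute * duration_minutes
        return max(fare, minimum_fare)

    # A rider comparing two hypothetical providers before hailing:
    print(estimate_fare(5.0, 15.0))                 # provider A's rates
    print(estimate_fare(5.0, 15.0, per_mile=1.40))  # provider B's cheaper rates

An estimate like this, shown before the ride, is precisely the comparative pricing information that Dempsey’s street-hailing customer lacked.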

However, even if a taxi company were to develop such an app, it would have to compete with rideshare companies. One app that did allow its users to hail taxis, Hailo, was driven out of North America by the fierce competition between Uber and Lyft. MyTaxi, a Germany-based taxi app, is available in Washington, D.C., and does allow users to estimate a fare before a ride begins and to see the location of available drivers. If taxi companies want to remain competitive in markets where ridesharing drivers are operating, an app like MyTaxi may be their best chance of surviving in the long term.

Ridesharing has dramatically changed vehicle-for-hire transportation, and as regulators look to address the rise of the sharing economy we should expect anything but the familiar regulatory cycle Rampell references. Taxi companies are facing strong competition from companies that would have been inconceivable almost twenty years ago, and they have the opportunity to develop products that address the lack of information that contributed to rising taxi prices in deregulated markets. There may well be good arguments against the deregulation of the taxi industry, but such arguments must take changes in technology into account.

Jason Kuznicki

This month at Cato Unbound, we’re talking about the Search for Extra-Terrestrial Intelligence, or SETI.

Why’s that, you ask?

Several reasons, really. First, although it’s not exactly a hot public policy topic, it will certainly become one if we ever actually find anything. But that’s hardly where the importance of the topic ends.

Much more interesting, to me at least, is that SETI can serve as a springboard for discussing all kinds of important concepts in public policy. Our contributors this month - David Brin, Robin Hanson, Jerome H. Barkow, and Douglas Vakoch - have talked about the open society, cost-benefit analysis, evolutionary psychology, the hubris of experts, the narcissism of small differences, and even Pascal’s Wager (and what’s wrong with it).

So… lots of interesting stuff, particularly for libertarians who are interested in public policy.

Doug Bandow

MOSCOW—Red Square is one of the world’s most iconic locales. Even during the worst of the U.S.S.R. the square was more symbolic than threatening. 

Very different, however, is Lubyanka, just a short walk away. 

In the late 19th century, 15 insurance companies congregated on Great Lubyanka Street.  The Rossia agency, one of Russia’s largest, completed its office building in 1900. 

But in 1917 the Bolsheviks seized power.  They took the Rossia building for the new secret police, known as the All-Russian Extraordinary Commission for Combating Counter-Revolution and Sabotage, or Cheka.

The first Cheka head was Felix Dzerzhinsky.  He conducted the infamous “Red Terror,” what he called a “fight to the finish” against the Bolsheviks’ political opponents. 

After his death in 1926, Great Lubyanka Street was renamed Dzerzhinsky Street.  A great statue of Dzerzhinsky, weighing 15 tons, was erected in a circle in front of the Cheka headquarters. 

After the KGB was dissolved the building went to the Border Guard Service, later absorbed by the Federal Security Service (FSB), responsible for internal security and counterintelligence. Today Lubyanka looks non-threatening, its yellowish color and architectural style less severe than the harshly grandiose Stalinist architecture seen throughout the city.

The KGB faced its greatest challenge in the Gorbachev era.  Demands for reform raced beyond Mikhail Gorbachev’s and the KGB’s control.  In August 1991 KGB head Vladimir Kryuchkov helped plan the coup against Gorbachev. 

After the coup’s collapse a crowd gathered in front of Lubyanka and attempted to pull down the Dzerzhinsky monument.  City officials used a crane to finish the job.

Journalist Yevgenia Albats wrote:  “If either Gorbachev or [Boris] Yeltsin had been bold enough to dismantle the KGB during the autumn of 1991, he would have met little resistance.”  However, these two reformers attempted to fix rather than eliminate the agency.

And the KGB effectively ended up taking over Russia.  Yeltsin named Chekists, or members of the “siloviki” (or power agents), to important government positions, most importantly Vladimir Putin, who headed the FSB and then became prime minister—and Yeltsin’s successor as president when the latter resigned.

Washington Post columnist Anne Applebaum argued “that Putin—and, more importantly, most of the people around him—is deeply steeped in the culture of Andropov’s KGB.”  In her view they are modernizers but authoritarians, who “believe that the rulers of the state must exert careful control over the life of the nation.” 

After taking over Putin turned to his KGB network to run both the government and the economy.  The result, wrote UCLA’s Daniel Treisman, is a “silovarchy” in which “silovarchs” replaced the earlier economic oligarchs.  Whatever the economic consequences of this system, noted Treisman, “the temptation to use secret service tools and techniques predisposes such regimes toward authoritarian politics.”

As I wrote in the American Spectator online, “This system offers a tragic detour for people who desperately need liberty.  But despite the frenzied push in Washington for economic sanctions and military threats, the success of Putinism is well beyond America’s control.  The U.S. certainly should not promote military confrontation with nuclear-armed Moscow over an issue of limited importance to Washington.”

In fact, Putinism may face its strongest challenge on the economic front from declining energy prices, Western sanctions, and domestic distortions.  Putin’s poll ratings have risen since he seized Crimea, but as the nationalistic fervor fades the Russian people’s desire for prosperity may overcome their desire for order.

Finally, the system faces a natural limit:  The siloviki will die off.  Noted Applebaum, “Sooner or later, the generation trained in the mindset of Andropov’s KGB will retire.”  It’s hard to predict what will follow, but change is likely. 

It then will be critical for Russia’s new leaders to eliminate the Chekist mindset.  But Lubyanka should be preserved, perhaps as a museum about tyranny.  No one should want to repeat the KGB experience.

Randal O'Toole

A left-coast writer named Mark Morford thinks that gas prices falling to $2 a gallon would be the worst thing to happen to America. After all, he says, the wrong people would profit: oil companies (why would oil companies profit from lower gas prices?), auto makers, and internet retailers like Amazon that offer free shipping.

If falling gas prices are the worst thing for America, then the best, Morford goes on to say, would be to raise gas taxes by $6 a gallon and dedicate all of the revenue to boondoggles like “alternative energy and transport, environmental protections, our busted educational system, our multi-trillion debt.” After all, government has proven itself so capable of finding the most cost-effective solutions to any problem in the past, and there’s no better way to reduce the debt than to tax the economy to death.

Morford is right in line with progressives like Naomi Klein, who thinks climate change is a grand opportunity to make war on capitalism. Despite doubts cast by other leftists, Klein insists that “responding to climate change could be the catalyst for a positive social and economic transformation”–by which she means government control of transportation, housing, and just about everything else.

These advocates of central planning remind me of University of Washington international studies professor Daniel Chirot’s assessment of the fall of the Soviet empire. From the time of Lenin, noted Chirot, Soviet planners considered western industrial systems of the late nineteenth century their model for an ideal economy. By the 1980s, after decades of hard work, they had developed “the most advanced industries of the late 19th and early 20th centuries–polluting, wasteful, energy intensive, massive, inflexible–in short, giant rust belts.”

Morford and Klein want to do the same to the United States, using climate change as their excuse, and the golden age they wish to return to is around 1920, when streetcars and intercity passenger trains were at their peak (not counting the WWII era). Sure, there were cars, but only a few compared with today.

What they don’t understand is that, even at their peak, intercity passenger trains carried the average American only about 900 miles a year, while streetcars and other urban transit carried the average American about 700 miles a year. Moreover, nearly all of this travel was by the top 25 or 30 percent of earners: until that evil capitalist Henry Ford made his mass-produced automobile available at affordable prices, the working-class people that progressives claim to care about were no more mobile than Americans had been a hundred years before.

Thanks to profiteering automakers and greedy oil companies, the average American today travels by car nearly 15,000 miles a year, close to 10 times the roughly 1,600 miles of combined per capita urban and intercity rail travel of 1920. Morford and Klein, of course, think less travel would be a good thing, since it would result (says Morford) in “people shopping more locally and patronizing small businesses again.” Yet there’s no guarantee of that. Higher gas prices could also lead to people shopping on Amazon or seeking out WalMart’s “always” low prices even more than they do today.

Are Morford, Klein, and their allies ignorant of the facts, economically naive, or do they just object to the choices other people make? It always seems like demagoguery to say that opponents are afraid of freedom, but it’s a natural conclusion for progressives like Morford and Klein.

When they say, “shop locally,” what they mean is, “pay more for inferior goods.” When they say, “don’t reward the oil companies,” what they mean is, “most people shouldn’t be allowed to travel as much as they like.” When they say, “capitalism is bad,” what they mean is, “you shouldn’t be allowed to buy things that other people make because they might earn a profit from it.” When they say, “a planet of suburbs is a terrible idea,” what they mean is, “everyone should live like I do.”

In reality, low gas prices mean increased mobility, which in turn should promote the economic recovery that has been stalled for six years by Obama’s central planning. Cars are getting more fuel-efficient no matter what oil and gas prices are, and even if that is partly because of government fiat, it is also a lot more cost-effective than trying to change everyone’s lifestyles.

Freedom means allowing people to make choices you wouldn’t make for yourself. Moreover, it means allowing people to make choices you wouldn’t endorse for anyone, because in a democracy we agree that no one person has all the answers for everyone else. Ultimately, freedom means understanding that the alternative, no matter how good it sounds on paper, always leads to tyranny and oppression.

If you really care about certain values, and some technologies seem to run counter to those values, then you need to figure out ways to make your values more attractive, not try to tax or regulate those technologies to death. If the price of freedom is a slightly warmer world–and I’m not convinced that it is–then we are better off learning to live with it than having to live under the yoke of well-intentioned but ignorant planners who don’t understand such basic concepts as cost effectiveness or supply and demand.

Steve H. Hanke

Kevin Dowd, a long-time friend and eminent free-banking authority, set his sights on Bitcoin in the book he published this summer, New Private Monies: A Bit Part Player?  His work delivers a refreshingly accurate and straightforward assessment of Bitcoin, ignoring the hype that surrounds it.

Both Kevin and I appreciate the importance of cryptocurrencies: in his own words, “The broader implications of cryptocurrency are extremely profound.”  The peer-to-peer exchange structure common to cryptocurrencies like Bitcoin cuts the intermediary out of transactions.  This eliminates the need for a third party in exchanges and protects wealth against exchange controls or capital controls.  Because Bitcoin and other cryptocurrencies are entirely digital, the location of the two parties of a transaction is irrelevant: transactions can be carried out anywhere.  This also makes transactions highly anonymous, a feature appealing to consumers who cherish privacy.

The intermediary-free, digital transactions characteristic of cryptocurrencies such as Bitcoin are an important step towards exchanges free of regulatory meddling.  In addition, this technology should enable low-cost banking accessible to anyone with a cellphone.  Indeed, cryptocurrencies should improve access to financial services in developing countries and elsewhere because they will complement existing services that now rely on standard currencies (see M-Pesa in Kenya).

There is, however, an important line to be drawn between the future of the technology behind Bitcoin and the future of Bitcoin itself, thanks to its notorious volatility.  Kevin is crystal clear on the distinction:

“Though the supply of Bitcoin is limited, the demand is very variable; this variability has made its price very uncertain and created a bubble-bust cycle in the Bitcoin market.  Perhaps the safest prediction is that Bitcoin will eventually be displaced by alternative cryptocurrencies with superior features.” 

I couldn’t agree more.  The uncertainty (read: volatility) of Bitcoin speaks for itself in the accompanying chart.

As a supporter of cryptocurrencies, I only disagree with Kevin on one basic point. We are both well aware that volatility is Bitcoin’s great weakness.  I, however, also believe that Bitcoin’s volatility proves that it is a speculative asset, not a currency.  A unit of account is a well-understood measurement for defining and comparing the values of goods, services, or purchases, and serving as one is a crucial qualification of money.  Due to its volatility, Bitcoin fails to be a reliable unit of account; it cannot be considered a money or currency.
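
For readers who want to see what “volatility” means operationally, here is a minimal sketch in Python of the standard calculation: the annualized standard deviation of daily log returns. The price series below is invented for illustration; a real analysis would use actual daily BTC/USD closing prices.

    # Annualized volatility from daily closing prices: the usual way
    # Bitcoin's price swings are quantified. Prices below are invented.
    import math

    prices = [350.0, 342.0, 365.0, 358.0, 331.0, 340.0, 377.0, 369.0]

    # Daily log returns
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]

    mean = sum(returns) / len(returns)
    variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    daily_vol = math.sqrt(variance)

    # Bitcoin trades every calendar day, so annualize over 365 days
    annual_vol = daily_vol * math.sqrt(365)
    print("Annualized volatility: {:.0%}".format(annual_vol))

By this measure, Bitcoin’s volatility has run far higher than that of major government currencies, which is the substance of the unit-of-account objection.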

While we might differ on whether Bitcoin should be classified as money, Kevin and I agree on one big thing: expect many more innovations and improvements from cryptocurrency technology.  Do not, however, hope for the same from Bitcoin.

This blogpost was co-authored by Connor Kenehan.

Tim Lynch

Over at Cato’s Police Misconduct web site, we have identified the worst case for the month of November. It turns out to be the Cleveland Police Department.

To begin with, in late November, a Cleveland police officer shot and killed a 12-year old boy, Tamir Rice.

The press reports based on the police accounts at the time of the incident read:

A rookie Cleveland police officer shot a 12-year-old boy outside a city recreation center late Saturday afternoon after the boy pulled a BB gun from his waistband, police said.

Police were responding to reports of a male with a gun outside Cudell Recreation Center at Detroit Avenue and West Boulevard about 3:30 p.m., Deputy Chief of Field Operations Ed Tomba said.

A rookie officer and a 10-15 year veteran pulled into the parking lot and saw a few people sitting underneath a pavilion next to the center. The rookie officer saw a black gun sitting on the table, and he saw the boy pick up the gun and put it in his waistband, Cleveland Police Patrolmen’s Association President Jeffrey Follmer said.

The officer got out of the car and told the boy to put his hands up. The boy reached into his waistband, pulled out the gun and the rookie officer fired two shots, Tomba said.

As detailed in this video report by MSNBC’s Chris Hayes, the initial reports by the police do not jibe with video evidence in several major respects.

The video shows Rice, alone, playing with his toy gun and with the snow, as 12-year-olds are wont to do. He was not, as the police said, with “a few people” in the pavilion. Other police reports to the press said the shooting officer got out of his car and told Rice three times to put his hands up. The video, unfortunately without audio and recorded at only two frames per second, shows the officer shooting Rice within 1.5 to 2 seconds of exiting the police vehicle.

The officers also waited several minutes before administering CPR to the fallen child.

The original call that drew the police to the park in the first place said the person with the gun was likely a minor and that the gun was likely a toy. Apparently, this information was not relayed to the responding officers, who called in the shooting victim as “possibly 20” years old.

The officer who shot Rice “was specifically faulted for breaking down emotionally while handling a live gun” according to subsequent reporting. The internal memo that informed the report recommended that the officer be “released from the employment of the City of Independence [, Ohio].”

Here’s the thing: The Cleveland Police Department hired the officer without checking his personnel file from his previous law enforcement job!

This tragic event is just the latest in which police used deadly force, likely too quickly. The facts released by the police department that favored the police officers involved were either misleading or inaccurate.

At best, this event highlights poor communication and procedure leading up to and immediately following a tragedy. At worst, this is a police department caught covering up a series of preventable mistakes that cost the life of a young boy.

The Department of Justice recently issued a report after looking into the policies and practices of the Cleveland Police Department.  According to the New York Times,

The Justice Department report on Cleveland cataloged many instances of unjustified force, including officers who assaulted, pepper-sprayed and even Tasered people already being restrained. In one case last year, the police fired two shots at a man wearing only boxer shorts who was fleeing from two armed assailants. In a 2011 case, a man who had been restrained on the ground with his arms and legs spread was then kicked by officers. He was later treated for a broken bone in his face.

The city’s policing problems, [Attorney General] Holder said, stemmed from “systemic deficiencies, including insufficient accountability, inadequate training and equipment, ineffective policies and inadequate engagement with the community.”


David Boaz

Former Florida governor – but Texas native – Jeb Bush told the Wall Street Journal CEO Council:

Republicans need to show they’re not just against things, that they’re for a bunch of things. 

Which reminds me of a quotation from Lyndon B. Johnson that George Will often cites:

We’re in favor of a lot of things and we’re against mighty few.

Let’s hope Bush’s “bunch” is different from Johnson’s “lot.” We can’t afford another such escalation in the size, scope, and power of government.

Paul C. "Chip" Knappenberger and Patrick J. Michaels

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature or of a more technical nature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

Despite what you may think if you reside in the eastern United States, the world as a whole in 2014 has been fairly warm. For the past few months, several temperature-tracking agencies have been hinting that this year may turn out to be the “warmest ever recorded”—for whatever that is worth (keep reading for our evaluation). The hints have been turned up a notch with the latest United Nations climate confab taking place in Lima, Peru through December 12.  The mainstream media is happy to popularize these claims (as are government-money-seeking science lobbying groups).

But a closer look shows two things: first, whether or not 2014 will prove to be the record warmest year depends on whom you ask; and second, no matter where the final number for the year ranks in the observations, it will rank among the greatest “busts” of climate model predictions (which collectively expected it to be a lot warmer). The implication of the first is nothing more than jostling for press coverage. The implication of the latter is that future climate change appears to be less of a menace than assumed by the president and his pen and phone. 

Let’s examine the various temperature records.

First, a little background. Several different groups compile the global average temperature in near-real time. Each uses slightly different data-handling techniques (such as how to account for missing data) and so each gets a slightly different (but nevertheless very similar) value. Several groups compute the surface temperature, while others calculate the global average temperature in the lower atmosphere (a bit freer from confounding factors like urbanization). All, thus far, have data for 2014 compiled only through October, so the final ranking for 2014, at this point in time, is only a speculation (although a pretty well-founded one).

The three major groups calculating the average surface temperature of the earth (land and ocean combined) are all currently indicating that 2014 will likely nudge out 2010 (by a couple hundredths of a degree Celsius) to become the warmest year in each dataset (which begin in the mid-to-late 1800s). This is almost certainly true in the datasets maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) and the UK Met Office Hadley Centre. In the record compiled by NASA’s Goddard Institute for Space Studies (GISS), the 2014 year-to-date value is in a virtual dead heat with the annual value for 2010, so the final ranking will depend heavily on how the data come in for November and December. (The other major data compilation, the one developed by the Berkeley Earth group, is not updated in real time.)

There is one other compilation of the earth’s surface temperature history that has recently been developed by researchers Kevin Cowtan and Robert Way of the University of York. This dataset rose to prominence a year ago, when it showed that if improved (?) methods were used to fill in data-sparse regions of the earth (primarily in the Arctic), the global warming “hiatus” was more of a global warming “slowdown.” In other words, a more informed guess indicated that the Arctic had been warming at a greater rate than was being expressed by the other datasets. This instantly made the Cowtan and Way dataset the darling of folks who wanted to show that global warming was alive and well and not, in fact, in a coma (a careful analysis of the implications of Cowtan and Way’s findings, however, proved the data not up to that task). So what are the prospects of 2014 being a record warm year in the Cowtan and Way dataset? Slim. 2014 currently trails 2010 by a couple hundredths of a degree Celsius—an amount that will be difficult to make up without an exceptionally warm November and December. Consequently, the briefly favored dataset is now being largely ignored.

It is worth pointing out that, as a result of data and computational uncertainty, in none of the surface compilations will 2014 be statistically different from 2010—in other words, it is impossible to say with statistical certainty that 2014 was (or was not) the all-time warmest year ever recorded.

It is a different story in the lower atmosphere.

There, the two groups compiling the average temperature show that 2014 is nowhere near the warmest (in data which start in 1979), trailing 1998 by several tenths of a degree Celsius. This difference is so great that it is statistically clear that 2014 will not be a record year (it’ll probably fall in the lower half of the top five warmest years in both the Remote Sensing Systems (RSS) and the University of Alabama-Huntsville (UAH) datasets). The variability of temperatures in the lower atmosphere is more sensitive to the occurrence of El Niño conditions, and thus the super El Niño of 1998 set a high temperature mark that will likely stand for many years to come, or at least until another huge El Niño occurs.

Basically, what all this means is that if you want 2014 to be the “warmest year ever recorded” you can find data to back you up, and if you prefer it not be, well, you can find data to back up that position as well.

In all cases, the former will make headlines.

But these headlines will be misplaced. The real news is that climate models continue to perform incredibly poorly by grossly overestimating the degree to which the earth is warming.

Let’s examine climate model projections for 2014 against observations from the dataset that gives 2014 the greatest chance of being the warmest year—the NOAA dataset.

Figure 1 shows the average of 108 different climate model projections of the annual surface temperature of the earth from 1980 through 2014 along with the annual temperature as compiled by NOAA.


Figure 1. Global annual surface temperature anomalies from 1980 to 2014. The average of 108 climate models (red) and observations from NOAA (blue) are expressed as anomalies from the 20th century average. In the case of the NOAA observations, the 2014 value is the average of January-October.

For the past 16 straight years, climate models have collectively projected more warming than has been observed.

Over the period 1980-2014, climate models projected the global temperature to rise at a rate of 0.24°C/decade while NOAA observations pegged the rise at 0.14°C/decade, about 40 percent less. Over the last 16 years, the observed rise is nearly 66 percent less than climate model projections. The situation is getting worse, not better. This is the real news, because it means that prospects for overly disruptive climate change are growing slimmer, as are justifications for drastic intervention.
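
For the curious, here is a minimal sketch in Python of how such trends are computed: an ordinary least-squares line fit through annual temperature anomalies, with the slope converted to degrees Celsius per decade. The anomaly series below is synthetic, generated for illustration; the comparison in the text uses the actual NOAA observations and the 108-model average.

    # Least-squares warming trend from annual anomalies, in deg C/decade.
    # The anomaly series here is synthetic; a real calculation would use
    # the NOAA annual values for 1980-2014.
    import numpy as np

    years = np.arange(1980, 2015)
    rng = np.random.default_rng(0)
    anomalies = 0.014 * (years - 1980) + rng.normal(0.0, 0.08, years.size)

    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    print("Trend: {:.2f} deg C/decade".format(slope_per_year * 10.0))

Running the same fit over the model average and over the observations, then comparing the two slopes, is exactly the 0.24 versus 0.14°C/decade comparison quoted above.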

We don’t expect many stories to look any further than their “2014 is the warmest year ever” headlines.

As to the rest of the picture, and the part which holds the deeper and more important implications, well, you’ll have to keep checking back with us here—we’re happy to fill you in!

Tim Lynch

Last November, voters in Washington, DC overwhelmingly approved a referendum that would have legalized marijuana in the city.  Now that measure has been stymied by House Republicans, led by Rep. Andy Harris (R-MD).


From today’s Washington Post: The move “shocked elected DC leaders, advocates for marijuana legalization and civil liberties groups.”


As a constitutional matter, Congress can set policies for the District of Columbia, but this is an awful move.  No vote on marijuana reform, just an override of the voter-approved measure through language inserted into a gigantic spending bill.


Isn’t it interesting that such tactics never seem to be used to downsize the federal government and reduce its powers?  Why not zero out the budget for the DEA or the Export-Import Bank?

Charles Hughes

A new working paper from the National Bureau of Economic Research finds that significant minimum wage increases can hurt the very people they are intended to help. Authors Jeffrey Clemens and Michael Wither find that significant “minimum wage increases reduced the employment, average income, and income growth of low-skilled workers over short and medium-run time horizons.”  Most troublingly, these low-skilled workers saw “significant declines in economic mobility,” as they were 5 percentage points less likely to reach lower middle-class earnings in the medium term. The authors provide a possible explanation: the minimum wage increases reduced these workers’ “short-run access to opportunities for accumulating experience and developing skills.” Many of the people affected by minimum wage increases are on one of the first rungs of the economic ladder, low on marketable skills and experience. Working in these entry-level jobs will eventually allow them to move up the economic ladder. By making it harder for these low-skilled workers to get on the first rung of the ladder, minimum wage increases could actually lower their chances of reaching the middle class.

Most of the debate over a minimum wage increase centers on the effects of an increase on aggregate employment, or the total number of jobs and hours worked that would be lost. A consensus remains elusive, but the Congressional Budget Office recently weighed in, estimating that a three-year phase-in of a $10.10 federal minimum wage option would reduce total employment by about 500,000 workers by the time it was fully implemented. Taken with the findings of the Clemens and Wither study, not only can minimum wage increases have negative effects for the economy as a whole, they can also harm the economic prospects of low-skilled workers at the individual level.

Four states approved minimum wage increases through ballot initiatives in the recent midterm, and the Obama administration has proposed a significant increase at the federal level. This study should give them a reason to reconsider.

Recent Cato work on this topic can be found here and here.

Nicole Kaeding

Last night, House and Senate negotiators released the legislative text for the government’s newest spending bill, dubbed the “Cromnibus.” The bill authorizes the government to spend $1.1 trillion on discretionary programs between now and September 30, 2015. The total spending level honors last year’s Ryan-Murray budget deal, but also makes a number of important changes to federal law.

These changes include:

Environmental Protection Agency (EPA): The EPA’s funding was cut by $60 million from last fiscal year’s level. The agency’s budget has been cut by 21 percent since fiscal year 2010.

Department of Homeland Security (DHS): Following President Obama’s executive action on immigration, Republicans sought to limit funding for DHS. Under the deal, DHS is only funded through February. The incoming Congress will need to fund the agency for the remainder of the fiscal year.

Internal Revenue Service (IRS): The IRS’ budget is cut by $345.6 million.

ObamaCare: The bill does not cut funding to ObamaCare implementation, but it also does not include any new funding to the Department of Health and Human Services and the Internal Revenue Service, the two agencies with primary implementation responsibilities. The bill also limits ObamaCare’s risk corridor provision, which provided a bailout to insurance companies.

Marijuana: The District of Columbia voted overwhelmingly in November to legalize marijuana. The Cromnibus halts the legalization process.

Yucca Mountain: The bill continues funding for the proposed nuclear storage site. Earlier this year, the Nuclear Regulatory Commission confirmed Yucca Mountain’s safety.

Overseas Contingency Operations: The budget deal also provides $64 billion in funding for military operations, including $5 billion for the fight against ISIS. The $64 billion is in addition to the $1.1 trillion in discretionary spending.

Internet Tax Moratorium: The federal moratorium on state and local internet taxes continues for one year.


Neal McCluskey

When I first heard about the White House Summit on Early Education being held today, I worried. “I sure hope this isn’t going to be a PR stunt to cheerlead for government pre-kindergarten programs,” I thought. Then I got the announcement: U.S. Secretary of Education Arne Duncan will be having a Twitter chat with pop sensation Shakira in conjunction with the summit! “Oh, I was just being silly,” I said to myself, relieved that this would be a sober, objective discussion about what we do – and do not – know about the effectiveness of pre-K programs.

Okay, that’s not actually what happened. In fairness to Shakira, she does appear to have a very serious interest in children’s well-being. Unfortunately, the White House does not appear to want to have an objective discussion of early childhood education.

Just look at this, from the official White House blog:

For every dollar we invest in early childhood education, we see a rate of return of $7 or more through a reduced need for spending on other services, such as remedial education, grade repetition, and special education, as well as increased productivity and earnings for these kids as adults.

Early education is one of the best investments our country can make. Participation in high-quality early learning programs—like Head Start, public and private pre-K, and childcare—provide children from all backgrounds with a strong start and a solid foundation for success in school.

Let me count the ways that this is deceptive, or just plain wrong, as largely documented in David Armor’s recent Policy Analysis, The Evidence on Universal Preschool:

  • The 7-to-1 ROI figure – for which the White House cites no source – almost certainly comes from work done by James Heckman looking at the rate of return for the Perry Preschool program. It may well be accurate, but Perry was a microscopic, hyperintensive program from the 1960s that cannot be generalized to any modern, large-scale program.
  • If you look at the longitudinal, “gold-standard” research results for Head Start, you see that the modest advantages accrued early on essentially disappear by first grade…as if Head Start never happened. And it is federal studies released by the Obama administration that report this.
  • It stretches credulity to call Head Start “high quality,” not just based on its results, but on its long history of waste and paralysis. Throughout the 2000s the federal Government Accountability Office and general media reported on huge waste and failure in the program.
  • Most evaluations of state-level pre-K programs do not randomly assign children to pre-K and compare outcomes with those not chosen, the “gold standard” mentioned above. Instead they often use “regression discontinuity design,” which suffers from several shortcomings, arguably the biggest of which is that you can’t do longitudinal comparisons. In other words, you can’t detect the “fade out” that seems to plague early childhood education programs and render them essentially worthless. One large-scale state program that was evaluated using random assignment – Tennessee’s – appears to be ineffective.
  • The White House says early childhood programs can help “children from all backgrounds.” Not only is that not true if benefits fade to nothing, but a federal, random-assignment evaluation of the Early Head Start program found that it had negative effects on the most at-risk children.

I suspect the vast majority of people behind expanding preschool are well intentioned, and I encourage them to leverage as much private and philanthropic funding as they can to explore different approaches to pre-K and see what might work. But a splashy event intended to proclaim something is true for which we just don’t have good evidence doesn’t help anyone.

Let’s not mislead taxpayers…or kids.

Alex Nowrasteh

In a little-noticed memo on November 20th, Department of Homeland Security Secretary Jeh Johnson ordered Customs and Border Protection and Citizenship and Immigration Services to allow unlawful immigrants who are granted advance parole to depart the United States and reenter legally.  This memo is based on a decision rendered in a 2012 Board of Immigration Appeals case called Matter of Arrabally. Allowing the immigrant to legally leave and reenter on advance parole means he or she can apply for a green card from inside of the United States–if he or she qualifies. 

Advance parole can be granted to recipients of DACA (deferred action for childhood arrivals) and DAPA (deferred action for parental accountability) if they travel abroad for humanitarian, employment, or educational purposes, which are broadly defined.

Leaving the United States under advance parole means that the departure doesn’t legally count, so the 3/10 year bars are not triggered, and the unlawful immigrant can apply for a green card upon returning to the United States through 8 USC §1255 if he or she is immediately related to a U.S. citizen.  Reentering the United States under advance parole means that the prior illegal entry and/or presence are wiped out in the eyes of the law.  Crucially, individuals who present themselves for inspection and are either admitted or paroled by an immigration officer can apply for their green card from inside of the United States and wait here while their application is being considered.

In such a case, unlawful immigrants who receive deferred action and who are the spouses of American citizens will be able to leave the United States on advance parole and reenter legally, allowing them to apply for a green card once they return.  Unlawful immigrants who are the parents of adult U.S. citizen children will be able to do the same.  Unlawful immigrants who are the parents of minor U.S. citizen children and are paroled back into the country will just have to wait until those children are 21 years of age and then they can be sponsored for a green card.

According to New York-based immigration attorney Matthew Kolken, “President Obama’s policy change has the potential to provide a bridge to a green card for what could be millions of undocumented immigrants with close family ties to the United States.” 

Because the legal memo ensures the consistent application of the Arrabally decision, Johnson could grant advance parole to DACA and DAPA recipients, who will then be able to leave the United States and reenter to adjust their status and earn a green card if they have a family member who can sponsor them.  Advance parole would wipe out the threat of the 3/10 year bars for millions of unlawful immigrants and allow those who “touch back” in their home country and return legally to apply for their green cards from inside of the United States–a process called “adjustment of status.” 

This will apply only to those unauthorized immigrants who have a single immigration offense, such as entering unlawfully.  An unlawful immigrant who was deported or left voluntarily and then returned will not be eligible.  Immediate relatives of citizens who overstayed a legal visa are already eligible to apply for adjustment of status if they were previously inspected and admitted, despite their overstay, so this policy does not affect them.  Advance parole and legal reentry will only allow those unlawful immigrants who entered without inspection one time to legally leave and reenter the United States, where they can then apply for a green card if they have a family member who can sponsor them.

There is a potential legal catch.  To be eligible for parole under the statute, the foreigner’s parole would have to provide a significant public benefit or respond to an urgent humanitarian reason.  However, the parole requirements applied to DACA recipients who have received parole so far have been less onerous.  The “significant public benefit” and “urgent humanitarian reason” standards are potentially very difficult for DHS to satisfy when granting parole to DACA and DAPA recipients.

Kolken does not think those legal problems will constrain DHS in issuing advance parole.  “Advance parole is generally granted to recipients of deferred action who are able to establish that they intend to travel for humanitarian, employment or educational purposes,” he said.  “The problem lies with the fact that advance parole does not guarantee readmission into the country, which is why we need uniformity in the implementation of policy by inspecting officers.”  In other words, the current problem with advance parole is the unpredictability of the CBP officers at the port of entry.  The DHS memo should reduce that concern.

Advance parole could allow millions of DAPA and DACA recipients to adjust their status to lawful permanent residency.  By contrast, the 2013 Senate bill was only supposed to legalize around 8 million people, and over a much longer period of time.  Through manipulating the terribly confused and poorly written immigration laws, this executive action could legalize more unlawful immigrants more quickly than the Senate was willing to.  If he can do this legally (BIG question), one wonders: what took him so long to do it?

Daniel J. Mitchell

Many statists are worried that Republicans may install new leadership at the Joint Committee on Taxation (JCT) and Congressional Budget Office (CBO).

This is a big issue because these two score-keeping bureaucracies on Capitol Hill tilt to the left and have a lot of power over fiscal policy.

The JCT produces revenue estimates for tax bills, yet all their numbers are based on the naive assumption that tax policy generally has no impact on overall economic performance. Meanwhile, CBO produces both estimates for spending bills and also fiscal commentary and analysis, much of it based on the Keynesian assumption that government spending boosts economic growth.

I personally have doubts whether congressional Republicans are smart enough to make wise personnel choices, but I hope I’m wrong.

Matt Yglesias of Vox also seems pessimistic, but for the opposite reason.

He has a column criticizing Republicans for wanting to push their policies by using “magic math” and he specifically seeks to debunk the notion - sometimes referred to as dynamic scoring or the Laffer Curve - that changes in tax policy may lead to changes in economic performance that in turn affect tax revenue.

He asks nine questions and then provides his version of the right answers. Let’s analyze those answers and see which of his points have merit and which ones fall flat.

But even before we get to his first question, I can’t resist pointing out that he calls dynamic scoring “an accounting gimmick from the 1970s” in his introduction. That is somewhat odd since the JCT and CBO were both completely controlled by Democrats at the time and there was zero effort to do anything other than static scoring.

I suppose Yglesias actually means that dynamic scoring first became an issue in the 1970s as Ronald Reagan (along with Jack Kemp and a few other lawmakers) began to argue that lower marginal tax rates would generate some revenue feedback because of improved incentives to work, save, and invest.

Now let’s look at his nine questions and see if we can debunk his debunking:

1. The first question is “What is dynamic scoring?” and Yglesias responds to himself by stating it “is the idea that when estimating the budgetary impact of changes in tax policy, you ought to take into account changes to the economy induced by the policy change” and he further states that it “sounds like a reasonable idea.”

But then he says the real problem is that conservatives exaggerate and “say that large tax cuts will have a relatively small impact on the deficit—or even that they make the deficit smaller” and that they “cite an idea known as the Laffer Curve to argue that tax cuts increase growth so much that tax revenues actually rise.”

He’s sort of right. There are definitely examples of conservatives overstating the pro-growth impact of tax cuts, particularly when dealing with proposals—such as expanded child tax credits—that presumably will have no impact on economic performance since there is no change in marginal tax rates on productive behavior.

But notice that he doesn’t address the bigger issue, which is whether the current approach (static scoring) is accurate and appropriate even when dealing with major changes in marginal tax rates on work, saving, and investment. That’s what so-called supply-side economists care about, yet Yglesias instead prefers to knock down a straw man.

2. The second question is “What is the Laffer Curve?” and Yglesias answers his own question by asserting that the “basic idea of the curve is that sometimes lower tax rates lead to more tax revenue by boosting economic growth.” He then goes on to ridicule the notion that tax cuts are self-financing, even citing a column by National Review’s Kevin Williamson.

Once again, Yglesias is sort of right. Some Republicans have made silly claims, but he mischaracterizes what Williamson wrote.

More specifically, he’s wrong in asserting that the Laffer Curve is all about whether tax cuts produce more revenue. Instead, the notion of the curve is simply that you can’t calculate the revenue impact of changes in tax rates without also measuring the likely change in taxable income. The actual revenue impact of changes in tax rates will then depend on whether you’re on the upward-sloping part of the curve or downward-sloping part of the curve.

The real debate is the shape of the curve, not whether a Laffer Curve exists. Indeed, I’m not aware of a single economist, no matter how far to the left (including John Maynard Keynes), who thinks a 100 percent tax rate maximizes revenue. Yet that’s the answer from the JCT. Moreover, the Laffer Curve also shows that tax increases can impose very high economic costs even if they do raise revenue, so the value of using such analysis is not driven by whether revenues go up or down.
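
To make the distinction concrete, here is a toy numerical sketch in Python of the logic just described: revenue equals the tax rate times taxable income, but taxable income itself shrinks as rates rise. The behavioral response below is an invented functional form for illustration, not anyone’s actual scoring model.

    # Toy Laffer Curve: revenue = rate * taxable income, where taxable
    # income responds to the rate. The response function is invented.
    def taxable_income(rate, base=100.0, response=1.0):
        # Reported taxable income falls as the rate rises (assumed form).
        return base * (1.0 - rate) ** response

    def revenue(rate):
        return rate * taxable_income(rate)

    # Static scoring would hold taxable_income fixed regardless of rate;
    # dynamic scoring lets it respond. Note revenue at a 100% rate is zero.
    for rate in (0.0, 0.25, 0.50, 0.75, 1.00):
        print("rate {:>4.0%}: revenue {:5.1f}".format(rate, revenue(rate)))

In this toy version revenue peaks at a 50 percent rate and falls to zero at 100 percent; whether a real-world tax change sits on the upward- or downward-sloping side is precisely the empirical question that a static-scoring assumption ignores.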

3. The third question is “So do tax cuts boost economic growth?” and Yglesias responds by stating “the credible research on the matter is very very mixed.” But he follows that response by citing research which concluded that “a tax cut financed by reductions in wasteful spending or social assistance for the elderly would boost growth.”

But that leaves open the question as to whether the economy does better because of the lower tax burden, the lower spending burden, or some combination of the two effects. But I’ll take any of those three answers.

So is he “sort of right” again? Not so fast. Yglesias also cites the Congressional Research Service (which rubs me the wrong way) and a couple of academic economists who concluded that there is “no systematic correlation between the level of taxation and the level of economic growth.”

The bottom line is that there’s no consensus on the economic impact of taxation (in part because it is difficult to disentangle the impact of taxes from the impact on spending, and that’s not even including all the other policies that determine economic performance). But I still think Yglesias is being a bit misleading because there is far more consensus on the economic impact of marginal tax rates and debates about the Laffer Curve and dynamic scoring very often revolve around those types of tax policies.

4. The fourth question is “How does tax scoring work now?” and Yglesias responds to himself by noting that the various score-keeping bureaucracies measure “demand-side effects” and “behavioral effects.”

He’s right, but CBO uses so-called demand-side effects to justify Keynesian spending, so that’s not exactly reassuring news for people who focus more on real-world evidence.

And he’s also right that JCT measures changes in behavior (such as smokers buying fewer cigarettes if the tax goes up), and this type of analysis (sometimes called microeconomic dynamic scoring) certainly is a good thing.

But the real controversy is about macroeconomic dynamic scoring, which we’ll address below.

5. The fifth question is “Can we take a break from all this macroeconomic modeling?” and is simply an excuse for Yglesias to make a joke, though I can’t tell whether he is accusing Reagan supporters of being racists or mocking some leftists for accusing Reagan supporters of being racist.

So I’m not sure how to react, other than to recommend the fourth video at this link if you want some real Reagan humor.

6. The sixth question is “What do current scoring methods leave out?” and Yglesias accurately notes that what “dynamic-scoring proponents want is a model of macroeconomic consequences. They think that a country with lower tax rates will see more investment in physical and human capital, leading to more productivity, and more economic growth.”

He even cites my blog post from last month and correctly describes me as believing that it is “self-evidently ridiculous that the current CBO model says higher tax rates would lead to faster economic growth via lower deficits.”

I also think he is fair in pointing out that “people sharply disagree about how much tax rates actually influence economic growth” and that “the whole terrain is enormously contested.”

But this is why I think my view is the reasonable middle ground. At one extreme you find (at least in theory) some over-enthusiastic Republican types who argue that all tax cuts are self-financing. At the other extreme you find the JCT saying tax policy has no impact on the economy and actually arguing that you maximize tax revenue with 100 percent tax rates. I suspect that Yglesias, if pressed, will agree the JCT approach is nonsensical.

So why not have the JCT—in a fully transparent manner—begin to incorporate macroeconomic analysis?

7. The seventh question is “Has dynamic scoring ever been tried?” and Yglesias self-responds by pointing out that a Treasury Department dynamic analysis of the 2001 and 2003 tax cuts came to the conclusion that “the resulting budget impact would be 7 percent smaller than what was suggested by conventional scoring methods” and “ended with the conclusion that the Bush tax cuts substantially decreased revenue.”

In other words, dynamic analysis was not used to imply that tax cuts are self-financing. Indeed, the dynamic score in the example of what would happen if the Bush tax cuts were made permanent turned out to be very modest.

So why, then, are folks on the left so determined to block reforms that, in practice, don’t yield dramatic changes in numbers? My own guess, for what it’s worth, is that they don’t want any admission or acknowledgement that lower tax rates are better for growth than higher tax rates.

8. The eighth question is “Why are we talking about dynamic scoring now?” and Yglesias answers his own question by accurately stating that “the Republican takeover of Congress starting in 2015 gives the GOP an opportunity to either change the scoring rules, change the personnel in charge of the scoring, or both.”

He’s not just sort of right. He’s completely right. I have no disagreements.

9. The ninth question is “Why does the score matter?” and his self-response is “the scores matter because perceptions matter in politics.” In other words, politicians don’t want to be accused of enacting legislation that is predicted to increase red ink.

Yglesias is also right when he writes that this “effect shouldn’t be exaggerated. In the past, Republicans haven’t hesitated to vote for tax measures that the CBO says will increase the deficit. That’s because they have a strong preference for low tax rates.”

At the risk of being boring, I also think he’s right about the degree to which scores matter.

The bottom line is that questions #1, #2, #3, and #6 are the ones that matter. Yglesias makes plenty of reasonable points, but I think his argument ultimately falls flat because he spends too much time attacking the all-tax-cuts-pay-for-themselves straw man and not enough time addressing whether it is reasonable for the JCT to use a methodology that assumes taxes have no effect on the overall economy.

But I expect to hear similar arguments, expressed in a more strident fashion, if Republicans take prudent steps—starting with personnel changes—to modernize the JCT and CBO apparatus.

P.S. While tax cuts usually do lead to revenue losses, there is at least one very prominent case of lower tax rates leading to more revenue.

P.P.S. If the JCT approach is reasonable, why do the overwhelming majority of CPAs disagree? Is it possible that they have more real-world understanding of how taxpayers (particularly upper-income taxpayers) respond when tax rates change?

P.P.P.S. If the JCT approach is reasonable, why do international bureaucracies so often produce analysis showing a Laffer Curve?

There’s also some nice evidence from Denmark, Canada, France, and the United Kingdom.

Patrick J. Michaels

The 20th annual “Conference of the Parties” to the UN’s 1992 climate treaty (“COP-20”) is in its second week in Lima, Peru, and the news is the same as from pretty much every other one.

You don’t need a calendar to know when these are coming up, as the media are flooded with global warming horror stories every November. This year’s version is that West Antarctic glaciers are shedding a “Mount Everest” of ice every year. That really does raise sea level—about 2/100 of an inch per year. As we noted here, that reality probably wouldn’t have made a headline anywhere.
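
As a back-of-envelope check on that “2/100 of an inch” figure, here is a short Python sketch that spreads an assumed annual ice-mass loss over the world ocean as meltwater. The 180-gigatonne loss rate is an illustrative assumption chosen to reproduce the number in the text, not a value taken from the underlying study.

    # Convert an assumed Antarctic ice-mass loss into sea level rise by
    # spreading the meltwater over the world ocean. The loss rate is an
    # assumption for illustration only.
    OCEAN_AREA_M2 = 3.61e14      # approximate area of the world ocean
    WATER_DENSITY = 1000.0       # kg per cubic meter of meltwater
    METERS_TO_INCHES = 39.37

    ice_loss_kg_per_year = 180e12  # assumed: ~180 gigatonnes per year

    rise_m = ice_loss_kg_per_year / (WATER_DENSITY * OCEAN_AREA_M2)
    print("Sea level rise: {:.3f} inches/year".format(rise_m * METERS_TO_INCHES))
    # ~0.020 inches/year, i.e., about 2/100 of an inch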

The meetings are also preceded by some great climate policy “breakthrough.” This year’s was the president’s announcement that China, for the first time, was committed to capping its emissions by 2030. They did no such thing; they said they “intend” to level their emissions off “around” 2030. People “intend” to do a lot of things that don’t happen.

During the first week of these two-week meetings, developing nations coalesce around the notion that the developed world (read: United States) must pay them $100 billion per year in perpetuity in order for them to even think about capping their emissions. It’s happened in at least the last five COPs.

In the second week, the UN announces, dolefully, that the conference is deadlocked, usually because the developing world has chosen not to commit economic suicide. Just yesterday, India announced that it simply wasn’t going to reduce its emissions at the expense of development.

Then an American savior descends. In Bali, in 2007, it was Al Gore. In 2009, Barack Obama arrived and barged into one of the developing nation caucuses, only to be asked politely to leave. This week it will be Secretary of State John Kerry, who earned his pre-meeting bones by announcing that climate change is the greatest threat in the world.

I guess nuclear war isn’t so bad after all.

As the deadlock will continue, the UN will announce that the meeting is going to go overtime, beyond its scheduled Friday end. Sometime on the weekend—and usually just in time to get to the Sunday morning newsy shows—Secretary Kerry will announce a breakthrough, the meeting will adjourn, and everyone will go home to begin the cycle anew until next December’s COP-21 in Paris, where a historic agreement will be inked.

Actually, there was something a little different in Lima this year: Given all the travel and its relative distance from Eurasia, COP-20 set the all-time record for carbon dioxide emissions associated with these annual gabfests.

Doug Bandow

WALLAY, BURMA—When foreign dignitaries visit Myanmar, still known as Burma in much of the West, they don’t walk the rural hills over which the central government and ethnic groups such as the Karen fought for decades, hills like those around isolated Wallay village.

Wallay gets none of the attention of bustling Rangoon or the empty capital of Naypyitaw. Yet the fact that I could visit without risking being shot may be the most important evidence of change in Burma. For three years the Burmese army and Karen National Liberation Army have observed a ceasefire. For the first time in decades Karen children are growing up with the hope of a peaceful future.

The global face of what Burma could become remains Aung San Suu Kyi, the heroic Nobel Laureate who won the last truly free election in 1990—which was promptly voided by the military junta. The fact that she is free after years of house arrest demonstrates the country’s progress. The fact that she is barred from running for president next year, a race she almost certainly would win, illustrates the challenges remaining for Burma’s transformation.

The British colony gained its independence after World War II. The country’s short-lived democracy was terminated by General Ne Win in 1962. The paranoid junta relentlessly waged war on the Burmese people.

Then the military made a dramatic U-turn, four years ago publicly stepping back from power. Political prisoners were released, media restrictions were relaxed, and Suu Kyi’s party, the National League for Democracy, was allowed to register.

The U.S. and Europe lifted economic sanctions and exchanged official visits. Unfortunately, however, in recent months the reform process appears to have gone into neutral, if not reverse.

While most of the military battles in the east are over, occasional clashes still occur. None of the 14 ceasefires so far reached has been converted into a permanent peace. While investment is sprouting in some rebel-held areas, most communities, like Wallay, are waiting for certain peace and sustained progress.

Of equal concern, Rakhine State has been torn by sectarian violence, exacerbated by the security forces. At least 200 Muslim Rohingyas have been killed and perhaps 140,000 people, mostly Rohingyas, displaced.

Political reform also remains incomplete. Particularly serious has been the reversal of media freedom and imprisonment of journalists. Khin Ohmar, with Burma Partnership, a civil society network, cited “surveillance, scrutiny, threats and intimidation.”

The 2008 constitution bars Suu Kyi from contesting the presidency. Arbitrarily barring the nation’s most popular political figure from the government’s top position would make any outcome look illegitimate.

Even economic liberalization has stalled. Much of the economy remains in state- or military-controlled hands.

In short, the hopes that recently soared high for Burma have crashed down to reality.

But U.S. influence is limited. Washington could reimpose economic sanctions. However, returning to the policy of the past would be a dead end.

Nor can the U.S. win further reform with more aid. Washington’s lengthy record of attempting to “buy” political change is exceedingly poor. In any case, participation in the Western economies is worth more than any likely official assistance package.

The administration also hopes to use military engagement as leverage for democracy. Unfortunately, contact with America is not enough to win foreign military men to democracy.

As I wrote in Forbes online:  “The best strategy would be to work with Europe and Japan to develop a list of priority political reforms and tie them to further allied support and cooperation. These powers also should point out that a substantially larger economy would yield plenty of wealth for regime elites and the rest of the population, whose aspirations are rising.”

Finally, friends of liberty worldwide should offer aid and support to Burmese activists.

During his recent visit President Obama said:  “We recognize change is hard and you do not always move in a straight line, but I’m optimistic.” This still impoverished nation has come far yet has equally far to go. America must continue to engage the regime in Naypyitaw with prudence and patience.

Ted Galen Carpenter

As if the United States didn’t already have enough foreign policy worries, a dangerous issue that has been mercifully quiescent over the past five years shows signs of reviving.  Taiwan’s governing Kuomintang Party (KMT) and its conciliatory policy toward Beijing suffered a brutal defeat in elections for local offices on November 29.  Indeed, the extent of the KMT’s rout made the losses the Democratic Party experienced in U.S. midterm congressional elections look like a mild rebuke.  The setback was so severe that President Ma Ying-jeou promptly resigned as party chairman.  Although that decision does not change Ma’s role as head of the government, it does reflect his rapidly declining political influence.

As I discuss in an article over at The National Interest Online, growing domestic political turbulence in Taiwan is not just a matter of academic interest to the United States.  Under the 1979 Taiwan Relations Act, Washington is obligated to assist Taipei’s efforts to maintain an effective defense.  Another provision of the TRA obliges U.S. leaders to regard any coercive moves Beijing might take against the island as a serious threat to the peace of East Asia.  

During the presidencies of Lee Teng-hui and Chen Shui-bian, from the mid-1990s to 2008, Beijing reacted badly to efforts by those leaders to convert Taiwan’s low-key, de facto independence into something more formal and far-reaching.  As a result, periodic crises erupted between Beijing and Washington.  U.S. officials seemed relieved when voters elected the milder, more conciliatory Ma as Chen’s successor.  That political change also seemed to reflect concern on the part of a majority of Taiwanese that Chen and his explicitly pro-independence Democratic Progressive Party (DPP) had pushed matters to a dangerous level in testing Beijing’s forbearance.

But just as Chen may have overreached and forfeited domestic support by too aggressively promoting a pro-independence agenda, his successor appears to have drifted too far in the other direction.  Domestic sentiment for taking a stronger stance toward the mainland on a range of issues has been building for at least the past two years.  Public discontent exploded in March 2014 in response to a new trade deal between Taipei and Beijing, which opponents argued would give China far too much influence over Taiwan’s economy.  Those disorders culminated with an occupation of Taiwan’s legislature, accompanied by massive street demonstrations that persisted for weeks.  The November election results confirmed the extent of the public’s discontent.

Perhaps reflecting the shift in public sentiment toward Beijing, even Ma’s government began to adopt a more assertive stance on security issues, despite pursuing enhanced economic ties.  Taipei’s decision in the fall of 2014 to spend $2.5 billion on upgraded anti-missile systems reflected a renewed seriousness about protecting Taiwan’s security and deterring Beijing from contemplating aggression.

China’s reaction to the November election results was quick and emphatic.  Chinese media outlets cautioned the victorious DPP against interpreting the election outcome as a mandate for more hard-line positions on cross-strait issues.  Even more ominous, Retired General Liu Jingsong, the former president of the influential Chinese Academy of Military Sciences, warned that the Taiwan issue “will not remain unresolved for a long time.”  Moreover, Chinese officials “will not abandon the possibility of using force” to determine the island’s political status.  Indeed, he emphasized that it remained an option “to resolve the issue by military means, if necessary.” That is a noticeably different tone from Deng Xiaoping’s statement in the late 1970s that there was no urgency to deal with the Taiwan issue—that it could even go on for a century without posing a serious problem.

A key question now is whether Beijing will tolerate even a mildly less cooperative Taiwan.  Chinese leaders have based their hopes on the belief that greater cross-strait economic relations would erode Taiwanese enthusiasm for any form of independence.  That does not appear to have happened.  Opinion polls indicate meager support for reunification with the mainland—even if it included guarantees of a high degree of political autonomy.

But the adoption of a confrontational stance on Beijing’s part regarding Taiwan would quickly reignite that issue as a source of animosity in U.S.-China relations.  The Obama years have already seen a worrisome rise in bilateral tensions.  The announced “pivot” or “rebalancing” of U.S. forces to East Asia has intensified Beijing’s suspicions about Washington’s motives.  Sharp differences regarding territorial issues in the South China and East China seas have also been a persistent source of friction.  The slumbering Taiwan issue is now poised to join that list of worrisome flashpoints.

Randal O'Toole

Maryland’s Governor-elect Larry Hogan has promised to cancel the Purple Line, another low-capacity rail boondoggle in suburban Washington, DC, that would cost taxpayers at least $2.4 billion to build and much more to operate and maintain. The initial projections for the line were that it would carry so few passengers that the Federal Transit Administration wouldn’t even fund it under the rules then in place. Obama has since changed those rules, but, taking no chances, Maryland’s current governor, Martin O’Malley, hired Parsons Brinckerhoff with the explicit goal of boosting ridership estimates to make the project fundable.

I first looked at the Purple Line in April 2013, when the draft EIS (written by a team led by Parsons Brinckerhoff) was released, projecting that the line would carry more than 36,000 trips each weekday in 2030. That is far more than the 23,000 trips per weekday carried by the average light-rail line in the country in 2012. Despite this optimistic projection, the DEIS revealed that the rail project would both increase congestion and use more energy than all the cars it took off the road (though to find the congestion result you had to read the accompanying traffic analysis technical report, pp. 4-1 and 4-2).

A few months after I made these points in a blog post and various public presentations, Maryland published Parsons Brinckerhoff’s final EIS, which made an even more optimistic ridership projection: 46,000 riders per day in 2030, 28 percent more than in the draft. Measured by trips per station or per mile of rail line, only the light-rail systems in Boston and Los Angeles carry more riders than the FEIS projected for the Purple Line.

Considering the huge demographic differences between Boston, Los Angeles, and Montgomery County, Maryland, it isn’t credible to think that the Purple Line’s performance will approach that of the Boston and L.A. rail lines. First, urban Suffolk County (Boston) has 12,600 people per square mile and urban Los Angeles County has 6,900 people per square mile, both far more than urban Montgomery County’s 3,500 people per square mile.

However, it is not population densities but job densities that really make transit successful. Boston’s downtown, the destination of most of its light-rail (Green Line) trips, has 243,000 jobs. Los Angeles’s downtown, which is at the end of all but one of its light-rail lines, has 137,000 jobs. LA’s Green Line doesn’t go downtown, but it serves LA Airport, which contains and is surrounded by 135,000 jobs.

Montgomery County, where the Purple Line would go, really has no major job centers. The closest is the University of Maryland, which has about 46,000 jobs and students, a small fraction of the LA and Boston job centers. Though the university is on the proposed Purple Line, the campus covers 1,250 acres, which means many students and employees will not work or have classes within easy walking distance of the rail stations. Thus, the ridership projections for the Purple Line are not credible.

In terms of distribution of jobs and people, Montgomery County is more like San Jose than Boston or Los Angeles. San Jose has three light-rail lines, which together carry fewer than 35,000 riders per day, less than the DEIS projected for the Purple Line.

Given the FEIS’s higher ridership numbers, it’s not surprising that it reported that the line would save energy and reduce congestion, the opposite of the DEIS findings. However, a close look reveals that, even at the higher ridership numbers, these conclusions are suspect.

The traffic analysis for the DEIS estimated the average speeds of auto traffic in 2030 with and without the Purple Line. Without the line, speeds would average 24.5 mph; with the line, they would average 24.4 mph. Multiplied across the large number of travelers in the area, that small difference meant the line would waste 13 million hours of people’s time per year.
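To see how a 0.1 mph slowdown compounds into millions of lost hours, here is a back-of-the-envelope sketch. The daily person-miles figure is a hypothetical assumption for illustration, not a number from the DEIS; only the two average speeds come from the traffic analysis.

# Back-of-the-envelope: how a 0.1 mph slowdown becomes millions of hours.
# The person-miles figure is a hypothetical assumption, NOT a DEIS number;
# only the two average speeds come from the DEIS traffic analysis.

person_miles_per_day = 200e6   # assumed daily person-miles of auto travel in the region
speed_no_build = 24.5          # mph, DEIS 2030 average without the Purple Line
speed_build = 24.4             # mph, DEIS 2030 average with the Purple Line

hours_no_build = person_miles_per_day / speed_no_build   # daily travel time, hours
hours_build = person_miles_per_day / speed_build

extra_hours_per_year = (hours_build - hours_no_build) * 365
print(f"Extra travel time: {extra_hours_per_year / 1e6:.1f} million hours per year")
# With these assumed inputs the result is on the order of the 13 million
# hours per year of wasted time computed from the DEIS figures.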

The traffic analysis for the FEIS made no attempt to estimate average speeds. Instead, it looked at the level of service (LOS)–a letter grade from A to F–at various intersections affected by the rail line. Without the line, 15 intersections in the morning and 16 in the afternoon would be degraded to LOS F by 2040. With the line, only 8 in the morning and 15 in the afternoon would be at LOS F (p. 30). That makes it appear that the rail line reduces congestion.

A careful reading reveals this isn’t true. For the no-build alternative, planners assumed that absolutely nothing would be done to relieve congestion. For the rail alternative, planners assumed that various mitigation measures would be applied “to allow the intersections to operate in the most efficient conditions.” It seems likely that these mitigation measures, not the rail line, are the reason why the preferred alternative has fewer intersections at LOS F.

Meanwhile, the energy analysis contains two serious flaws. First, it assumes that cars in 2040 will use the same energy per mile as cars in 2010. In fact, given the latest fuel-economy standards, the average car on the road in 2040 will use less than half the energy of the average car in 2010.

Even more serious, the final EIS assumed that each kilowatt-hour of electricity needed to power the rail line required 3,412 BTUs of energy (calculated by dividing BTUs by kWhs in table 4-41 on page 4-142). While one kWh is equal to 3,412 BTUs, energy losses in generation and transmission mean it takes 10,339 BTUs of energy to generate and transmit that kWh to the railhead (see page A-18 of the Department of Energy’s Transportation Energy Data Book). This is such a rookie mistake that Parsons Brinckerhoff’s experts would have had to work hard looking the other way for it to slip through. In any case, after correcting both of these errors, the rail line ends up using more energy than the cars it takes off the road, just as the DEIS found.
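As a sanity check on the size of that error, here is a minimal sketch. The only inputs taken from the post are the two BTU-per-kWh figures; the annual electricity figure is a hypothetical placeholder that exists only to show the shape of the correction.

# Sketch of the site-vs-source energy correction described above. The two
# BTU-per-kWh figures come from the post; the annual electricity figure is
# a hypothetical placeholder.

site_btu_per_kwh = 3_412      # heat content of one kWh at the point of use
source_btu_per_kwh = 10_339   # BTUs burned to generate and deliver one kWh
                              # (DOE Transportation Energy Data Book, p. A-18)

correction_factor = source_btu_per_kwh / site_btu_per_kwh
print(f"Undercount factor: {correction_factor:.2f}x")   # ~3.03x

rail_kwh_per_year = 50e6      # assumed annual traction power, kWh (hypothetical)
as_scored_btu = rail_kwh_per_year * site_btu_per_kwh
corrected_btu = rail_kwh_per_year * source_btu_per_kwh
print(f"As scored: {as_scored_btu:.3g} BTU/year")
print(f"Corrected: {corrected_btu:.3g} BTU/year")
# Roughly tripling the rail line's charged energy use, while also halving the
# assumed 2040 auto energy per mile, flips the FEIS conclusion back to the DEIS one.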

In short, Maryland’s ridership projections for the Purple Line are extremely optimistic, but even if they turned out to be correct, the Purple Line would still increase both traffic congestion and energy consumption. There is no valid reason for funding this turkey, and Governor-elect Hogan should chop off its head.

Patrick J. Michaels and Paul C. "Chip" Knappenberger

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

——–

A favorite global warming chestnut is that human-caused climate change will make the planet uninhabitable for Homo sapiens (that’s us). The latest iteration of this cli-fi classic appears in this week’s New York Times coverage of the U.N. climate talks taking place in Lima, Peru (talks that are destined to fail, as we point out here).

Back in September, the World Health Organization (WHO) released a study claiming that global warming resulting from our pernicious economic activity will lead to a quarter million extra deaths each year between 2030 and 2050.  Yup, starting a mere 15 years from today. Holy cats!

That raised the antennae of Indur M. Goklany, a science and technical policy analyst who studies humanity’s well-being and the impact of environmental change upon it. Goklany detailed many of his findings in a 2007 book he wrote for Cato, The Improving State of the World: Why We’re Living Longer, Healthier, More Comfortable Lives on a Cleaner Planet.

As you may imagine, Goklany found much at fault with the WHO study and wrote up his findings for the Global Warming Policy Foundation (GWPF)—a U.K. think tank which produces a lot of good material on global warming.

In “Unhealthy Exaggeration: The WHO report on climate change” Goklany doesn’t pull any punches. You ought to have a look at the full report, but in the meantime, here is the Summary:

In the run-up to the UN climate summit in September 2014, the World Health Organization (WHO) released, with much fanfare, a study that purported to show that global warming will exacerbate undernutrition (hunger), malaria, dengue, excessive heat and coastal flooding and thereby cause 250,000 additional deaths annually between 2030 and 2050. This study, however, is fundamentally flawed.

Firstly, it uses climate model results that have been shown to run at least three times hotter than empirical reality (0.15°C vs 0.04°C per decade, respectively), despite using 27% lower greenhouse gas forcing.

Secondly, it ignores the fact that people and societies are not potted plants; that they will actually take steps to reduce, if not nullify, real or perceived threats to their life, limb and well-being. Thus, if the seas rise around them, heatwaves become more prevalent, or malaria, diarrhoeal disease and hunger spread, they will undertake adaptation measures to protect themselves and reduce, if not eliminate, the adverse consequences. This is not a novel concept. Societies have been doing just this for as long as such threats have been around, and over time and as technology has advanced they have gotten better at it. Moreover, as people have become wealthier, these technologies have become more affordable. Consequently, global mortality rates from malaria and extreme weather events, for instance, have been reduced at least five-fold in the past 60 years.

Yet, the WHO study assumes, explicitly or implicitly, that in the future the most vulnerable populations – low income countries in Africa, Europe, southeast Asia and the western Pacific – will not similarly avail themselves of technology or take any commonsense steps to protect themselves. This is despite many suitable measures already existing – adapting to sea level rise for example – while others are already at the prototype stage and are being further researched and developed: early-warning systems for heatwaves or the spread of malaria or steps to improve sanitation, hygiene or the safety of drinking water.

Finally, the WHO report assumes, erroneously, if the IPCC’s Fifth Assessment Report is to be believed, that carbon dioxide levels above 369 ppm – today we are at 400ppm and may hit 650ppm if the scenario used by the WHO is valid – will have no effect on crop yields. Therefore, even if one assumes that the relationships between climatic variables and mortality used by the WHO study are valid, the methodologies and assumptions used by WHO inevitably exaggerate future mortality increases attributable to global warming, perhaps several-fold.

In keeping with the topic of bad predictions, check out the “Friday Funny” at the Watts Up With That blog where guest blogger Tom Scott has compiled a list of failed eco-climate claims dating back nearly a century. He’s collected some real doozies. Here are a few of the best:

“By the year 2000 the United Kingdom will be simply a small group of impoverished islands, inhabited by some 70 million hungry people … If I were a gambler, I would take even money that England will not exist in the year 2000.” -Paul Ehrlich, Speech at British Institute For Biology, September 1971

Some predictions for the next decade (1990’s) are not difficult to make… Americans may see the ’80s migration to the Sun Belt reverse as a global warming trend rekindles interest in cooler climates. -Dallas Morning News December 5th 1989

Giant sand dunes may turn Plains to desert – Huge sand dunes extending east from Colorado’s Front Range may be on the verge of breaking through the thin topsoil, transforming America’s rolling High Plains into a desert, new research suggests. The giant sand dunes discovered in NASA satellite photos are expected to re-emerge over the next 20 to 50 years, depending on how fast average temperatures rise from the suspected “greenhouse effect,” scientists believe. -Denver Post April 18, 1990

There are many more where these came from. To lighten your day, you ought to have a look!

David Boaz

The royals are coming, the royals are coming! In this case, the grandson of the Queen of England, along with his wife, who took a fairytale leap from commoner to duchess by marrying him. (Just imagine, Kate Middleton a duchess while Margaret Thatcher was only made a baroness.) And once again Americans who have forgotten the American Revolution are telling us to bow and curtsy before them, and address them as “Your Royal Highness,” and stand when William enters the room.

So one more time: Americans don’t bow or curtsy to foreign monarchs. (If you don’t believe me, ask Miss Manners, repeatedly.)

This is a republic. We do not recognize distinctions among individuals based on class or birth. We are not subjects of the queen of England, the emperor of Japan, the king of Swaziland, or the king of Saudi Arabia. Therefore we don’t bow or curtsy to foreign heads of state.

Prince William’s claim to such deference is that he is a 24th-generation descendant of William the Conqueror, who invaded England and subjugated its inhabitants. In Common Sense, one of the founding documents of the American Revolution, Thomas Paine commented on that claim:

Could we take off the dark covering of antiquity, and trace them to their first rise, that we should find the first [king] nothing better than the principal ruffian of some restless gang, whose savage manners or pre-eminence in subtility obtained him the title of chief among plunderers; and who by increasing in power, and extending his depredations, over-awed the quiet and defenceless to purchase their safety by frequent contributions….

England, since the conquest, hath known some few good monarchs, but groaned beneath a much larger number of bad ones; yet no man in his senses can say that their claim under William the Conqueror is a very honorable one. A French bastard landing with an armed banditti, and establishing himself king of England against the consent of the natives, is in plain terms a very paltry rascally original.—It certainly hath no divinity in it.

Citizens of the American republic don’t bow to monarchs, or their grandsons.

 
