Feed aggregator

Alex Nowrasteh

In a little-noticed memo on November 20th, Department of Homeland Security Secretary Jeh Johnson ordered Customs and Border Protection and Citizenship and Immigration Services to allow unlawful immigrants who are granted advance parole to depart the United States and reenter legally. This memo is based on a decision rendered in a 2012 Board of Immigration Appeals case called Matter of Arrabally. Allowing the immigrant to legally leave and reenter on advance parole means he or she can apply for a green card from inside the United States, if he or she qualifies.

Advance parole can be granted to recipients of DACA (deferred action for childhood arrivals) and DAPA (deferred action for parental accountability) if they travel abroad for humanitarian, employment, or educational purposes, which are broadly defined.

Leaving the United States under advance parole means that the departure doesn’t legally count, so the 3/10 year bars are not triggered, and the unlawful immigrant can apply for a green card upon returning to the United States under 8 USC §1255 if he or she is an immediate relative of a U.S. citizen.  Reentering the United States under advance parole means that the prior illegal entry and/or presence are wiped out in the eyes of the law.  Crucially, individuals who present themselves for inspection and are either admitted or paroled by an immigration officer can apply for their green card from inside the United States and wait here while their application is being considered.

As a result, unlawful immigrants who receive deferred action and who are the spouses of American citizens will be able to leave the United States on advance parole and reenter legally, allowing them to apply for a green card once they return.  Unlawful immigrants who are the parents of adult U.S. citizen children will be able to do the same.  Unlawful immigrants who are the parents of minor U.S. citizen children and are paroled back into the country will just have to wait until those children are 21 years of age, at which point the children can sponsor them for a green card.

According to New York-based immigration attorney Matthew Kolken, “President Obama’s policy change has the potential to provide a bridge to a green card for what could be millions of undocumented immigrants with close family ties to the United States.”

Because the legal memo ensures the consistent application of the Arrabally decision, Johnson can grant advance parole to DACA and DAPA recipients, who will then be able to leave the United States and reenter to adjust their status and earn a green card if they have a family member who can sponsor them.  Advance parole would eliminate the threat of the 3/10 year bars for millions of unlawful immigrants and allow those who “touch back” in their home country and return legally to apply for their green cards from inside the United States, a process called “adjustment of status.”

This will only apply to those unauthorized immigrants who have committed a single immigration offense, such as entering unlawfully.  An unlawful immigrant who was deported or left voluntarily and then returned will not be eligible.  Immediate relatives of citizens who overstayed a legal visa are already eligible to apply for adjustment of status if they were previously inspected and admitted, despite their overstay, so this policy does not affect them.  Advance parole and legal reentry will only allow those unlawful immigrants who entered without inspection one time to legally leave and reenter the United States, where they can then apply for a green card if they have a family member who can sponsor them.

There is a potential legal catch.  To be eligible for parole under the statute, the foreigner would have to provide a significant public benefit or be paroled for an urgent humanitarian reason.  However, the parole requirements applied so far to DACA recipients who have received parole are less onerous.  The “significant public benefit” and “urgent humanitarian reason” standards are potentially very difficult burdens for DHS to meet when granting parole to DACA and DAPA recipients.

Kolken does not think those legal problems will constrain DHS in issuing advance parole.  “Advance parole is generally granted to recipients of deferred action who are able to establish that they intend to travel for humanitarian, employment or educational purposes,” he said.  “The problem lies with the fact that advance parole does not guarantee readmission into the country, which is why we need uniformity in the implementation of policy by inspecting officers.”  In other words, the current problem with advance parole is the unpredictability of the CBP officers at the port of entry.  The DHS memo should reduce that concern.

Advance parole could allow millions of DAPA and DACA recipients to adjust their status to lawful permanent residency.  By contrast, the 2013 Senate bill was only supposed to legalize around 8 million and over a much longer period of time.  Through manipulating the terribly confused and poorly written immigration laws, this executive action could legalize more unlawful immigrants more quickly than the Senate was willing to.  If he can do this legally (BIG question), one wonders: what took him so long to do it?    

Daniel J. Mitchell

Many statists are worried that Republicans may install new leadership at the Joint Committee on Taxation (JCT) and Congressional Budget Office (CBO).

This is a big issue because these two score-keeping bureaucracies on Capitol Hill tilt to the left and have a lot of power over fiscal policy.

The JCT produces revenue estimates for tax bills, yet all their numbers are based on the naive assumption that tax policy generally has no impact on overall economic performance. Meanwhile, CBO produces both estimates for spending bills and also fiscal commentary and analysis, much of it based on the Keynesian assumption that government spending boosts economic growth.

I personally have doubts whether congressional Republicans are smart enough to make wise personnel choices, but I hope I’m wrong.

Matt Yglesias of Vox also seems pessimistic, but for the opposite reason.

He has a column criticizing Republicans for wanting to push their policies by using “magic math” and he specifically seeks to debunk the notion - sometimes referred to as dynamic scoring or the Laffer Curve - that changes in tax policy may lead to changes in economic performance that affect tax revenue.

He asks nine questions and then provides his version of the right answers. Let’s analyze those answers and see which of his points have merit and which ones fall flat.

But even before we get to his first question, I can’t resist pointing out that he calls dynamic scoring “an accounting gimmick from the 1970s” in his introduction. That is somewhat odd since the JCT and CBO were both completely controlled by Democrats at the time and there was zero effort to do anything other than static scoring.

I suppose Yglesias actually means that dynamic scoring first became an issue in the 1970s as Ronald Reagan (along with Jack Kemp and a few other lawmakers) began to argue that lower marginal tax rates would generate some revenue feedback because of improved incentives to work, save, and invest.

Now let’s look at his nine questions and see if we can debunk his debunking:

1. The first question is “What is dynamic scoring?” and Yglesias responds to himself by stating it “is the idea that when estimating the budgetary impact of changes in tax policy, you ought to take into account changes to the economy induced by the policy change” and he further states that it “sounds like a reasonable idea.”

But then he says the real problem is that conservatives exaggerate and “say that large tax cuts will have a relatively small impact on the deficit—or even that they make the deficit smaller” and that they “cite an idea known as the Laffer Curve to argue that tax cuts increase growth so much that tax revenues actually rise.”

He’s sort of right. There are definitely examples of conservatives overstating the pro-growth impact of tax cuts, particularly when dealing with proposals—such as expanded child tax credits—that presumably will have no impact on economic performance since there is no change in marginal tax rates on productive behavior.

But notice that he doesn’t address the bigger issue, which is whether the current approach (static scoring) is accurate and appropriate even when dealing with major changes in marginal tax rates on work, saving, and investment. That’s what so-called supply-side economists care about, yet Yglesias instead prefers to knock down a straw man.

2. The second question is “What is the Laffer Curve?” and Yglesias answers his own question by asserting that the “basic idea of the curve is that sometimes lower tax rates lead to more tax revenue by boosting economic growth.” He then goes on to ridicule the notion that tax cuts are self-financing, even citing a column by National Review’s Kevin Williamson.

Once again, Yglesias is sort of right. Some Republicans have made silly claims, but he mischaracterizes what Williamson wrote.

More specifically, he’s wrong in asserting that the Laffer Curve is all about whether tax cuts produce more revenue. Instead, the notion of the curve is simply that you can’t calculate the revenue impact of changes in tax rates without also measuring the likely change in taxable income. The actual revenue impact of changes in tax rates will then depend on whether you’re on the upward-sloping part of the curve or downward-sloping part of the curve.

The real debate is the shape of the curve, not whether a Laffer Curve exists. Indeed, I’m not aware of a single economist, no matter how far to the left (including John Maynard Keynes), who thinks a 100 percent tax rate maximizes revenue. Yet that’s the answer from the JCT. Moreover, the Laffer Curve also shows that tax increases can impose very high economic costs even if they do raise revenue, so the value of using such analysis is not driven by whether revenues go up or down.
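The logic above can be sketched numerically. In this toy model (the functional form and the elasticity value are illustrative assumptions, not anything from Yglesias or the JCT), taxable income shrinks as the tax rate rises, so revenue peaks somewhere below 100 percent and falls to zero at a 100 percent rate:

```python
# Toy Laffer curve: revenue = rate * taxable_income(rate).
# The elasticity parameter below is purely illustrative.

def taxable_income(rate, base=100.0, elasticity=0.25):
    """Reported income falls as the net-of-tax share (1 - rate) falls."""
    return base * (1.0 - rate) ** elasticity

def revenue(rate):
    return rate * taxable_income(rate)

rates = [i / 1000.0 for i in range(1001)]  # 0% through 100%
peak_rate = max(rates, key=revenue)

print(f"revenue-maximizing rate: {peak_rate:.1%}")
print(f"revenue at a 100% rate:  {revenue(1.0):.2f}")  # zero: no taxable income left
```

The point of the exercise is not the particular peak (that depends entirely on the assumed elasticity) but that any model in which taxable income responds to rates at all yields an interior revenue maximum, contrary to the static assumption that revenue rises all the way to a 100 percent rate.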

3. The third question is “So do tax cuts boost economic growth?” and Yglesias responds by stating “the credible research on the matter is very very mixed.” But he follows that response by citing research which concluded that “a tax cut financed by reductions in wasteful spending or social assistance for the elderly would boost growth.”

But that leaves open the question of whether the economy does better because of the lower tax burden, the lower spending burden, or some combination of the two effects.  I’ll take any of those three answers.

So is he “sort of right” again? Not so fast. Yglesias also cites the Congressional Research Service (which rubs me the wrong way) and a couple of academic economists who concluded that there is “no systematic correlation between the level of taxation and the level of economic growth.”

The bottom line is that there’s no consensus on the economic impact of taxation (in part because it is difficult to disentangle the impact of taxes from the impact on spending, and that’s not even including all the other policies that determine economic performance). But I still think Yglesias is being a bit misleading because there is far more consensus on the economic impact of marginal tax rates and debates about the Laffer Curve and dynamic scoring very often revolve around those types of tax policies.

4. The fourth question is “How does tax scoring work now?” and Yglesias responds to himself by noting that the various score-keeping bureaucracies measure “demand-side effects” and “behavioral effects.”

He’s right, but CBO uses so-called demand-side effects to justify Keynesian spending, so that’s not exactly reassuring news for people who focus more on real-world evidence.

And he’s also right that JCT measures changes in behavior (such as smokers buying fewer cigarettes if the tax goes up), and this type of analysis (sometimes called microeconomic dynamic scoring) certainly is a good thing.

But the real controversy is about macroeconomic dynamic scoring, which we’ll address below.

5. The fifth question is “Can we take a break from all this macroeconomic modeling?” and is simply an excuse for Yglesias to make a joke, though I can’t tell whether he is accusing Reagan supporters of being racists or mocking some leftists for accusing Reagan supporters of being racist.

So I’m not sure how to react, other than to recommend the fourth video at this link if you want some real Reagan humor.

6. The sixth question is “What do current scoring methods leave out?” and Yglesias accurately notes that what “dynamic-scoring proponents want is a model of macroeconomic consequences. They think that a country with lower tax rates will see more investment in physical and human capital, leading to more productivity, and more economic growth.”

He even cites my blog post from last month and correctly describes me as believing that it is “self-evidently ridiculous that the current CBO model says higher tax rates would lead to faster economic growth via lower deficits.”

I also think he is fair in pointing out that “people sharply disagree about how much tax rates actually influence economic growth” and that “the whole terrain is enormously contested.”

But this is why I think my view is the reasonable middle ground. At one extreme you find (at least in theory) some over-enthusiastic Republican types who argue that all tax cuts are self-financing. At the other extreme you find the JCT saying tax policy has no impact on the economy and actually arguing that you maximize tax revenue with 100 percent tax rates. I suspect that Yglesias, if pressed, will agree the JCT approach is nonsensical.

So why not have the JCT—in a fully transparent manner—begin to incorporate macroeconomic analysis?

7. The seventh question is “Has dynamic scoring ever been tried?” and Yglesias self-responds by pointing out that a Treasury Department dynamic analysis of the 2001 and 2003 tax cuts came to the conclusion that “the resulting budget impact would be 7 percent smaller than what was suggested by conventional scoring methods” and “ended with the conclusion that the Bush tax cuts substantially decreased revenue.”

In other words, dynamic analysis was not used to imply that tax cuts are self-financing. Indeed, in the example of what would happen if the Bush tax cuts were made permanent, the dynamic score turned out to be very modest.

So why, then, are folks on the left so determined to block reforms that, in practice, don’t yield dramatic changes in numbers? My own guess, for what it’s worth, is that they don’t want any admission or acknowledgement that lower tax rates are better for growth than higher tax rates.

8. The eighth question is “Why are we talking about dynamic scoring now?” and Yglesias answers his own question by accurately stating that “the Republican takeover of Congress starting in 2015 gives the GOP an opportunity to either change the scoring rules, change the personnel in charge of the scoring, or both.”

He’s not just sort of right. He’s completely right. I have no disagreements.

9. The ninth question is “Why does the score matter?” and his self-response is “the scores matter because perceptions matter in politics.” In other words, politicians don’t want to be accused of enacting legislation that is predicted to increase red ink.

Yglesias is also right when he writes that this “effect shouldn’t be exaggerated. In the past, Republicans haven’t hesitated to vote for tax measures that the CBO says will increase the deficit. That’s because they have a strong preference for low tax rates.”

At the risk of being boring, I also think he’s right about the degree to which scores matter.

The bottom line is that questions #1, #2, #3, and #6 are the ones that matter. Yglesias makes plenty of reasonable points, but I think his argument ultimately falls flat because he spends too much time attacking the all-tax-cuts-pay-for-themselves straw man and not enough time addressing whether it is reasonable for the JCT to use a methodology that assumes taxes have no effect on the overall economy.

But I expect to hear similar arguments, expressed in a more strident fashion, if Republicans take prudent steps—starting with personnel changes—to modernize the JCT and CBO apparatus.

P.S. While tax cuts usually do lead to revenue losses, there is at least one very prominent case of lower tax rates leading to more revenue.

P.P.S. If the JCT approach is reasonable, why do the overwhelming majority of CPAs disagree? Is it possible that they have more real-world understanding of how taxpayers (particularly upper-income taxpayers) respond when tax rates change?

P.P.P.S. If the JCT approach is reasonable, why do international bureaucracies so often produce analysis showing a Laffer Curve?

There’s also some nice evidence from Denmark, Canada, France, and the United Kingdom.

Patrick J. Michaels

The 20th annual “Conference of the Parties” to the UN’s 1992 climate treaty (“COP-20”) is in its second week in Lima, Peru and the news is the same as from pretty much every other one.

You don’t need a calendar to know when these are coming up, as the media are flooded with global warming horror stories every November. This year’s version is that West Antarctic glaciers are shedding a “Mount Everest” of ice every year. That really does raise sea level—about 2/100 of an inch per year. As we noted here, that reality probably wouldn’t have made a headline anywhere.
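The “2/100 of an inch” figure can be checked with back-of-envelope arithmetic: spread the annual ice loss over the world’s oceans. The tonnage below is an assumed value chosen to illustrate the method, not a measured figure from the study:

```python
# Back-of-envelope sea-level check. ICE_LOSS_GT_PER_YEAR is an
# assumption for illustration, not a value taken from the study.

ICE_LOSS_GT_PER_YEAR = 180.0   # assumed gigatonnes of ice loss per year
OCEAN_AREA_M2 = 3.61e14        # ~361 million km^2 of ocean surface
WATER_DENSITY = 1000.0         # kg per m^3 of meltwater

melt_volume_m3 = ICE_LOSS_GT_PER_YEAR * 1e12 / WATER_DENSITY
rise_m = melt_volume_m3 / OCEAN_AREA_M2
rise_inches = rise_m / 0.0254

print(f"sea-level rise: {rise_inches:.3f} inches/year")
```

With an assumed loss on that order, the rise works out to roughly two hundredths of an inch per year, which is the scale the article describes.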

The meetings are also preceded by some great climate policy “breakthrough.” This year’s was the president’s announcement that China, for the first time, was committed to capping its emissions by 2030. They did no such thing; they said they “intend” to level their emissions off “around” 2030. People “intend” to do a lot of things that don’t happen.

During the first week of these two-week meetings, developing nations coalesce around the notion that the developed world (read: United States) must pay them $100 billion per year in perpetuity in order for them to even think about capping their emissions. It’s happened in at least the last five COPs.

In the second week, the UN announces, dolefully, that the conference is deadlocked, usually because the developing world has chosen not to commit economic suicide. Just yesterday, India announced that it simply wasn’t going to reduce its emissions at the expense of development.

Then an American savior descends. In Bali, in 2007, it was Al Gore. In 2009, Barack Obama arrived and barged into one of the developing nation caucuses, only to be asked politely to leave. This week it will be Secretary of State John Kerry, who earned his pre-meeting bones by announcing that climate change is the greatest threat in the world.

I guess nuclear war isn’t so bad after all.

As the deadlock will continue, the UN will announce that the meeting is going to go overtime, beyond its scheduled Friday end. Sometime on the weekend—and usually just in time to get to the Sunday morning newsy shows—Secretary Kerry will announce a breakthrough, the meeting will adjourn, and everyone will go home to begin the cycle anew until next December’s COP-21 in Paris, where a historic agreement will be inked.

Actually, there was something a little different in Lima this year: Given all the travel and its relative distance from Eurasia, COP-20 set the all-time record for carbon dioxide emissions associated with these annual gabfests.

Doug Bandow

WALLAY, BURMA—When foreign dignitaries visit Myanmar, still known as Burma in much of the West, they don’t walk the rural hills over which the central government and ethnic groups such as the Karen fought for decades. Like isolated Wallay village.

Wallay gets none of the attention of bustling Rangoon or the empty capital of Naypyitaw. Yet the fact that I could visit without risking being shot may be the most important evidence of change in Burma. For three years the Burmese army and Karen National Liberation Army have observed a ceasefire. For the first time in decades Karen children are growing up with the hope of a peaceful future.

The global face of what Burma could become remains Aung San Suu Kyi, the heroic Nobel Laureate who won the last truly free election in 1990—which was promptly voided by the military junta. The fact that she is free after years of house arrest demonstrates the country’s progress. The fact that she is barred from running for president next year, a race she almost certainly would win, illustrates the challenges remaining for Burma’s transformation.

The British colony gained its independence after World War II. The country’s short-lived democracy was terminated by General Ne Win in 1962. The paranoid junta relentlessly waged war on the Burmese people.

Then the military made a dramatic U-turn, four years ago publicly stepping back from power. Political prisoners were released, media restrictions were relaxed, and Suu Kyi’s party, the National League for Democracy, was allowed to register.

The U.S. and Europe lifted economic sanctions and exchanged official visits. Unfortunately, however, in recent months the reform process appears to have gone into neutral, if not reverse.

While most of the military battles in the east are over, occasional clashes still occur. None of the 14 ceasefires so far reached has been converted into a permanent peace. While investment is sprouting in some rebel-held areas, most communities, like Wallay, are waiting for certain peace and sustained progress.

Of equal concern, Rakhine State has been torn by sectarian violence, exacerbated by the security forces. At least 200 Muslim Rohingyas have been killed and perhaps 140,000 people, mostly Rohingyas, displaced.

Political reform also remains incomplete. Particularly serious has been the reversal of media freedom and imprisonment of journalists. Khin Ohmar, with Burma Partnership, a civil society network, cited “surveillance, scrutiny, threats and intimidation.”

The 2008 constitution bars Suu Kyi from contesting the presidency. Arbitrarily barring the nation’s most popular political figure from the government’s top position would make any outcome look illegitimate.

Even economic liberalization has stalled. Much of the economy remains in state- or military-controlled hands.

In short, the hopes that recently soared high for Burma have crashed down to reality.

But U.S. influence is limited. Washington could reimpose economic sanctions. However, returning to the policy of the past would be a dead end.

Nor can the U.S. win further reform with more aid. Washington’s lengthy experience attempting to “buy” political change is exceedingly poor. Anyway, participation in the Western economies is worth more than any likely official assistance package.

The administration also hopes to use military engagement as leverage for democracy. Unfortunately, contact with America is not enough to win foreign military men to democracy.

As I wrote in Forbes online:  “The best strategy would be to work with Europe and Japan to develop a list of priority political reforms and tie them to further allied support and cooperation. These powers also should point out that a substantially larger economy would yield plenty of wealth for regime elites and the rest of the population, whose aspirations are rising.”

Finally, friends of liberty worldwide should offer aid and support to Burmese activists.

During his recent visit President Obama said:  “We recognize change is hard and you do not always move in a straight line, but I’m optimistic.” This still impoverished nation has come far yet has equally far to go. America must continue to engage the regime in Naypyitaw with prudence and patience.

Ted Galen Carpenter

As if the United States didn’t already have enough foreign policy worries, a dangerous issue that has been mercifully quiescent over the past five years shows signs of reviving.  Taiwan’s governing Kuomintang Party (KMT) and its conciliatory policy toward Beijing suffered a brutal defeat in elections for local offices on November 29.  Indeed, the extent of the KMT’s rout made the losses the Democratic Party experienced in U.S. midterm congressional elections look like a mild rebuke.  The setback was so severe that President Ma Ying-jeou promptly resigned as party chairman.  Although that decision does not change Ma’s role as head of the government, it does reflect his rapidly declining political influence.

As I discuss in an article over at The National Interest Online, growing domestic political turbulence in Taiwan is not just a matter of academic interest to the United States.  Under the 1979 Taiwan Relations Act, Washington is obligated to assist Taipei’s efforts to maintain an effective defense.  Another provision of the TRA obliges U.S. leaders to regard any coercive moves Beijing might take against the island as a serious threat to the peace of East Asia.  

During the presidencies of Lee Teng-hui and Chen Shui-bian from the mid 1990s to 2008, Beijing reacted badly to efforts by those leaders to convert Taiwan’s low-key, de facto independence into something more formal and far reaching.  As a result, periodic crises erupted between Beijing and Washington.  U.S. officials seemed relieved when voters elected the milder, more conciliatory Ma as Chen’s successor.  That political change also seemed to reflect concern on the part of a majority of Taiwanese that Chen and his explicitly pro-independence Democratic Progressive Party (DPP) had pushed matters to a dangerous level in testing Beijing’s forbearance.

But just as Chen may have overreached and forfeited domestic support by too aggressively promoting a pro-independence agenda, his successor appears to have drifted too far in the other direction.  Domestic sentiment for taking a stronger stance toward the mainland on a range of issues has been building for at least the past two years.  Public discontent exploded in March 2014 in response to a new trade deal between Taipei and Beijing, which opponents argued would give China far too much influence over Taiwan’s economy.  Those disorders culminated with an occupation of Taiwan’s legislature, accompanied by massive street demonstrations that persisted for weeks.  The November election results confirmed the extent of the public’s discontent.

Perhaps reflecting the shift in public sentiment toward Beijing, even Ma’s government began to adopt a more assertive stance on security issues, despite pursuing enhanced economic ties.  Taipei’s decision in the fall of 2014 to spend $2.5 billion on upgraded anti-missile systems reflected a renewed seriousness about protecting Taiwan’s security and deterring Beijing from contemplating aggression.

China’s reaction to the November election results was quick and emphatic.  Chinese media outlets cautioned the victorious DPP against interpreting the election outcome as a mandate for more hard-line positions on cross-strait issues.  Even more ominous, Retired General Liu Jingsong, the former president of the influential Chinese Academy of Military Sciences, warned that the Taiwan issue “will not remain unresolved for a long time.”  Moreover, Chinese officials “will not abandon the possibility of using force” to determine the island’s political status.  Indeed, he emphasized that it remained an option “to resolve the issue by military means, if necessary.” That is a noticeably different tone from Deng Xiaoping’s statement in the late 1970s that there was no urgency to deal with the Taiwan issue—that it could even go on for a century without posing a serious problem.

A key question now is whether Beijing will tolerate even a mildly less cooperative Taiwan.  Chinese leaders have based their hopes on the belief that greater cross-strait economic relations would erode Taiwanese enthusiasm for any form of independence.  That does not appear to have happened.  Opinion polls indicate meager support for reunification with the mainland—even if it included guarantees of a high degree of political autonomy.

But the adoption of a confrontational stance on Beijing’s part regarding Taiwan would quickly reignite that issue as a source of animosity in U.S.-China relations.  The Obama years have already seen a worrisome rise in bilateral tensions.  The announced U.S. “pivot” or “rebalancing” of U.S. forces to East Asia has intensified Beijing’s suspicions about Washington’s motives.  Sharp differences regarding territorial issues in the South China and East China seas have also been a persistent source of friction.  The slumbering Taiwan issue is now poised to join that list of worrisome flashpoints.

Randal O'Toole

Maryland’s Governor-Elect Larry Hogan has promised to cancel the Purple Line, another low-capacity rail boondoggle in suburban Washington DC that would cost taxpayers at least $2.4 billion to build and much more to operate and maintain. The initial projections for the line were that it would carry so few passengers that the Federal Transit Administration wouldn’t even fund it under the rules then in place. Obama has since changed those rules, but, taking no chances, Maryland’s current governor, Martin O’Malley, hired Parsons Brinckerhoff with the explicit goal of boosting ridership estimates to make it a fundable project.

I first looked at the Purple Line in April 2013, when the draft EIS (written by a team led by Parsons Brinckerhoff) was out projecting the line would carry more than 36,000 trips each weekday in 2030. This is far more than the 23,000 trips per weekday carried by the average light-rail line in the country in 2012. Despite this optimistic projection, the DEIS revealed that the rail project would both increase congestion and use more energy than all the cars it took off the road (though to find the congestion result you had to read the accompanying traffic analysis technical report, pp. 4-1 and 4-2).

A few months after I made these points in a blog post and various public presentations, Maryland published Parsons Brinckerhoff’s final EIS, which made an even more optimistic ridership projection: 46,000 riders per day in 2030, 28 percent more than in the draft. If measured by trips per station or mile of rail line, only the light-rail systems in Boston and Los Angeles carry more riders than the FEIS projected for the Purple Line.

Considering the huge demographic differences between Boston, Los Angeles, and Montgomery County, Maryland, it isn’t credible to think that the Purple Line’s performance will approach Boston and L.A. rail lines. First, urban Suffolk County (Boston) has 12,600 people per square mile and urban Los Angeles County has 6,900 people per square mile, both far more than urban Montgomery County’s 3,500 people per square mile.

However, it is not population densities but job densities that really make transit successful. Boston’s downtown, the destination of most of its light-rail (Green Line) trips, has 243,000 jobs. Los Angeles’s downtown, which is at the end of all but one of its light-rail lines, has 137,000 downtown jobs. LA’s Green Line doesn’t go downtown, but it serves LA Airport, which, together with its surroundings, accounts for 135,000 jobs.

Montgomery County, where the Purple Line will go, really has no major job centers. The closest is the University of Maryland, which has about 46,000 jobs and students, a small fraction of the LA and Boston job centers. Though the university is on the proposed Purple Line, the campus covers 1,250 acres, which means many students and employees will not work or have classes within easy walking distance of the rail stations. Thus, the ridership projections for the Purple Line are not credible.

In terms of distribution of jobs and people, Montgomery County is more like San Jose than Boston or Los Angeles. San Jose has three light-rail lines, all of which together carry fewer than 35,000 riders per day, less than was projected by the DEIS for the Purple Line.

Given the FEIS’s higher ridership numbers, it’s not surprising that it reported that the line will save energy and reduce congestion, the opposite of the DEIS findings. However, a close look reveals that, even at the higher ridership numbers, these conclusions are suspect.

The traffic analysis for the DEIS estimated the average speeds of auto traffic in 2030 with and without the Purple Line. Without the line, speeds would average 24.5 mph; with the line, they would average 24.4 mph. Multiplied across the large number of travelers in the area, this meant the line would waste 13 million hours of people’s time per year.
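The time-cost arithmetic behind that 13-million-hour figure is easy to sketch. Note that the regional vehicle-miles figure below is purely a hypothetical assumption (roughly the input needed to reproduce the DEIS estimate), not a number published in the DEIS:

```python
def annual_hours_lost(vmt_per_year: float, speed_without: float, speed_with: float) -> float:
    """Extra travel time (hours/year) when average traffic speed falls.

    Total hours driven = vehicle-miles / average speed, so the time lost
    is the difference between the with-line and without-line totals.
    """
    return vmt_per_year / speed_with - vmt_per_year / speed_without

# Hypothetical: assume ~78 billion vehicle-miles per year in the affected region.
hours = annual_hours_lost(78e9, 24.5, 24.4)
print(f"{hours / 1e6:.1f} million hours lost per year")
```

Even a 0.1 mph drop in average speed, spread over tens of billions of vehicle-miles, adds up to millions of hours per year.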

The traffic analysis for the FEIS made no attempt to estimate average speeds. Instead, it looked at the level of service (LOS)–a letter grade from A to F–at various intersections affected by the rail line. Without the line, by 2040, 15 intersections in the morning and 16 in the afternoon would degrade to LOS F. With the line, only 8 in the morning and 15 in the afternoon would be at LOS F (p. 30). This makes it appear that the rail line reduces congestion.

A careful reading reveals this isn’t true. For the no-build alternative, planners assumed that absolutely nothing would be done to relieve congestion. For the rail alternative, planners assumed that various mitigation measures would be applied “to allow the intersections to operate in the most efficient conditions.” It seems likely that these mitigation measures, not the rail line, are the reasons why the preferred alternative has fewer intersections at LOS F.

Meanwhile, the energy analysis contains two serious flaws. First, it assumes that cars in 2040 will use the same energy per mile as cars in 2010. In fact, given the latest fuel-economy standards, the average car on the road in 2040 will use less than half the energy of the average car in 2010.

Even more serious, the final EIS assumed that each kilowatt hour of electricity needed to power the rail line required 3,412 BTUs of energy (calculated by dividing BTUs by kWh in table 4-41 on page 4-142). While one kWh is equal to 3,412 BTUs, due to energy losses in generation and transmission, it takes 10,339 BTUs of energy to generate and transmit that kWh to the railhead (see page A-18 of the Department of Energy’s Transportation Energy Data Book). This is such a rookie mistake that Parsons Brinckerhoff’s experts would have had to work hard looking the other way for it to slip through. In any case, after correcting both these errors, the rail line ends up using more energy than the cars it takes off the road, just as the DEIS found.
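The size of that accounting error is simple to check; this sketch uses only the two conversion figures quoted above:

```python
# Site energy: the heat content of 1 kWh at the point of use.
SITE_BTU_PER_KWH = 3_412

# Primary energy: BTUs consumed to generate and transmit that kWh,
# including generation and transmission losses (per the text above).
PRIMARY_BTU_PER_KWH = 10_339

# Counting only site energy understates the rail line's energy use
# by roughly a factor of three.
correction = PRIMARY_BTU_PER_KWH / SITE_BTU_PER_KWH
print(f"Rail energy use understated by a factor of {correction:.2f}")
```

In other words, the FEIS charged the rail line for less than a third of the energy actually consumed on its behalf.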

In short, Maryland’s ridership projections for the Purple Line are extremely optimistic, but even if they turned out to be correct, the Purple Line would still increase both traffic congestion and energy consumption. There is no valid reason for funding this turkey, and Governor-elect Hogan should chop off its head.

Patrick J. Michaels and Paul C. "Chip" Knappenberger

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

——–

A favorite global warming chestnut is that human-caused climate change will make the planet uninhabitable for Homo sapiens (that’s us). The latest iteration of this cli-fi classic appears in this week’s New York Times’ coverage of the U.N. climate talks taking place in Lima, Peru (talks that are destined to fail, as we point out here).

Back in September, the World Health Organization (WHO) released a study claiming that global warming as a result of our pernicious economic activity will lead to a quarter million extra deaths each year between 2030 and 2050.  Yup, starting a mere 15 years from today. Holy cats!

That raised the antennae of Indur M. Goklany, a science and technical policy analyst who studies humanity’s well-being and the impact of environmental change upon it. Goklany detailed many of his findings in a 2007 book he wrote for Cato, The Improving State of the World: Why We’re Living Longer, Healthier, More Comfortable Lives on a Cleaner Planet.

As you may imagine, Goklany found much at fault with the WHO study and wrote up his findings for the Global Warming Policy Foundation (GWPF)—a U.K. think tank that produces a lot of good material on global warming.

In “Unhealthy Exaggeration: The WHO report on climate change” Goklany doesn’t pull any punches. You ought to have a look at the full report, but in the meantime, here is the Summary:

In the run-up to the UN climate summit in September 2014, the World Health Organization (WHO) released, with much fanfare, a study that purported to show that global warming will exacerbate undernutrition (hunger), malaria, dengue, excessive heat and coastal flooding and thereby cause 250,000 additional deaths annually between 2030 and 2050. This study, however, is fundamentally flawed.

Firstly, it uses climate model results that have been shown to run at least three times hotter than empirical reality (0.15°C vs 0.04°C per decade, respectively), despite using 27% lower greenhouse gas forcing.

Secondly, it ignores the fact that people and societies are not potted plants; that they will actually take steps to reduce, if not nullify, real or perceived threats to their life, limb and well-being. Thus, if the seas rise around them, heatwaves become more prevalent, or malaria, diarrhoeal disease and hunger spread, they will undertake adaptation measures to protect themselves and reduce, if not eliminate, the adverse consequences. This is not a novel concept. Societies have been doing just this for as long as such threats have been around, and over time and as technology has advanced they have gotten better at it. Moreover, as people have become wealthier, these technologies have become more affordable. Consequently, global mortality rates from malaria and extreme weather events, for instance, have been reduced at least five-fold in the past 60 years.

Yet, the WHO study assumes, explicitly or implicitly, that in the future the most vulnerable populations – low income countries in Africa, Europe, southeast Asia and the western Pacific – will not similarly avail themselves of technology or take any commonsense steps to protect themselves. This is despite many suitable measures already existing – adapting to sea level rise for example – while others are already at the prototype stage and are being further researched and developed: early-warning systems for heatwaves or the spread of malaria or steps to improve sanitation, hygiene or the safety of drinking water.

Finally, the WHO report assumes, erroneously, if the IPCC’s Fifth Assessment Report is to be believed, that carbon dioxide levels above 369 ppm – today we are at 400ppm and may hit 650ppm if the scenario used by the WHO is valid – will have no effect on crop yields. Therefore, even if one assumes that the relationships between climatic variables and mortality used by the WHO study are valid, the methodologies and assumptions used by WHO inevitably exaggerate future mortality increases attributable to global warming, perhaps several-fold.

In keeping with the topic of bad predictions, check out the “Friday Funny” at the Watts Up With That blog where guest blogger Tom Scott has compiled a list of failed eco-climate claims dating back nearly a century. He’s collected some real doozies. Here are a few of the best:

“By the year 2000 the United Kingdom will be simply a small group of impoverished islands, inhabited by some 70 million hungry people … If I were a gambler, I would take even money that England will not exist in the year 2000.” -Paul Ehrlich, Speech at British Institute For Biology, September 1971

Some predictions for the next decade (1990’s) are not difficult to make… Americans may see the ’80s migration to the Sun Belt reverse as a global warming trend rekindles interest in cooler climates. -Dallas Morning News December 5th 1989

Giant sand dunes may turn Plains to desert – Huge sand dunes extending east from Colorado’s Front Range may be on the verge of breaking through the thin topsoil, transforming America’s rolling High Plains into a desert, new research suggests. The giant sand dunes discovered in NASA satellite photos are expected to re- emerge over the next 20 to 50 years, depending on how fast average temperatures rise from the suspected “greenhouse effect,” scientists believe. -Denver Post April 18, 1990

There are many more where these came from. To lighten your day, you ought to have a look!

David Boaz

The royals are coming, the royals are coming! In this case, the grandson of the Queen of England, along with his wife, who took a fairytale leap from commoner to duchess by marrying him. (Just imagine, Kate Middleton a duchess while Margaret Thatcher was only made a baroness.) And once again Americans who have forgotten the American Revolution are telling us to bow and curtsy before them, and address them as “Your Royal Highness,” and stand when William enters the room.

So one more time: Americans don’t bow or curtsy to foreign monarchs. (If you don’t believe me, ask Miss Manners, repeatedly.)

This is a republic. We do not recognize distinctions among individuals based on class or birth. We are not subjects of the queen of England, the emperor of Japan, the king of Swaziland, or the king of Saudi Arabia. Therefore we don’t bow or curtsy to foreign heads of state.

Prince William’s claim to such deference is that he is a 24th-generation descendant of William the Conqueror, who invaded England and subjugated its inhabitants. In Common Sense, one of the founding documents of the American Revolution, Thomas Paine commented on that claim:

Could we take off the dark covering of antiquity, and trace them to their first rise, that we should find the first [king] nothing better than the principal ruffian of some restless gang, whose savage manners or pre-eminence in subtility obtained him the title of chief among plunderers; and who by increasing in power, and extending his depredations, over-awed the quiet and defenceless to purchase their safety by frequent contributions….

England, since the conquest, hath known some few good monarchs, but groaned beneath a much larger number of bad ones; yet no man in his senses can say that their claim under William the Conqueror is a very honorable one. A French bastard landing with an armed banditti, and establishing himself king of England against the consent of the natives, is in plain terms a very paltry rascally original.—It certainly hath no divinity in it.

Citizens of the American republic don’t bow to monarchs, or their grandsons.

 

Paul C. "Chip" Knappenberger

The mainstream media has lit up the past few days with headlines of “alarming” news coming out of Antarctica highlighting new research on a more rapid than expected loss of ice from glaciers there.

But, as typical with blame-it-on-humans climate change stories, the coverage lacks detail, depth, and implication as well as being curiously timed.

We explain.

The research, by a team led by University of California-Irvine doctoral candidate Tyler Sutterley, first appeared online at the journal Geophysical Research Letters on November 15th, about two weeks before Thanksgiving. So why is it making headlines now? Probably because the National Aeronautics and Space Administration issued a press release on the new paper on December 2nd. Why wait so long? Because on December 1st, the United Nations kicked off its annual climate confab, and the Obama administration is keen on orchestrating its release of scary-sounding climate stories to generate support for its executively commanded (i.e., avoiding Congress) carbon dioxide reduction initiatives that will be on display there. This also explains the recent National Oceanic and Atmospheric Administration speculation that 2014 is going to be the “warmest year on record”—another headline grabber—two months before all the data have been collected and analyzed.

This is all predictable—and will essentially be unsuccessful.

Missing from the hype are the broader facts.

The new Sutterley research finds that glaciers in the Amundsen Sea Embayment region along the coast of West Antarctica are speeding up and losing ice. This is potentially important because the ice loss contributes to global sea level rise. The press coverage aims to make this sound alarming—“This West Antarctic region sheds a Mount Everest-sized amount of ice every two years, study says” screamed the Washington Post.

Wow! That sounds like a lot. Turns out, it isn’t.

The global oceans are vast. Adding a “Mount Everest-sized amount of ice every two years” to them results in a sea level rise of 0.02 inches per year. But “New Study Finds Antarctic Glaciers Currently Raise Sea Level by Two-Hundredths of an Inch Annually” doesn’t have the same ring to it.
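For readers who want to check claims like this themselves, the standard conversion is that roughly 361.8 gigatonnes of melted ice raises global mean sea level by one millimeter. Here is a minimal sketch; the ocean-area figure is approximate, and the function takes whatever mass-loss rate a given study reports:

```python
# Approximate global ocean surface area in square meters.
OCEAN_AREA_M2 = 3.618e14

def gigatonnes_to_mm(gt: float) -> float:
    """Global mean sea-level rise (mm) from melting `gt` gigatonnes of ice."""
    # 1 Gt = 1e12 kg; meltwater density ~1000 kg/m^3 gives volume in m^3.
    volume_m3 = gt * 1e12 / 1000.0
    # Spread that volume over the ocean surface; convert meters to mm.
    return volume_m3 / OCEAN_AREA_M2 * 1000.0

# Sanity check: ~361.8 Gt of ice should correspond to ~1 mm of sea-level rise.
print(f"{gigatonnes_to_mm(361.8):.3f} mm")
```

Plugging a study’s reported gigatonnes-per-year into this conversion quickly shows how small the resulting annual sea-level contribution is.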

Nor does the coverage draw much attention to the fact that the Amundsen Sea Embayment is but one of a great many watersheds across Antarctica that empty into the sea. A study published in Nature magazine back in 2012 by Matt King and colleagues provided a more comprehensive look at glacier behavior across Antarctica. They did report, in agreement with the Sutterley findings, that glacial loss in the Amundsen Sea Embayment was rapid, but they also reported that for other large areas of Antarctica, ice loss was minimal or even negative (i.e., ice was accumulating). Figure 1, taken from the King paper, presents the broader and more relevant perspective (note that the Amundsen Sea Embayment is made up by the areas labeled 21 and 22 in Figure 1).

 

Figure 1. Best estimate of rate of ice loss from watershed across Antarctica. The Amundsen Sea Embayment, the focus of the Sutterley study, is encompassed by areas labeled 21 and 22 (taken from King et al., 2012).

We discussed the King and colleagues study in more detail when it first came out. We concluded:

So King and colleagues’ latest refinement puts the Antarctic contribution to global sea level rise at a rate of about one-fifth of a millimeter per year (or in English units, 0.71 inches per century).

Without a significantly large acceleration—and recall the King et al. found none—this is something that we can all live with for a long time to come.

The strategically timed new findings being hyped this week do not change this conclusion.

References:

King, M., et al., 2012. Lower satellite-gravimetry estimates of Antarctic sea-level contribution. Nature, doi:10.1038/nature.

Sutterley, T.C., et al., 2014. Mass loss of the Amundsen Sea Embayment of West Antarctica from four independent techniques. Geophysical Research Letters, doi: 10.1002/2014GL061940.

Nicole Kaeding

Earlier this week, I noted that some Inspectors General provide insufficient oversight of federal government activities. They should be more aggressive in uncovering waste and abuse in federal agencies.  

Nonetheless, many Inspectors General issue helpful reports that alert Congress and the public to wrongdoing. Here is a sampling of recent reports showing the widespread mishandling of federal tax dollars:

  • Internal Revenue Service (IRS): Tax fraud by incarcerated individuals amounted to $1 billion in 2012, growing from $166 million in 2007. One inmate defrauded the government of $4 million over a 10-year period.
  • Department of Homeland Security (DHS): The Inspector General for DHS issued a new report highlighting 68 ways that the agency has wasted tax dollars. The list includes a $1.5 billion cost overrun for construction of the agency’s new headquarters, FEMA’s botched handling of relief for Hurricanes Katrina and Isaac, DHS employees claiming unearned overtime, and insufficient oversight of DHS’s procurement processes.
  • Housing and Urban Development (HUD): According to HUD’s Inspector General, New York City misspent $183 million it received from the federal government to rebuild hospitals following Hurricane Sandy. The city used the funds for employee pay and benefits, which were not allowable grant expenses.
  • Drug Enforcement Administration (DEA): Over the course of 20 years, the DEA allowed an individual running a Ponzi scheme to conduct onsite investment training for employees. The trainer, Kenneth McLeod, used the seminars to solicit clients for his bond investment fund, which promised risk-free returns of 8 to 10 percent. More than half of McLeod’s 130 investors came from the DEA. The Inspector General cited the DEA for numerous oversight lapses, including failing to verify McLeod’s credentials.

Even with the recent politicization of some Inspector General reports, the reports can be useful to illuminate waste, mismanagement, and fraud within the federal government.

Juan Carlos Hidalgo

This week a Venezuelan judge indicted opposition leader María Corina Machado on flimsy charges of conspiracy to kill President Nicolás Maduro. If found guilty, she could spend up to 16 years in prison. Can she expect a fair trial from the Venezuelan judiciary?

Not at all, according to the findings of an investigation led by three Venezuelan lawyers and published in a new book, El TSJ al Servicio de la Revolución (“The Supreme Court at the Service of the Revolution”). According to their research, since 2005 Venezuela’s justice system has issued 45,474 sentences, but not once has it ruled against the government.

Machado’s fate thus depends entirely on the whims of Maduro and his entourage. The precedent of Leopoldo López, another opposition leader who has been jailed since February on charges of arson and conspiracy, does not bode well for Machado. 

Tim Lynch

Inside grand juries: Growing criticism over who controls the evidence

Here is a link to, “A Grand Facade: How the Grand Jury Was Captured by Government.”

Excerpt:

The prosecutor calls the shots and dominates the entire grand jury process. The prosecutor decides what matters will be investigated, what subpoenas will issue, which witnesses will testify, which witnesses will receive “immunity,” and what charges will be included in each indictment.

Because defense counsel are barred from the grand jury room and because there is no judge overseeing the process, the grand jurors naturally defer to the prosecutor since he is the most knowledgeable official on the scene. That overbearing presence explains the old saw that a competent prosecutor can “get a grand jury to indict a ham sandwich” if he is really determined to do so.

And the reverse also holds true: If a prosecutor does not want an indictment, he can secure that outcome if he is really determined to do so.

Randal O'Toole

The Highway Trust Fund hasn’t worked, says a new report from the Eno Transportation Foundation, so Congress should consider getting rid of it and funding all transportation out of general funds. In other words, the transportation system is breaking down because it has become too politicized, so we should solve the problem by making transportation even more political.

Eno (which was founded by William Phelps Eno, who is known as the “father of traffic safety”) claims this report is the result of 18 months of work by its policy experts. Despite all that work, the report’s conclusions would only make matters worse.

“The user pay principle works in theory,” says the report, “but has not worked in practice, at least as applied to federal transportation funding in the United States to date.” Actually, it worked great as long as Congress respected that principle, which it did from roughly 1956 through 1982. It only started to break down when Congress began diverting funds from highways to other programs. Then it really broke down when Congress, in its infinite wisdom, decided to spend more from the Trust Fund than it was earning from user fees. (It made the decision to spend a fixed amount each year regardless of revenues in 1998, but spending only actually exceeded revenues starting around 2008.)

Some argue that such breakdowns in the user-fee principle are inevitable when politicians get involved. This suggests that the government should get out of the way and let user fees work again. But Eno ignores that idea, and simply dismisses user fees altogether.

Eno suggests Congress has three options:

  1. Adjust spending to revenues, either by raising gas taxes or reducing spending.
  2. Fund some things out of gas taxes and some things out of general funds (which is more-or-less the status quo).
  3. Get rid of the Highway Trust Fund and just fund all transportation out of general funds.

“Any of these ideas would represent a dramatic improvement over the existing system,” says Eno, which isn’t true since the second idea is, pretty much, the existing system. But “based on our analysis, solution 3 is at least worth exploring.”

In fact, all of the problems with our transportation system are the result of politicians departing from the user-fee principle.

  • Crumbling infrastructure is the predictable result of political decisionmaking, because politicians would rather fund new infrastructure than maintain what they have.
  • Wasteful spending on grandiose capital projects that produce few benefits is the predictable result of giving special-interest groups more say over budgets than transportation users.
  • Increased congestion is the predictable result of the fact that so many of those special interest groups benefit from not solving the congestion problem.

Eno never considers the possibility of getting the federal government out of the transportation business, most of which is not interstate and doesn’t need federal involvement. The only mentions of “devolution” in the report are in a case study of United Kingdom transportation, which involved only a partial devolution and is far from committed to the user-fee principle, as petrol taxes all go into general funds.

The report mentions substituting vehicle-mile fees for gas taxes only to dismiss the idea by saying that it would “require Congress to raise taxes.” Actually, it wouldn’t, because those fees would be charged and collected by the state and local agencies and private parties that own and operate the nation’s highways, roads, and streets. The only reason the federal government is involved at all is that it can cheaply charge taxes on gasoline at refineries and ports of entry, a benefit that disappears if we switch to mileage-based user fees.

Eno’s solution would take us out of the traffic jam and into total and complete gridlock. Politicians would merrily allocate funds to projects that enriched their pals and campaign contributors while doing nothing for mobility. Cities and states would eagerly propose the most wasteful projects they can find in order to get “their share” of the federal largess. Anyone daring enough to complain about congestion and deteriorating infrastructure would be told that it’s their own fault for using politically incorrect modes of transport. Those who really care about the nation’s transportation system need to look deeper than the authors of Eno’s report.

Randal O'Toole

The White House has applauded Portland, Ore., and 15 other local governments as “climate action champions” for promising to reduce greenhouse gas emissions. Perhaps the White House should have waited to see whether any of the communities managed to meet their goals before patting them on the back.

Portland’s “modest” goal is to reduce the city and Multnomah County emissions by 80 percent from 1990 levels by 2050. Planners claim that, as of 2010, the city and county had reduced emissions by 6 percent from 1990 levels. However, this claim is full of hot air, as all of the reductions are due to causes beyond planners’ control.

Almost two-thirds of the reduction was in the industrial sector, and virtually all of that was due to the closure in 2000 of an aluminum plant that once employed 520 people. The closure of that plant hasn’t led anyone to use less aluminum, so all it did was move emissions elsewhere.

Another 22 percent of the reduction was in residential emissions, and that was due solely to 2010’s “anomalously mild winter” and below-average summer temperatures, as 2009 emissions were greater than those in 1990. Only 7 percent of the reduction was in the transportation sector, for which Portland is famous. But all of that reduction was due to the recession, not the city’s climate plan, as transport-related emissions grew through 2005 and the city didn’t record a reduction until 2009. 

Portland doesn’t have many more large factories that it can put out of business to achieve its climate goals. Nor can the city count on a continued economic depression to keep people from driving or an anomalously mild climate to keep people from turning on their heat or air conditioning.

The lesson here is that cities and counties are the wrong level to try to reduce emissions of something like greenhouse gases. This is a lesson we should have learned already based on our experience with toxic pollutants such as carbon monoxide and nitrogen oxides.

In 1970, Congress required urban areas with dirty air to write transportation plans that aimed to reduce air pollution. Since then, total tons of transport-related air pollution (carbon monoxide, nitrogen oxides, sulfur dioxide, volatile organic compounds, lead, and particulates) have declined by 83 percent—and all of that decline came from making cleaner cars. If anything, too many regional transportation plans have made air dirtier by focusing on trying to get people out of their cars and using increased congestion as a tool for doing so. Cars pollute more in traffic congestion, but planners didn’t build that into their models, so they could claim that their plans would work when they actually didn’t.

Despite this failed record of trying to reduce air emissions at the city and county level, the White House is very grateful to Portland and other local governments for writing greenhouse gas reduction plans that make promises they won’t be able to keep. They will, however, be able to use those plans to increase transportation, housing, and other consumer costs. The cities consider that a small price to pay to be declared a climate action champion.

Nicole Kaeding

The federal government has a long history of “green energy” failures. Many states have also foolishly subsidized green energy, including Mississippi.

KiOR biofuels launched several years ago with much fanfare. The company was supposed to turn wood chips into liquid hydrocarbons for use as fuel and promised to revolutionize the energy industry. Its chief investor, Vinod Khosla, described KiOR’s refinery as “an amazing facility.”

The company benefited from a federal biofuel requirement that mandated refiners use 16 billion gallons of biofuels annually by 2022. It then sought out state subsidies. The company decided to locate in Mississippi after the state offered a $75 million, no-interest loan. In exchange, the company promised to create 1,000 jobs by December 2015.

Yet the company had financial problems that were apparent from the start. Operating costs  ran $5 to $10 a gallon. The Washington Post reports that court papers estimated KiOR’s revenue at just $2.25 million but losses of $629.3 million.  

Production issues also plagued the facility. The system that fed wood chips into the plant frequently malfunctioned. The process converted less than 40 percent of its inputs into gasoline or diesel, leading to higher costs.

The problems were too much for the company to overcome. It filed for bankruptcy in November and still owes Mississippi $69.5 million.

This loan is just one of the many types of energy subsidies that Mississippi provides to green energy companies. The state exempts some green energy manufacturers from taxes. It has provided grants and loans to multiple companies.

Former Mississippi governor Haley Barbour was in KiOR’s “cheering section” and drove Mississippi’s foray into green energy subsidies. Khosla leveraged Barbour’s central-planning approach to energy policy to benefit KiOR and his other ventures. Khosla invested in at least three other companies that received subsidies from Mississippi, including Soladigm (now View Inc.) and Stion. Khosla is no stranger to energy subsidies. Several of his other investments, including Range Fuels and Coskata, were also failures that benefited from loans from the federal Department of Energy and Department of Agriculture.

KiOR is not the first of Mississippi’s green investments to fail. Twin Creeks Technologies, a solar firm, received a $26 million loan from the state before it filed for bankruptcy.

Energy subsidies waste millions in state and federal dollars annually, but sadly policymakers have not yet learned their lesson.

Chris Edwards

There are many types of federal government waste. Perhaps the most glaring is spending on projects that simply do not work. The money is spent, but taxpayers receive no benefit.

From the Washington Post:

Social Security officials have acknowledged that the agency spent nearly $300 million on a computer project that doesn’t work. The agency, however, is trying to revive it. The program is supposed to help workers process and manage claims for disability benefits.

Six years ago, the agency embarked on an aggressive plan to replace outdated computer systems overwhelmed by a growing flood of disability claims. But the project has been racked by delays and mismanagement, according to an internal report the agency commissioned.

As a wild guess, let’s say that skilled computer techs cost $150,000 a year in wages and benefits. Apparently then, about 333 of them have been paid for six years, yet have made little or no progress on this mishandled Social Security project.

Here’s a much larger taxpayer black hole, also reported in the Washington Post this week:

One of the first casualties was the Crusader artillery program, which was canceled after the Pentagon spent more than $2 billion on it. Then there was the Comanche helicopter debacle, which got the ax after $8 billion. More than twice that amount had been sunk into the Army’s Future Combat System, but that program got killed, too.

In all, between 2001 and 2011 the Defense Department spent $46 billion on at least a dozen programs—including a new version of the president’s helicopter—that never became operational, according to an analysis by the Center for Strategic and Budgetary Assessments.

Any organization will go down some wrong paths when it comes to advanced technologies, but $46 billion is a remarkable amount to have sunk into dead-end projects. Let’s say that engineers, machinists, managers, and other workers at defense firms earn an average of $200,000 a year. The $46 billion lost would be like having a small city of 23,000 such high-skill people beavering away for a decade on projects that all end up in the trash bin. I’m not an expert on procurement, but I do know that is a lot of human talent for the government to waste.
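The back-of-the-envelope staffing arithmetic in the two examples above can be sketched as follows; the per-worker cost figures are the guesses stated in the text, not data:

```python
def equivalent_workers(total_cost: float, cost_per_worker_year: float, years: float) -> float:
    """Number of full-time workers a sunk cost could have funded.

    Spreads `total_cost` over an assumed all-in annual labor cost and
    the number of years the money was spent.
    """
    return total_cost / (cost_per_worker_year * years)

# Social Security project: ~$300 million at ~$150,000 per worker-year over 6 years.
print(round(equivalent_workers(300e6, 150e3, 6)))    # ~333 workers

# Defense programs: ~$46 billion at ~$200,000 per worker-year over 10 years.
print(round(equivalent_workers(46e9, 200e3, 10)))    # ~23,000 workers
```

The same one-line division reproduces both figures in the text: roughly 333 workers on the Social Security project and a small city of 23,000 on the failed defense programs.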

Emma Ashford

Today at the Kremlin, Russian President Vladimir Putin gave his annual address to the Federal Assembly. The speech made the news for its antagonistic tone and, in particular, for Putin’s comparison of Crimea with Jerusalem. But for all the hype surrounding the speech, it said little new, emphasizing instead the impasse that Russia and the West find themselves locked in. Putin’s message was clear: Russia’s foreign policy is not changing.

The foreign policy narratives pervading the speech were strongly familiar, reiterating the points made by Russian leaders and state-owned television throughout the last year. Yet the twisted worldview presented bears little resemblance to reality.

Putin argued that Russia is being persecuted for seeking only to peacefully engage with the world. He presented Russia as a key proponent of international law, describing the annexation of Crimea as the result of a peaceful self-determination vote. In contrast, the United States was portrayed as a meddling hegemonic menace that, he insinuated, aids Russia’s enemies, foreign and domestic. Putin even implied that European states are vassals of the United States:

Sometimes it is even unclear whom to talk to: to the governments of certain countries or directly with their American patrons and sponsors.

The speech went on to describe international sanctions on Russia as illegitimate, with Putin arguing that sanctions are largely unrelated to Crimea or to the ongoing conflict. Instead, he insinuated, sanctions are an attempt by the United States to curtail Russia’s growth and power:

I’m sure that if these events had never happened… they [the US] would have come up with some other excuse to try to contain Russia’s growing capabilities.

None of these claims is accurate, but they are certainly consistent with the narrative advanced by the Kremlin. This is one key reason why Putin’s approval rating remains a massive 85 percent, with many Russians blaming the West for Russia’s woes. Putin thus spent much of the speech deflecting blame. In particular, he focused on Russia’s faltering economy, and while he touched on key economic concerns—the collapsing ruble, the falling price of oil, stalling economic growth, rising inflation—he largely glossed over them, blaming the West instead.

But while the content of the speech was predictable, the tone offered more insight into Russian intentions. Putin’s tone remained defiant, signaling no change in policy. The speech invoked historical struggles, reminding citizens that they survived “containment” once before, during the Cold War. On the economic front, Putin highlighted new programs encouraging entrepreneurship and self-sufficiency in manufacturing and technology, as well as promising amnesty for capital returned to Russia from abroad. He even suggested that Russia may engage in import-substitution industrialization. 

Despite the slow-motion collapse of the ruble and other economic problems, the Kremlin has no intention of backing down on the issue of Ukraine. Instead, it will attempt to mitigate the economic crisis domestically. None of this is surprising, but it does highlight how unsuccessful U.S. strategy toward Russia has been over the last year. The poor performance of the Russian economy is at least as much the result of falling global oil prices as it is of sanctions. Yet neither has served to alter the foreign policy incentives of Russian leaders. 

Now is the time for a new approach to the Russia–U.S. crisis. Conflict over Ukraine serves the interests of neither side. Rather than adding to already ineffectual sanctions, U.S. policymakers should seek a negotiated settlement with Russia to end the crisis. Otherwise, the impasse will continue. 

Jim Harper

A couple of years ago I wrote here about the Supreme Court case holding that a person cannot collect damages from the government under the Privacy Act for mental and emotional distress. It’s a narrow point, but an important one, because the harm a privacy invasion produces is often only mental and emotional distress. If such injuries aren’t recognized, the Privacy Act doesn’t offer much of a remedy.

Many privacy advocates have sought to bloat privacy regulation by lowering the “harm” bar. They argue that the creation of a privacy risk is a harm or that worrisome information practices are harmful. But I think harm rises above doing things someone might find “worrisome.” Harm can occur, as I think it may have in this case, when one’s (hidden) HIV status and thus sexual orientation is revealed. It’s shown by proving emotional distress to a judge or jury.

Rep. Gerry Connolly (D-VA) has introduced the fix for the Supreme Court’s overly narrow interpretation of the Privacy Act. His Safeguarding Individual Privacy Against Government Invasion Act of 2014 would allow for non-pecuniary damages—that is, mental and emotional distress—in Privacy Act cases.

It’s a simple fix to a contained problem in federal privacy legislation. Its passage would not only close a gap in the statute; it would also help channel the privacy discussion in the right direction, toward real harms, which include provable mental and emotional distress.

Chris Edwards


In his new book, Saving Congress from Itself, James Buckley argues that Congress should abolish the entire federal aid-to-state system to save money and improve American governance. A recent Cato study shows that there is substantial public support for reforms in that direction.

In “Public Attitudes toward Federalism,” John Samples and Emily Ekins review decades of polling data to discern views on federal policymaking vs. state/local policymaking. They find strong support for state/local primacy in many policy areas, including education, housing, transportation, welfare, and health care.

The authors find that Americans have become more strongly in favor of state/local control—as opposed to federal control—since the 1970s. For example, when asked whether “major decisions” about housing policy ought to be made at the federal level or the state/local level, just 18 percent favor the federal level today, compared with 28 percent four decades ago.

The political opening here is obvious: reformers on Capitol Hill should push to reduce the federal role—by cutting spending and regulations—in those areas where the public has a clear preference for state/local primacy. From constitutional and good governance perspectives, many federal agencies and programs ought to be eliminated, but the Samples/Ekins study indicates areas that reformers should target first.

Why is reviving federalism a politically appealing reform? Because the public has a much more favorable view of state/local governments than the federal government. Samples and Ekins find that 58 percent of people have a favorable view of local government, compared to just 32 percent for the federal government. Asked which level of government provides the most value for their tax dollars, 33 percent said the federal government and 67 percent said state/local governments. Asked whether government provides “competent service,” 31 percent agreed with regard to the federal government and 48 percent agreed with regard to local governments. On average, Americans believe that the federal government wastes 60 cents out of every dollar it spends.

The Samples/Ekins results show that self-identified Republicans have a stronger belief in decentralized policymaking than do Democrats. So reviving federalism is a ripe opportunity for the incoming Republican majorities in Congress.

***

George Will reviews Buckley’s book today in the Washington Post, and you can read more about federalism here.

Jason Bedrick

Over the weekend, Florida’s Sun-Sentinel editorialized against Florida’s scholarship tax credit law. But, as I detail at Education Next today, the editorial was rife with errors, distortions, and omissions of crucial context. Here’s just one example of many:

Rather than put the scholarship tax credit law in the context of Florida’s overall education spending, the Sun-Sentinel compares it to… Iowa.

“No state has a bigger voucher [sic] system. Last year, Florida spent $286 million on just 2.7 percent of all students. Iowa spent $13.5 million on 2.6 percent of its students.”

Setting aside the fact that the state of Florida did not “spend” even one red cent on the scholarships, this comparison is misleading. Do the editors at the Sun-Sentinel really believe that Iowa has as many students as Florida? If so, why haven’t they decried the fact that Florida spends more than $25 billion on its public schools while Iowa spends barely $5 billion? Perhaps because Florida has more than five times the number of students?

Comparing apples to apples, fewer than 10,500 students received tax-credit scholarships in Iowa last year compared to more than 69,000 in Florida. And while the tax-credit scholarships are larger in Florida than Iowa – about $4,660 on average versus about $1,090 on average – they are dwarfed by the more than $10,000 per pupil spent on average at Florida public schools.
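The per-scholarship figures can be roughly sanity-checked from the quoted totals and recipient counts; note that the counts above are “more than” figures, so these quotients only approximate the stated averages:

```python
# Dividing each state's quoted scholarship total by its quoted
# (lower-bound) recipient count gives a rough per-scholarship figure.
fl_total, fl_students = 286_000_000, 69_000
ia_total, ia_students = 13_500_000, 10_500

print(f"Florida: about ${fl_total / fl_students:,.0f} per scholarship")
print(f"Iowa:    about ${ia_total / ia_students:,.0f} per scholarship")
```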

The Sun-Sentinel owes its readers and the public a full and detailed retraction.
