When I first learnt via the Twittersphere that something known as Reinhart and Rogoff had been called into question over its methodological inconsistencies, it seemed to be just another instance of quibbling economists – unique only in that instead of occupying the corridors and libraries of universities, the quarrel was permeating my Twitter stream.
However, as demonstrated by the media firestorm over the last week or so, the revelation that Harvard Kennedy School professor Carmen Reinhart and Harvard University professor Kenneth Rogoff had made significant errors in their analysis of the impact of national debt upon GDP growth was much more than academic posturing. In the wake of all the controversy, one question is undeniably back at the forefront of public debate: how should economists analyse, interpret and then present data, and what influence should that data have over public policymaking?
Let’s go back to the beginning to try to demystify exactly what happened in this debate rich in policy/academic/data-wonkery.
In 2010, writing a working paper for the National Bureau of Economic Research, Rogoff and Reinhart asserted that economic growth slows down when debt exceeds a threshold of 90 percent of GDP, evidently a timely conclusion to make as a wave of austerity measures began to take hold on both sides of the Atlantic. As Velichka Dimitrova explains in her London School of Economics blog, the paper resonated strongly in the academic and policy world, featuring in such prestigious journals as the American Economic Review and constantly trumpeted by pro-austerity politicians as academic justification for their policies to slash spending, reduce the fiscal deficit and thus promote economic growth. However, earlier in the year, Thomas Herndon, Michael Ash and Robert Pollin from the University of Massachusetts had tried to replicate Reinhart and Rogoff’s results and concluded that they had made significant errors in the very analysis that had captivated the attention of the economic world.
Their counterargument focused on three areas of methodological failure on the part of Rogoff and Reinhart:
- Selective exclusion of available data and data gaps: The authors exclude Australia (1946-1950), New Zealand (1946-1949) and Canada (1946-1950) from their main datasets, leading to a significant reduction of the estimated real GDP growth in the highest public debt/GDP category
- Unconventional weighting of summary statistics: The authors do not discuss their decision to weight equally by country rather than by country-year, which is arbitrary and ignores the issue of serial correlation
- Coding errors: Most significantly, due to an Excel spreadsheet error, five countries were completely excluded from the sample, resulting in significant errors in average real GDP growth and the debt-to-GDP ratio in several categories
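The second point – weighting equally by country rather than by country-year – is easy to miss but can move the headline number substantially. A minimal sketch with invented figures (not the actual Reinhart-Rogoff data; countries "A" and "B" and their growth rates are purely hypothetical) shows how the two weighting schemes diverge when countries spend very different lengths of time in the high-debt bucket:

```python
# Illustrative sketch of the weighting issue, using made-up numbers:
# country A spends one year above the debt threshold, country B ten.
growth = {
    "A": [-2.0],        # 1 country-year of growth in the high-debt bucket
    "B": [2.5] * 10,    # 10 country-years in the same bucket
}

# Equal weighting by country: average each country's mean growth,
# then average those means across countries.
by_country = sum(sum(v) / len(v) for v in growth.values()) / len(growth)

# Weighting by country-year: pool every observation and average once.
pooled = [g for years in growth.values() for g in years]
by_country_year = sum(pooled) / len(pooled)

print(by_country)       # 0.25 – A's single bad year counts as much as B's decade
print(by_country_year)  # ~2.09 – each year counts once
```

Neither scheme is self-evidently "correct", which is precisely Herndon, Ash and Pollin's point: the choice is consequential and deserved explicit discussion in the original paper.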
It’s this coding error that has brought the most scorn from those in the economic world, demonstrating a fundamental lack of willingness on the part of Rogoff and Reinhart to open up their work to peer review before publishing. What’s more, when the analytical error is rectified, the results actually suggest that there is a wide range of GDP growth performances at every level of public debt among the twenty advanced economies constituting Rogoff and Reinhart’s survey. Far less conclusive than the original report, these corrected results cast into serious doubt the very notion of a threshold level of debt-to-GDP above which growth is markedly hampered.
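The mechanics of the spreadsheet error are mundane, which is part of why it stings: a formula range typed a few rows short silently drops countries from an average. A hypothetical illustration (the country names, growth figures and the exact slice are invented for the example, not taken from the actual spreadsheet):

```python
# Hypothetical sketch of how a hand-typed spreadsheet-style range can
# silently exclude rows from an average. All numbers are invented.
countries = ["Australia", "Austria", "Belgium", "Canada", "Denmark",
             "Finland", "France", "Germany", "Greece", "Ireland"]
growth = [-1.0, 0.5, 0.2, -0.3, 0.8,     # low-growth countries...
          2.0, 2.5, 1.8, 2.2, 2.0]       # ...and higher-growth ones

# Intended calculation: average all ten countries.
full_avg = sum(growth) / len(growth)          # roughly 1.07

# A range that stops five rows short drops the first five countries
# with no error message at all, inflating the average:
truncated = growth[5:]
trunc_avg = sum(truncated) / len(truncated)   # roughly 2.10
```

The lesson is less about Excel specifically than about workflow: calculations buried in cell ranges resist review, whereas scripted analysis published alongside the data can be re-run and checked line by line.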
Certainly the likes of Paul Krugman feel very strongly about this error of analysis, particularly because of “how their work was read: austerity enthusiasts trumpeted that supposed 90 percent tipping point as a proven fact and a reason to slash government spending even in the face of mass unemployment.” What’s clear is that this paper and its flawed conclusions served an influential role in the policymaking space over recent years in the context of austerity. Less clear is how that influence actually played out – was this paper the catalyst for a wave of austerity to spread or was it merely used as academic cover for those already yearning to implement a stricter fiscal policy?
What’s interested me throughout this debate has been a more abstract question, one that affects all the work we do as economists: how we use data and effectively convey it to those who are less economically literate and to those in policymaking roles.
As I discussed in my review of Nate Silver’s The Signal and the Noise, it is economists’ failure to properly explain the uncertainty in their data that can mislead the general public. So long as these failures continue, one can easily see why economists might remain the subjects of open derision by certain elements of society.
Given a seemingly perpetual lack of impetus to improve how we use data to convey economic analysis, the travails of Rogoff and Reinhart should serve as a valuable lesson to us going forward.
You can follow me on Twitter @CRJWeinberg.