April 17, 2013
Carmen Reinhart and Kenneth Rogoff of Harvard University fell victim to one of the classic blunders of research and analysis: proofread your spreadsheets!
These established academics are the authors of “This Time is Different: Eight Centuries of Financial Folly,” an analysis that presents a correlation between debt-to-GDP ratios above 90% and stymied economic growth. Their work, which has been broadly cited in economic policy debates, is now being called into question due to errors in their Excel dataset. The Financial Times’ headline on the story reads: “Harvard duo defend case for austerity; Rogoff and Reinhart accused of sloppy analysis”.
In my last blog post I asserted that Excel was the (shameful) common thread that ties Pricers together, and I stand by this claim. I’ve never actually encountered Excel calculating something mathematically wrong, but I’ve made and seen my fair share of mistakes in spreadsheets. In my experience, mistakes in Excel are all user error, stemming from a lack of understanding (averaging averages) or fat fingering (mistaken cell references). Yesterday’s big news in the world of economic policy serves as a reminder of the dangers of relying on Excel and ad hoc analysis.
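To make the "averaging averages" pitfall concrete, here is a minimal sketch in Python with made-up deal data: the simple mean of per-group averages only equals the true overall average when every group is the same size.

```python
# Illustrative only: two sales regions with different deal counts.
regions = {
    "North": [10.0, 10.0, 10.0, 10.0],  # four deals at $10
    "South": [50.0],                    # one deal at $50
}

# Wrong: a simple average of the two regional averages.
regional_avgs = [sum(deals) / len(deals) for deals in regions.values()]
avg_of_avgs = sum(regional_avgs) / len(regional_avgs)  # (10 + 50) / 2 = 30.0

# Right: average over all deals, which weights each region by deal count.
all_deals = [d for deals in regions.values() for d in deals]
true_avg = sum(all_deals) / len(all_deals)             # 90 / 5 = 18.0

print(avg_of_avgs, true_avg)  # 30.0 vs 18.0
```

In a spreadsheet this is one innocent-looking `AVERAGE` over a column of subtotals, and nothing flags it as wrong.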
When you’re conducting a business review, you want to ensure it’s not a profitability witch hunt, but rather a systematic, repeatable process. If possible, you want your organization to move toward a single source of truth for your pricing and profitability measures. One of the benefits of production-level pricing software is that the implementation process includes comprehensive user acceptance testing and professional quality assurance (QA) reviews.
Copious amounts of common sense need to be applied during the QA process to ferret out bad data and incorrect calculations before the software hits production. Before any coding starts, a series of reasonability tests should be documented so that at least the obvious things (Does your pricing waterfall go up or down? How much revenue do your transactions represent?) are caught early in the process. This rigor could and should be applied to testing of crucial Excel analysis, but commonly isn’t. There is a field of academic study tracking the frequency of errors in spreadsheets and quantifying their impact, and it suggests spreadsheets are almost never flawlessly constructed. (See Panko 1998, “What We Know About Spreadsheet Errors”.)
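The reasonability tests above can be scripted rather than eyeballed. Below is a hedged sketch in Python; the field names (`list_price`, `invoice_price`, `pocket_price`, `revenue`) and the tolerance are hypothetical stand-ins for whatever your transaction data actually contains.

```python
# Illustrative transaction records; in practice these come from your data load.
transactions = [
    {"list_price": 100.0, "invoice_price": 92.0, "pocket_price": 85.0, "revenue": 85.0},
    {"list_price": 200.0, "invoice_price": 180.0, "pocket_price": 170.0, "revenue": 170.0},
]

def waterfall_descends(txn):
    """A pricing waterfall should only go down: list >= invoice >= pocket."""
    return txn["list_price"] >= txn["invoice_price"] >= txn["pocket_price"]

def revenue_covered(txns, expected_total, tolerance=0.05):
    """Summed transaction revenue should land within tolerance of a known total."""
    total = sum(t["revenue"] for t in txns)
    return abs(total - expected_total) / expected_total <= tolerance

all_descend = all(waterfall_descends(t) for t in transactions)
coverage_ok = revenue_covered(transactions, expected_total=255.0)
print(all_descend, coverage_ok)
```

The point is not the specific checks but that they are written down once and re-run on every data refresh, instead of living in someone's head.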
Below is a table we recommend our clients use during the data validation process:
Later, the application should be vetted against existing reporting or, ironically, its calculations should be re-run in Excel or other systems for verification. The schadenfreude we experience in watching Reinhart and Rogoff defend their research should be channeled into preventing similar mistakes in our own work.