Economic forecasts are almost always wrong, often wildly so. Our lack of success in what most economists see as a minor subfield reflects badly on the entire discipline, so why isn't it done better?
Complexity is one answer: small errors accumulate and are amplified as they work their way through a forecasting model. Data is another: third-quarter GDP growth numbers won't be available until the end of November, and they will be revised five times before the final numbers are published in May 2014 (revisions of half a percentage point are typical). These and other issues pose significant technical problems, and progress in dealing with them is slow.
Forecasters who work outside academia don't have the time or resources to give more than cursory attention to these issues, so they rely on a mix of models and subjective opinion to produce their projections. There's nothing wrong with using intuition in a forecast, but its contribution should be transparent. When deficits were high, the Department of Finance developed the bad habit of basing its budgets on implausibly optimistic scenarios. By 1994, its credibility had eroded to the point where Ottawa felt obliged to abandon its own forecasting exercise and simply use an average of private sector forecasts. This may have succeeded in removing a talking point from opposition critics, but did it just substitute one source of bias for another?
Using the average of private-sector forecasts makes sense if those forecasts are produced independently. But instead of trying to produce forecasts that match the data, private forecasters seem to be trying to match each other. One study notes that "[t]he range of forecasts underestimates the degree of uncertainty facing forecasters, sometimes substantially."
As ever, the problem is one of incentives. A plausible hypothesis is that private-sector forecasters face the same incentives as fund managers do: it is relative, not absolute, performance that matters. The worst-case scenario for them is not making a bad forecast; it's making a forecast that is significantly worse than their competitors'. As this study in Canadian Public Policy suggests, "[i]t may be the case that the incentives to offer accurate forecasts are outweighed by the disincentives associated with appearing to be too far off the mark."
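The mechanics of why herding makes the spread of published forecasts understate real uncertainty can be sketched in a toy simulation. Everything here is an illustrative assumption, not the model from any study cited above: forecasters each see an equally noisy signal of the true number, and a "herding" forecaster shades their published number toward the group average.

```python
import random
import statistics

def simulate(weight_on_consensus, n_forecasters=20, n_rounds=2000):
    """Toy model (illustrative assumptions throughout): each forecaster
    sees truth plus independent noise; herders publish a weighted average
    of their own signal and the consensus of everyone's signals.
    Returns (average spread of published forecasts, RMSE of consensus)."""
    random.seed(12345)  # same draws for every herding weight, for comparability
    dispersions, sq_errors = [], []
    for _ in range(n_rounds):
        truth = random.gauss(2.0, 1.0)  # "true" GDP growth, arbitrary units
        signals = [truth + random.gauss(0.0, 1.0) for _ in range(n_forecasters)]
        consensus_signal = sum(signals) / len(signals)
        published = [(1 - weight_on_consensus) * s
                     + weight_on_consensus * consensus_signal for s in signals]
        dispersions.append(statistics.pstdev(published))
        consensus = sum(published) / len(published)
        sq_errors.append((consensus - truth) ** 2)
    return (sum(dispersions) / len(dispersions),
            (sum(sq_errors) / len(sq_errors)) ** 0.5)

disp_indep, err_indep = simulate(0.0)  # independent forecasts
disp_herd, err_herd = simulate(0.8)    # heavy herding

print(f"independent: spread {disp_indep:.2f}, consensus RMSE {err_indep:.2f}")
print(f"herding:     spread {disp_herd:.2f}, consensus RMSE {err_herd:.2f}")
```

Because shading toward the group mean leaves the mean itself unchanged, the consensus forecast is exactly as accurate either way; what collapses is the visible disagreement among forecasters, so the range of published numbers no longer tells you how uncertain the forecast really is.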
One of the main lessons of economics is that people respond to incentives. If the government intends to continue its practice of relying on private-sector forecasts, then it might want to think about giving forecasters a reason to aim at being right, instead of just being less wrong than their competitors.
Stephen Gordon is a professor of economics at Laval University in Quebec City and a fellow of the Centre interuniversitaire sur le risque, les politiques économiques et l'emploi (CIRPÉE). He also maintains the economics blog Worthwhile Canadian Initiative.