Published: SEPTEMBER 2021
Inaccurate economic predictions rile politics and markets
Analysis by STAN CROCK
Writing from Washington, D.C.
President Joe Biden on Friday [Sept. 3] laid blame for the weaker than expected August jobs report squarely on the delta variant of the coronavirus and said he’d lay out more steps next week to combat it. The government on Friday reported just 235,000 jobs were created in August, far below economists’ expectations...
– MarketWatch, Sept. 3, 2021
Many news outlets had a similar take: Biden was in trouble because the 235,000-job figure reported by the Bureau of Labor Statistics was one-third of what economists had projected (720,000 according to The Wall Street Journal and 725,000 according to Bloomberg).
MarketWatch’s slant makes it look as if the problem was the jobs number, but the real problem is the erroneous predictions, which were off by more than 200%. The pernicious focus on expectations not only affects a president’s political standing and potentially policy, but also often riles markets, which can gyrate if a number beats or misses the consensus estimate. Unfortunately, the models used for estimates are inherently inaccurate, and reliance on them is artificial, unnecessary, and destructive.
Consider, for example, what would have happened if economists had overcompensated for the effects of the delta variant and had predicted a jump of only 117,500 jobs in August. The 235,000 number, double the prediction, would have made Biden look like a genius. But the economic reality would have been unchanged—a sharp drop from the million jobs created the previous month. The 235,000 jobs was, as The New York Times pointed out, a “respectable” number pre-pandemic.
The fact is that Biden is doing brilliantly by historical measures. Bill Clinton, the biggest job creator since Calvin Coolidge, averaged 2.3 million new jobs a year. Yet Biden gets pummeled by the expectations game while he is delivering at a breakneck pace: 4.7 million jobs through August 2021.
The estimates affect markets, too, of course. On Aug. 6, the government reported a 943,000 increase in jobs in July. That dwarfed the 870,000 average estimate in Bloomberg's survey of economists and the 843,000 Dow Jones survey estimate. The result: stock prices soared to record highs. And when the July ADP payroll number came in at 330,000, well below the 700,000 estimate, the Dow Jones Industrial Average plummeted 300 points.
These wrong predictions are hardly aberrations. Economists have been off an average of nearly 50% this year in their predictions for the ADP payroll jobs number – sometimes high, sometimes low, once reasonably on target – worse than a broken clock, which is at least right twice a day. (See table below.) Going back further – to December 2020, for example – predictions were even further off: the Bloomberg consensus called for a 75,000-job increase, and jobs actually fell by 123,000, a 160% mistake in the wrong direction.
The disinformation is not just about jobs. New home sales slumped 6.6% in June, while economists had forecast sales would jump 3%, a 150% error that again was directionally wrong. And economists predicted 8.5% growth in second quarter 2021 gross domestic product, well above the actual 6.5%, a 31% error.
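The error percentages cited above all follow the same arithmetic: the gap between forecast and actual, measured against the actual number. A minimal sketch (the function name and rounding are mine, not the article's):

```python
def forecast_error_pct(forecast: float, actual: float) -> float:
    """Percentage error of a forecast relative to the actual value.

    Uses |forecast - actual| / |actual|, which is why a directionally
    wrong call (forecast up, actual down) can exceed 100%.
    """
    if actual == 0:
        raise ValueError("actual must be nonzero")
    return abs(forecast - actual) / abs(actual) * 100

# August 2021 jobs: consensus 720,000 vs. actual 235,000
print(round(forecast_error_pct(720_000, 235_000)))  # 206 -> "more than 200%"

# December 2020 jobs: consensus +75,000 vs. actual -123,000
print(round(forecast_error_pct(75_000, -123_000)))  # 161 -> the "160% mistake"

# Q2 2021 GDP growth: forecast 8.5% vs. actual 6.5%
print(round(forecast_error_pct(8.5, 6.5)))          # 31
```

The same formula reproduces the roughly 150% error on June new home sales (a forecast of +3% against an actual drop of 6.6%).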
Indeed, an August 29 New York Times article noted: “After years of getting it wrong, most economists have learned not to be too optimistic.” The story quotes Neil Dutta, head of economics at the research firm Renaissance Macro, as believing his colleagues are moving toward lower estimates at exactly the wrong time. The profession thus may make a 180-degree turn and be wrong both times.
Analysts who estimate corporate earnings do no better. With 59% of S&P 500 companies reporting second quarter earnings, 88% beat Wall Street's earnings per share estimates. So these analysts are wrong 88% of the time.
All of this raises troubling questions since this tragedy of errors has real consequences.
What explains the shoddy performance? As Danish physicist Niels Bohr once said, “Prediction is very difficult, especially if it's about the future!” It’s especially dicey in the unprecedented pandemic environment. The inexact modeling process itself aggravates matters. In a lecture he has given multiple times, Ty Ferre, professor of hydrology and atmospheric sciences at the University of Arizona, noted: “Our data are sparse. Our models are incomplete. But we must decide.”
Uncertain data are not the only problem. Ferre, whose models can be used for such things as estimating the amount of groundwater that can be pumped sustainably into the future, argues that even if you had perfect data for, say, a cost-benefit analysis, and weighed them on a fulcrum, you couldn’t be sure you were using the right fulcrum. He calls that structural uncertainty.
Inherent in models are “unexplained variances” – differences from the actual numbers that no one can explain. “Forecasting methods, tools, processes, and measures have enormous amounts of error built in,” says Ralph Finos, who holds a PhD in applied research in psychology and program evaluation and has spent decades forecasting business trends in the information technology industry. “They typically don’t account for even half of the variables that underlie the object of prediction.” In addition, predictive models, like polls, carry margins of error and confidence intervals – economists may have only a 90% or 95% confidence level in their projections.
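To see what a confidence interval does to a point forecast, here is an illustrative sketch under a normal approximation; the 150,000-job standard error is an invented assumption, not a figure from any survey:

```python
# z-scores for two common confidence levels (normal approximation)
Z = {90: 1.645, 95: 1.960}

def confidence_interval(point: float, std_err: float, level: int) -> tuple:
    """Return (low, high) bounds around a point forecast."""
    z = Z[level]
    return (point - z * std_err, point + z * std_err)

# Hypothetical: a 720,000-job point forecast with an assumed
# standard error of 150,000 jobs.
low, high = confidence_interval(720_000, 150_000, 95)
print(low, high)  # 426000.0 1014000.0
```

Even an interval that wide, running from roughly 426,000 to 1,014,000 jobs, would not have contained the actual August figure of 235,000 – a reminder that the single headline number hides how uncertain these projections are.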
Beyond the models, judgment plays a role. The delta strain may suppress growth while the huge increase in personal savings during the pandemic could lead to a spike in spending, which would spur growth. Economists looking at the same data thus could come to defensible but opposite conclusions about where the economy is headed.
Investment analysts face a different problem: do companies game the system by lowballing earnings guidance so that they can beat estimates and give their stock a bump? Whatever the explanation, something clearly is amiss.
Why are these error-prone estimators still employed and hired by clients? They are smart people, and clients are hungry for any leg up they think they can get on competitors or markets. The customers are sophisticated enough to know the shortcomings of estimates. And “depending on what you're doing (i.e. trying to manage or nudge the U.S. economy), estimates in skilled hands can be a useful tool,” Finos says.
But why do media outlets quote them? Habit and laziness. These experts are the reflexive go-to sources for employment and other economic and financial stories, but it’s far from clear that anyone checks their records. As a former journalist, I can tell you that aside from trotting out Afghanistan “experts” who have been wrong for 20 years, reporters normally would never rely on sources who are so wrong so often.
What should be done to protect the public against this disinformation?
Economists and investment analysts must do a better job of refining their methods so that their estimates resemble reality more frequently. Refining methods faces two big challenges, however. One is money. It’s a costly process. Another is confirmation bias, which weds people to processes they think are valid. They favor information that confirms their beliefs.
For investment analysts, the needed change is clear. To avoid being wrong 88% of the time, they need to adjust their estimates upward to reflect any games companies are playing and any flaws in current processes.
The news media should eliminate references to predictions in stories. The Washington Post never mentioned that the August 2021 jobs number was lower than expected in its August jobs piece, which focused on an internal administration debate on whether to extend unemployment benefits. (Every Monday, however, the Post’s front-page calendar for the week notes the estimates for forthcoming economic reports.) The media should focus instead on the previous month’s number, year-to-date numbers, and comparisons with the same period the previous year and before the pandemic. That is, actual numbers instead of fatuous estimates.
A less sweeping approach would be, heaven forbid, to provide some context. Stories could note that the estimates have been off by 50% so far this year. Or they could publish the range of estimates that led to the consensus estimate. If the 720,000 consensus was an average of estimates that ranged from 100,000 to 1.3 million, readers would realize the single consensus number is not magical – or worth very much. In fact, Ferre points out that as an average of estimates, the consensus may be an outlier – a number no one predicted. The big money that pays for private estimates may know this, but the public doesn’t.
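Ferre's point that a consensus can be a number no forecaster actually submitted is easy to demonstrate with invented figures (the panel below is hypothetical, chosen only so the mean lands near a round number):

```python
# Hypothetical panel of jobs estimates, in thousands.
estimates = [100, 450, 600, 700, 800, 950, 1_100, 1_300]

# The "consensus" is just the average of the panel.
consensus = sum(estimates) / len(estimates)
print(consensus)                # 750.0
print(consensus in estimates)   # False -- no panelist predicted it
```

The headline consensus of 750,000 here matches none of the eight individual estimates, and the 100,000-to-1.3-million spread behind it never reaches the reader.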
Estimates distort the political debate, markets, and possibly policy. President Biden said the August jobs numbers were less than he expected, and one wonders how or if the expectations game will affect the debate on extending unemployment benefits or other programs. That would be worse than unfortunate. Real data, not faulty forecasts, should guide decision-making. The media should end its undue reliance on dubious estimates to create a healthier environment for that outcome.
Stan Crock was a journalist for three decades and wrote for such business outlets as Business Week, The Wall Street Journal, Investopedia, and TheStreet.com.