Last Updated Aug 20, 2007 6:18 PM EDT
There's also the whole judgmental approach to forecasting, where managers tap their knowledge and experience to supply information that statistical forecasting can't really dig into. (Example: a sales manager knows his customer is about to place an uncharacteristically huge order.) When the dust settles, however, judgmental and analytical techniques may fall short of expectations. But the first thing to realize is that expectations about forecast accuracy are often totally unreasonable.
There is, indeed, an upper limit to forecast accuracy -- and it sure isn't 100 percent! If demand for a particular service is highly volatile, you may only be able to get a forecast with 60 percent accuracy at best. If demand for a particular service is extremely stable, you may reasonably be able to expect a forecast with 90 percent accuracy. How can you know what forecast accuracy is reasonable to expect? Use the naïve method as a baseline -- if your advanced forecasts aren't beating the naïve, then you'd better make some changes, fast.
What is this so-called "naïve" method?
It generally comes in two flavors. The easiest method is just to say "what happened this month (or week or quarter or year) will happen next month," and just forecast incrementally like that. (You can see why it's called "naïve.") But if there's any seasonality to your demand, there's another type of naïve forecast that's a lot more effective and not a bit harder to calculate.
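That "last value carried forward" idea is almost trivially simple to express in code. Here's a minimal sketch (the demand numbers are made up for illustration):

```python
# Naive forecast: next period's demand equals the most recent period's actual.
demand = [120, 135, 128, 142, 150]  # monthly demand, oldest first (illustrative)

def naive_forecast(history):
    """Forecast the next period as a repeat of the most recent one."""
    return history[-1]

print(naive_forecast(demand))  # forecasts 150 for next month
```

Every new actual simply becomes next period's forecast -- no model fitting, no parameters, nothing to tune.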
Incidentally, it's called the seasonal naïve method, and it just says "what happens in Q3 (or any time interval) this year will happen in Q3 next year." And while there are much more sophisticated methods, you'll never know whether they're actually more accurate (and therefore more valuable) unless you test them against a simple naïve forecast. And even more worrisome -- your sophisticated techniques may actually make things worse! A SAS Institute white paper cautions against this peril of perils:
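The seasonal variant is just as easy: instead of looking back one period, look back one full season. A minimal sketch, again with made-up quarterly numbers:

```python
# Seasonal naive forecast: next period equals the same period one season ago.
demand = [80, 95, 140, 110,   # year 1: Q1..Q4 (illustrative)
          85, 100, 150, 115]  # year 2: Q1..Q4

def seasonal_naive_forecast(history, season_length=4):
    """Forecast the next period as the value one full season earlier."""
    return history[-season_length]

print(seasonal_naive_forecast(demand))  # forecasts 85 for year 3 Q1
```

With weekly data you'd set `season_length=52`, with monthly data `12`, and so on -- the method itself never changes.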
The naïve forecast provides a baseline level of accuracy against which all other forecasting efforts must be compared. Very few companies utilize naïve models, but everyone should. If you find, for example, that a naïve model forecasts your business with 70 percent accuracy, but your existing systems and processes generate forecasts that are only 60 percent accurate, then something is terribly wrong!
In other words, don't have blind faith that more math means a better forecast. And don't think that more input from managers and executives always improves forecasts, either. Just test your forecast (or have your analyst test her forecast) against the naïve one to be sure that all your highfalutin forecasts are worth the investment.
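The comparison the white paper describes is easy to run yourself. One common way to get a "percent accuracy" figure is 100 percent minus the mean absolute percentage error (MAPE); the sketch below uses that convention, with made-up actuals and forecasts:

```python
# Benchmark a forecasting system against the naive baseline.
# "Percent accuracy" here is 100% minus MAPE -- one common convention.
actuals = [100, 110, 105, 120]
naive_fcst = [95, 100, 110, 105]   # each period's forecast = prior actual
system_fcst = [130, 90, 140, 95]   # a hypothetical expensive system's forecasts

def percent_accuracy(forecast, actual):
    """100 * (1 - mean absolute percentage error)."""
    mape = sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)
    return 100 * (1 - mape)

naive_acc = percent_accuracy(naive_fcst, actuals)
system_acc = percent_accuracy(system_fcst, actuals)
if system_acc < naive_acc:
    print("Something is terribly wrong!")
```

If your system can't clear this bar, every dollar spent on it is buying you worse forecasts than doing almost nothing at all.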
All of this sounds unfathomable -- how could million-dollar systems and elaborate collaborative processes produce worse forecasts than a naïve model -- but it happens every day. Until you've [checked against the naïve] and proven otherwise, don't be so sure it isn't happening at your organization.