## Sunday, December 18, 2011

### The ragged edge of control

My friend Chris writes:


> “Operating at the ragged edge of control means inevitably that things are going to get away from you! That's the deal. Our everyday sense that things will simply continue, be alright, that the past will be the best predictor of the future--these are assumptions that don't reliably work out here. Beyond the horizon of personal control looms a violent world of abrupt and no doubt final consequences. But if you're eager to take that plunge into essential experience, the rest of life gradually becomes just the place everybody else lives--but not you.”

This reminds me that I’m teaching forecasting next term. This is the second time I’ve taught it. It was odd to start teaching forecasting at a time when we were in (and really still are in) a severe recession due in part to idiot forecasts and the fraudulent ability to market those idiot forecasts as AAA rated.

The underlying idiot assumptions were laid out by Taleb in *The Black Swan*, before the bubble burst. I’ve largely worked in different areas than Taleb, so I’m going to lay out a few key points a little differently.

1. Sure, the past is the best predictor of the future. As Damon Runyon said, “The race is not always to the swift, nor the battle to the strong, but that’s the way to bet.”

2. #1 is true until it isn’t. But turning points are very, very difficult to predict, and are usually seen only in hindsight.

3. People underestimate the error in forecasts. Even when it’s on the page, they don’t want to see it.

4. Worse, statistical estimates of error are often themselves underestimates. In general, given the same data and even the same modeling family, a good statistician will give a broader estimate of error than a bad statistician, and will tend to take longer to produce it. This does not make statisticians any more popular.

   a. A simple example may help. The error around a survey estimate is often computed simply as 2·sqrt(p(1-p)/n), as if the data came from a simple random sample (rather than a complex design with a generally larger error term), and as if the non-respondents had the same opinions as the respondents (probably not completely true).

   b. Another example: companies may pick the supplier who promises the smaller error term. If that promise is wrong, it will likely become apparent only after some delay, and by then the cost of switching to a different supplier may not be worth it.

5. If we are dealing with political polling, as in point #4, the underestimated error is rarely critical. But if we are dealing with risk assessment in financial instruments, the difference can be catastrophic – and has been.
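Point 4a can be sketched in a few lines of Python. The design effect of 1.5 below is a hypothetical value picked for illustration; real complex designs vary, but anything above 1 means the naive simple-random-sample formula understates the error:

```python
import math

def srs_margin_of_error(p, n):
    """Naive 95% margin of error, treating the data as a simple random sample."""
    return 2 * math.sqrt(p * (1 - p) / n)

def adjusted_margin_of_error(p, n, deff):
    """Margin of error inflated by a design effect (deff > 1 for most complex samples)."""
    return srs_margin_of_error(p, n) * math.sqrt(deff)

p, n = 0.52, 1000  # e.g., 52% support in a poll of 1,000 respondents
naive = srs_margin_of_error(p, n)
adjusted = adjusted_margin_of_error(p, n, deff=1.5)  # deff=1.5 is hypothetical

print(f"naive SRS margin of error: ±{naive:.3f}")      # about ±3.2 points
print(f"design-adjusted margin:    ±{adjusted:.3f}")   # about ±3.9 points
```

The headline "±3 points" and the more honest "±4 points" look close on paper, which is exactly why people don't want to see the difference — until, per point #5, the stakes are high enough that it matters.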

All of which means it’s a humbling time to be teaching forecasting methods. These methods are of value, but have to be taken with some skepticism. Not too much, though. Graduate business school isn’t really preparing students to be “eager to take that plunge into essential experience”. It’s preparing them to have better careers and build better institutions, hopefully with integrity.