"The whole intellectual edifice, however, collapsed in the summer of last year because the data inputted into the risk management models generally covered only the past two decades — a period of euphoria."
This made me angry, for several reasons:
#1: The past decades weren't all rosy
So we're blaming (a) data entry clerks? (b) the high cost of data storage? (c) sheer laziness? Or (d) a failure to even look at the last 20 years very closely?
I'm going with (d).
I'm sure others can note various unfortunate events in the last 20 years that indicate all was not positive: the failure of the giant hedge fund Long Term Capital Management, the Russian meltdown, the Thai meltdown, various problems in Argentina, the failure of a US municipal bond insurer ... these are just a few things one might mention.
In terms of major data series being available: sure, there is more data now, but DRI already had the major US econometric data series online in the late 1970s.
#2: There is a lot of poor statistical judgment out there.
Andrew, a prominent Bayesian statistician, suggests better use of Bayesian priors (put nonstatistically, this means incorporating some notion of what the overall universe might look like in a broader sense). But there's the question of whose prior distributions we might want to use.
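To make concrete what "incorporating a prior" buys you here, consider a minimal sketch (my own illustration, not Andrew's specific proposal, and the numbers are hypothetical): estimating the annual probability of a market crisis from 20 crisis-free years. The frequency-only estimate is 0/20 = 0, i.e. "crises are impossible," while a Beta prior encoding a broader view of how markets have behaved historically keeps the estimate away from zero.

```python
def posterior_mean(events, years, alpha, beta):
    """Posterior mean for a Beta(alpha, beta) prior on an annual event
    probability, updated with `events` occurrences in `years` years
    (the standard Beta-Binomial conjugate update)."""
    return (alpha + events) / (alpha + beta + years)

# No crises observed in 20 years of data:
mle = 0 / 20                              # 0.0 -- "it cannot happen"

# A hypothetical prior with mean 0.1 (roughly one crisis per decade):
informed = posterior_mean(0, 20, alpha=1, beta=9)

print(mle)        # 0.0
print(informed)   # 1/30, about 0.033 -- small, but not impossible
```

The point is not the particular prior chosen (that is exactly the "whose prior?" question) but that the data-only estimate collapses to zero while any reasonable prior does not.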
There doesn't seem to be much evidence that statisticians are less blind than other people. For example, there is this humorous juxtaposition of articles:
The August 2007 American Statistician (an ASA journal) has three articles on "The Black Swan", a book in which Taleb very unsubtly argued that financial statistical models and financial professionals were underestimating the risk of unusual events.
In the three statisticians' reviews, there's a heavy dose of "real statisticians wouldn't make the errors Taleb accuses us of", such as this quote: "the book is statistically reckless about many issues in our profession". Taleb has a reply.
IMMEDIATELY AFTER these articles is an article by Frey providing a method to use in case you want to be sure a Bayesian probability evaluates to exactly 0 or 1. [The classic problem is how to assign a probability of 1 to "the sun will rise tomorrow".]
Of course, assigning a probability of 0 (or far too low a probability) to events such as "Black Swans", which have never occurred but actually can occur, is exactly the problem Taleb was talking about.
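The textbook treatment of the sunrise problem, Laplace's rule of succession, is worth contrasting here (this is the classical formula, not Frey's method): it deliberately never returns exactly 0 or 1, so never-yet-observed events keep a nonzero probability.

```python
def rule_of_succession(successes, trials):
    """Laplace's rule of succession: P(next success) = (s + 1) / (n + 2).
    Equivalent to a uniform Beta(1, 1) prior; never exactly 0 or 1."""
    return (successes + 1) / (trials + 2)

# The sun has risen every day on record, yet the estimate stays below 1:
print(rule_of_succession(10000, 10000))   # 10001/10002, just under 1

# An event never yet observed still gets a small nonzero probability:
print(rule_of_succession(0, 10000))       # 1/10002, small but not 0
```

Which is the whole tension: the sunrise intuition demands a probability of exactly 1, while the Black Swan lesson is that forcing probabilities to 0 or 1 is precisely how unusual events get defined out of the model.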
Note that current cosmological theory does NOT assign a probability of 1 to "the sun will rise tomorrow". That represents an earlier, more theologically based conception of the universe.