
Tuesday, November 06, 2007

"When you're done, quit" -- A least effort principle

Statistician Andrew Gelman, in a skewering of the "Beautiful people have more girls" research, notes:

"The modern solution to difficulties of statistical communication is to have more open exchange of methods and ideas. More transparency is apparently needed, however: For example, Psychology Today did not seem to notice the Gelman (2007a) critique of Kanazwawa's findings .. or other methdological criticismsthat have appeared."

http://www.stat.columbia.edu/~gelman/research/unpublished/power.pdf

Gelman fails to consider the payoff matrix for the harried writer. The writer learns of a set of exciting findings, confirms that they have been vetted in some way (e.g., accepted for publication), and then has a decision to make:
a. Go with a nice story and start writing the next one.
b. Do further investigation, which (1) takes more time, and (2) has a decent probability of either complicating the story or killing it completely because contradictory evidence is found.

Note that in most cases, (a) is the economic choice. Even if the story gets attacked later -- well, that's ANOTHER story, isn't it?
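
To make the incentive concrete, here is a minimal sketch of that payoff comparison. Every payoff and probability in it is hypothetical, chosen only to illustrate the shape of the decision, not measured from anywhere:

# Hypothetical payoff sketch for the harried writer's decision.
# All payoffs and probabilities below are made up for illustration.

# Option (a): write the nice story now.
payoff_story = 1.0            # value of one published story
payoff_followup = 1.0         # if it's attacked later, that's ANOTHER story
p_attacked = 0.3              # chance the story draws a follow-up-worthy attack
expected_a = payoff_story + p_attacked * payoff_followup

# Option (b): investigate further before writing.
cost_extra_time = 0.5         # opportunity cost of the extra reporting
p_story_killed = 0.4          # chance contradictory evidence kills the story
expected_b = (1 - p_story_killed) * payoff_story - cost_extra_time

print(f"Expected payoff, go with the story: {expected_a:.2f}")   # 1.30
print(f"Expected payoff, investigate first: {expected_b:.2f}")   # 0.10

Under almost any plausible numbers, (a) wins, which is the point.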

The risk of investigation is illustrated by a recent article from a Scientific American writer for the column "Fact or Fiction," which is designed for "investigations into popular myths". For example, the July column definitively concluded that premium gas is useless for standard cars.

On the web, the entry shows up as follows:
"Strange but True: Helmets Attract Cars to Cyclists"

a conclusion that was probably the writer's original intention, resting on a junk-science study by Ian Walker (for more about that, see here). But by gathering more sources, the article ends up more wishy-washy, with two or three experts on each side. By the time it reaches the October 2007 print edition of Scientific American, it carries the more accurate headline:

"Do Helmets Attract Cars to Cyclists?"

In research, the "When you're done, quit" principle discourages validation and replication. If you have a result that's publication-ready or client-ready, mostly bad things can happen when you validate it. This is particularly true if you had a lot of trouble finding an acceptable result in the first place.

The "When you're done, quit" principle tells you to quit when you get minimally presentable results and then go on to the next task ... and there's always a next task.

Pause for anecdote:

Two weeks ago, I had a younger researcher in my office who'd been analyzing a subset of data (200,000+ observations, about 10% of the total). This was intended to provide ample holdout for validation, but also had the virtue of making the exploratory models faster to run. In such circumstances, there's no reason not to try as many hypotheses as you feel like trying, because the validation sets are your safety net. But, eventually, you have to figure out what you've got and end this step of the research.
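
In code, that setup might look roughly like the following sketch. I'm assuming scikit-learn and a generic tabular file; the file name, the "outcome" column, and the model choice are stand-ins of mine, not details of the actual project:

# Sketch of the exploratory-subset workflow described above.
# The file, columns, and model are hypothetical stand-ins; assumes
# numeric features and a binary "outcome" column.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("client_data.csv")      # roughly 2 million rows in the anecdote

# Carve off about 10% for fast exploratory modeling; the other 90% is holdout.
explore, holdout = train_test_split(df, train_size=0.10, random_state=0)

features = [c for c in df.columns if c != "outcome"]
model = LogisticRegression(max_iter=1000)
model.fit(explore[features], explore["outcome"])

# Try as many hypotheses as you like against `explore`;
# the untouched holdout is the safety net for later validation.
print("Exploratory accuracy:", model.score(explore[features], explore["outcome"]))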

She had an OK model. Not wonderful, but adequate. It seemed like we were at the point of diminishing returns. Then she asked the question: "But what if it doesn't validate?"

"Well, then we see what does and doesn't validate and go on from there."

Given her expression (and the client deadline), it was clear she was tempted to forget the validation, draw up the PowerPoint slides, and go with the results as is. I tried again: "And when it does validate, think of how much more confident you'll feel in the results. And validation is a lot like doing science!"

Luckily, all went well. She validated on an additional set of 400,000 and got results that graphed right on top of the first results and with just slightly LOWER prediction error. She presented these results confidently to a skeptical audience that by the end was convinced she knew her stuff.
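
For what it's worth, the validation step itself is mechanically simple. A self-contained sketch, repeating the same hypothetical file and column names as the earlier one:

# Check whether the exploratory model's error holds up on a fresh validation slice.
# File, columns, and model remain hypothetical stand-ins.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

df = pd.read_csv("client_data.csv")
explore, holdout = train_test_split(df, train_size=0.10, random_state=0)
validation = holdout.sample(n=min(400_000, len(holdout)), random_state=1)

features = [c for c in df.columns if c != "outcome"]
model = LogisticRegression(max_iter=1000).fit(explore[features], explore["outcome"])

for name, part in [("exploratory", explore), ("validation", validation)]:
    err = log_loss(part["outcome"], model.predict_proba(part[features])[:, 1])
    print(f"{name} log loss: {err:.4f}")
# Two numbers that sit on top of each other are the happy outcome described here;
# a big gap means it's time to see what does and doesn't validate.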

But does this work in a media context? Or even in a peer-reviewed journal context, unless the editor demands validation?
