
Monday, November 21, 2016

Trump needs an enemy, and a weakened press is it.

So here's the Trump strategy as I see it.

You can't blame Democrats for much because Republicans control the Senate, the House, the Presidency, and most of the governors and state legislators.

Blaming liberals (a subset of the Democratic party) makes even less sense. Liberals control ... what, exactly?

But, he needs The Enemy, and that's likely to be The Mainstream Media.

Note that as of July 2016, Donald Trump had over 10 million Twitter followers. That's a bully pulpit.

The combined circulation of the three largest newspapers is considerably less. The circulation of the daily New York Times is 2.2 million (print 590,000), the Wall Street Journal 2.3 million (print 1.4 million). USA Today (hardly known for hard-hitting investigative reporting) is at 3.3 million (print 1.2 million).
And, although Murdoch seems to have left the WSJ pretty much alone until recently, let's not forget it's controlled by the same person who controls Fox News.

In addition, investigative reporting is expensive at a time when newspapers are losing revenue and circulation. Celebrity news is popular and cheap for all media, as are those shows on CNN where two "experts" yell over each other repetitively.

Evidently the first post-election meeting of the press with Trump was not encouraging, and was compared to a firing squad.


The meeting was off the record, but, of course, Trump spokeswoman Kellyanne Conway told reporters the gathering went well.

“Excellent meetings with the top executives of the major networks,” she said during a gaggle in the lobby of Trump Tower. “Pretty unprecedented meeting we put together in two days.”

Monday, November 14, 2016

How I would fix Obamacare, part 1

I'm convinced part of the late surge in Trump voting over Clinton voting was the large increases in Obamacare coverage costs. This is a real pocketbook issue for people who have to buy on the exchanges.

Not helping at all were the Jonathan Gruber interviews (he's the economist who had a strong hand in drafting Obamacare).  He minimized the problems with the increases (fine for an academic with paid healthcare to say) and said the solution was to substantially increase the penalty for not buying insurance.  In other words, penalize those who feel they can't afford to be insured.  This is fine economic theory, but sounds bloodless to those actually in the situation of buying unsubsidized policies on the exchanges.

Side note: I had to buy a policy on the Obamacare exchange for a few months between the end of my COBRA coverage and the beginning of Medicare.  This was over a year after they had supposedly fixed the site after the initial problems.  The site was still very difficult to use, slow, and used terminology that only a bureaucrat could love.  Worse yet, when I became eligible for Medicare I had a lot of trouble cancelling the policy -- it couldn't be done via the website, and after calling for help I sat on hold for a long time, then when I got a human they hung up on me as soon as I described my problem.  Luckily, I was paying by check and could just not pay until the insurer cancelled me. (The insurer explained that they couldn't cancel the policy themselves, because it had been purchased through the exchange.)  I kept comparing this to my experience shopping on Amazon.

So, what would I do?


1. I would get employers out of the picture entirely. Involving employers means there is a huge incentive to hire part-time workers, who do not get health care benefits.  In retirement, I'm still working 20% at my old employer, and teaching one course a semester at a local university. In neither case do I get any of the health care benefits that full time employees get.

Part-time work is fine if you're a retiree who doesn't need the money, but not good if you are trying to raise a family or move forward in a career. Yet current government policy encourages it. Also, since many employers self-insure, there is pressure not to hire older, potentially sicker employees, or to get rid of them. Yes, there are laws against this, but unless there's a blatant violation there won't be any repercussions.

We could easily replace this system by requiring an employer to pay so much per hour into a fund. If this were $2 an hour, then a full-time (40-hour) worker would have $80 a week put into the fund, and someone working 8 hours a week would have $16 a week put in.
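The arithmetic is simple enough to sketch in code (the $2-per-hour rate is just the illustrative figure above, not a proposal for an actual rate):

```python
def weekly_fund_contribution(hours_per_week, rate_per_hour=2.00):
    """Employer contribution to a worker's health fund for one week."""
    return hours_per_week * rate_per_hour

# A full-time (40-hour) worker accrues $80/week; an 8-hour worker, $16/week.
print(weekly_fund_contribution(40))  # 80.0
print(weekly_fund_contribution(8))   # 16.0
```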

2. I would require only catastrophic coverage. This is a less expensive mandate, and also is what people most fear -- being bankrupted by suddenly having a catastrophic accident or illness.

This has the side effect of being less controversial than the broader coverage that led to the Hobby Lobby case.

This doesn't mean you couldn't buy a more expensive plan, just that you wouldn't need to.

To be continued ...

Wednesday, November 09, 2016

What went wrong with the presidential polling pundits?

Prediction Overconfidence

As I write this, it is 5 a.m. November 9, 2016.  I got up to see if, perhaps, Clinton had pulled out an unlikely victory. She had not. Trump will be president.

The predictors did not do a good job. Here’s what I got when Googling the Huffington Post presidential prediction a few minutes ago (i.e. AFTER the election):

The Huffington modelers were outliers, but let's look at what the major modeling groups said the day before the election[1]:

New York Times: 84% chance Democrats will win the presidency
FiveThirtyEight: 64%
Huffington Post: 99%
PW: 89%
PEC: >99%
DK: 87%
Cook: Lean Dem
Rothenberg and Gonzales: Lean Dem
Sabato: Likely Dem

So where did they go wrong? Certainly there are difficulties in polling now, with nonresponse rates being very high.  Pew has done a series of studies using the same methodology over the years, so we can compare response rates[2]:

This makes it a challenge to adjust for these nonresponse rates, which are not random. I’d worried earlier that there might be some decent sized pocket of Trump voters who weren’t admitting they were Trump voters, because they thought that was a socially unpopular thing to do. That would technically be a bias, and the bias would be similar (correlated) across all states and polls.

And the results are likely to be within the margin of error of the individual polls. But, still, let’s not gloss over the fact that, in the end, there was a failure.

Oddly enough, this failure seems to me similar to the modeling error that produced the faulty risk assessments before the 2008 financial crisis and led us into years of recession: a failure to accurately measure the amount of intercorrelation.

I see an inkling of this in statistician and political scientist Andrew Gelman’s blog post election night[3]:

Election forecasting updating error: We ignored correlations in some of our data, thus producing illusory precision in our inferences

The election outcome is a surprise in that it contradicts two pieces of information: Pre-election polls and early-voting tallies. We knew that each of these indicators could be flawed (polls because of differential nonresponse; early-voting tallies because of extrapolation errors), but when the two pieces of evidence came to the same conclusion, they gave us a false feeling of near-certainty.
In retrospect, a key mistake in the forecast updating that Kremp and I did, was that we ignored the correlation in the partial information from early-voting tallies. Our model had correlations between state-level forecasting errors (but maybe the corrs we used were still too low, hence giving us illusory precision in our national estimates), but we did not include any correlations at all in the errors from the early-voting estimates. That’s why our probability forecasts were, wrongly, so close to 100%.
Put simply, if there is either a late surge for Trump in opinion, or there was a hidden batch of Trump supporters, or Trump supporters were more likely to show up to vote than expected by the models, these errors would not be random, they would be correlated. There would be more Trump votes across nearly ALL states.  Similarly, if Hillary supporters were less likely to show up to vote than expected, this would be likely to affect nearly ALL states, not occur randomly.
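A toy simulation makes the point concrete. All the numbers here (a 1-point lead in five swing states, 2-point error spreads) are invented for illustration, not taken from any real forecast; the only claim is that a shared national error makes an across-the-board surprise far more likely than independent state errors would:

```python
import random

def upset_probability(n_sims=20_000, n_states=5, lead=1.0,
                      shared_sd=2.0, state_sd=2.0, correlated=True, seed=0):
    """Chance the trailing candidate flips ALL n_states swing states.

    True margin in each state = lead + national error + state error.
    If correlated, one national error is shared by every state; if not,
    it is redrawn per state, so total error variance is the same but
    errors are independent across states.
    """
    rng = random.Random(seed)
    upsets = 0
    for _ in range(n_sims):
        national = rng.gauss(0, shared_sd)
        if all((lead + (national if correlated else rng.gauss(0, shared_sd))
                + rng.gauss(0, state_sd)) < 0 for _ in range(n_states)):
            upsets += 1
    return upsets / n_sims

# A shared error makes a clean sweep of upsets roughly an order of
# magnitude more likely than independent state-by-state errors would.
print(upset_probability(correlated=True), upset_probability(correlated=False))
```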

Note Gelman is aware of this problem, but doesn’t feel that he adjusted completely enough for it.

So how is this related to the financial crisis? Recall those mortgages that were packaged together, each with a certain probability of failing. If each mortgage had, say, a 2% chance of failing, then a bundle of 1,000 mortgages would have about 20 failures, with a 95% chance that the number would be between 12 and 28. But that's only if the mortgage failures were independent. They aren't. Like Gelman in his note above, people knew the mortgages weren't independent and that a correlation needed to be estimated. Felix Salmon, in his article "Recipe for Disaster: The Formula That Killed Wall Street,"[4] notes that this estimation of the correlation by David X. Li using a Gaussian copula function
“looked like an unambiguously positive breakthrough, a piece of financial technology that allowed hugely complex risks to be modeled with more ease and accuracy than ever before… His method was adopted by everybody from bond investors and Wall Street banks to ratings agencies and regulators. And it became so deeply entrenched – and was making people so much money – that warnings about its limitations were largely ignored.”
“Using some relatively simple math—by Wall Street standards, anyway—Li came up with an ingenious way to model default correlation without even looking at historical default data. Instead, he used market data about the prices of instruments known as credit default swaps…. When the price of a credit default swap goes up, that indicates that default risk has risen. Li's breakthrough was that instead of waiting to assemble enough historical data about actual defaults, which are rare in the real world, he used historical prices from the CDS market.”
But there’s a problem, as Salmon notes:
“The damage was foreseeable and, in fact, foreseen. In 1998, before Li had even invented his copula function, Paul Wilmott wrote that "the correlations between financial quantities are notoriously unstable." Wilmott, a quantitative-finance consultant and lecturer, argued that no theory should be built on such unpredictable parameters. And he wasn't alone. During the boom years, everybody could reel off reasons why the Gaussian copula function wasn't perfect. Li's approach made no allowance for unpredictability: It assumed that correlation was a constant rather than something mercurial. Investment banks would regularly phone Stanford's Duffie and ask him to come in and talk to them about exactly what Li's copula was. Every time, he would warn them that it was not suitable for use in risk management or valuation.

“In hindsight, ignoring those warnings looks foolhardy. But at the time, it was easy. Banks dismissed them, partly because the managers empowered to apply the brakes didn't understand the arguments between various arms of the quant universe. Besides, they were making too much money to stop.”
So, in both the election forecasting and in the financial forecasting of those tranched mortgage securities we have a problem in not accurately understanding the correlations (and the stability of the correlations) between events. There are a lot of differences, of course, but still those high level similarities.
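For the mortgage side, the point can be checked with elementary probability. This is a toy two-state economy, not Li's copula model; the 1%/8% default rates and the 25% downturn chance are invented purely for illustration:

```python
import math

n = 1000

# Independent case: Binomial(1000, 0.02).
p = 0.02
mean_ind = n * p                      # expected defaults: 20
sd_ind = math.sqrt(n * p * (1 - p))   # ~4.4, so ~95% of outcomes in ~12..28

# Correlated case (toy): one common economic shock. In a downturn
# (25% chance) every mortgage's default probability jumps to 8%;
# otherwise it is 1%. The mean barely moves, but the standard
# deviation explodes because defaults now move together, and the
# 12-to-28 band badly understates the risk.
p_good, p_bad, prob_bad = 0.01, 0.08, 0.25
mean_corr = n * ((1 - prob_bad) * p_good + prob_bad * p_bad)
var_between = prob_bad * (1 - prob_bad) * (n * p_bad - n * p_good) ** 2
var_within = ((1 - prob_bad) * n * p_good * (1 - p_good)
              + prob_bad * n * p_bad * (1 - p_bad))
sd_corr = math.sqrt(var_between + var_within)

print(mean_ind, round(sd_ind, 1))              # 20.0 4.4
print(round(mean_corr, 1), round(sd_corr, 1))  # 27.5 30.7
```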

And there is the human tendency toward overconfidence in predictions, which has been amply demonstrated many times[5], even among readers of the Messy Matters blog (who tend to be professional statisticians) taking part in a survey called “Are You Overconfident?”[6]

“The bad news is that you’re terrible at making 90% confidence intervals. For example, not a single person had all 10 of their intervals contain the true answer, which, if everyone were perfectly calibrated, should’ve happened by chance to 35% of you. Getting less than 6 good intervals should, statistically, not have happened to anyone. How many actually had 5 or fewer good intervals? 76% of you.”
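The quoted percentages are easy to verify with the binomial distribution: for ten independent 90% intervals, the chance that all ten contain the truth is 0.9^10 ≈ 35%, and the chance of five or fewer is well under 1%:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k of n independent intervals contain the truth)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.9  # ten 90% intervals, perfectly calibrated

p_all_ten = binom_pmf(10, n, p)  # simply 0.9**10
p_five_or_fewer = sum(binom_pmf(k, n, p) for k in range(6))

print(round(p_all_ten, 3))        # 0.349 -> the "35%" in the quote
print(round(p_five_or_fewer, 4))  # 0.0016 -> "should not have happened"
```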

So, in both the financial collapse and in the 2016 election predictions we have an inability to accurately understand the correlation between events, combined with the bias toward overconfidence that seems to be a persistently human trait. 

Regardless of our post hoc understanding, we still had a deep recession after the financial collapse, and we will still have Donald Trump as president. So, there may be understanding, but there will also be pain. Can we not have a gain in learning without pain?

I’m trying not to think about the more complex situation of climate models.


Fun Fact: Here's a surprising number. I downloaded the polls data from FiveThirtyEight (polls-only forecast), and only looked at polls since September 1. The total sample size in these polls? 3,155,370 (this includes polls done in states, mostly in swing states). That's a truly staggering number. So, while individual polls have a sampling margin of error, the error in polling as a whole is due to nonsampling errors (most commonly summarized under the term "biases").
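For scale, here is the worst-case sampling margin of error that pooled sample would imply if it behaved like one big random sample (which, of course, it doesn't; the point is just how negligible sampling error becomes at that size):

```python
import math

n_total = 3_155_370  # pooled sample size across post-Sept-1 polls

# Worst-case (p = 0.5) 95% sampling margin of error for the pooled sample.
moe = 1.96 * math.sqrt(0.5 * 0.5 / n_total)
print(f"{100 * moe:.3f} percentage points")  # 0.055 percentage points
```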



[1] Josh Katz, The Upshot, New York Times, 2016 Election Forecast: Who Will Be President, updated Monday Nov 7, 2016 6:58a.m.
[2] http://www.people-press.org/2012/05/15/assessing-the-representativeness-of-public-opinion-surveys/ accessed November 9, 2016. “Assessing the Representativeness of Public Opinion Surveys“ (May 16, 2012 report)
[3] Andrew Gelman, “Election forecasting updating error: We ignored correlations in some of our data, thus producing illusory precision in our inferences,” Statistical Modeling, Causal Inference, and Social Science (blog), November 2016.
[4] Felix Salmon, “Recipe for Disaster: The Formula That Killed Wall Street,” Wired, February 23, 2009, https://www.wired.com/2009/02/wp-quant/ , accessed November 9, 2016.
[5] Mannes, A. and Moore, D. (2013), I know I'm right! A behavioural view of overconfidence. Significance, 10: 10–14. doi:10.1111/j.1740-9713.2013.00674.x
[6] Daniel Reeves “Are You Overconfident?” Messy Matters (blog) Sunday, February 2010 http://messymatters.com/calibration/ and results “Yes, You Are (Maybe) Overconfident”, Wednesday, March 31, 2010. http://messymatters.com/calibration-results/ (accessed November 9, 2016)

Monday, October 24, 2016

Is Trump cleverly setting us up for fraud?

Trump's gotten a lot of heat for saying that the election is rigged and he won't necessarily accept the outcome.

But, just to engage in a bit of conspiracy speculation -- what if he's really setting us up?

The media has responded to Trump by arguing that there's practically no voter fraud. But they are talking there about voter fraud due to dead people voting, people voting twice, illegal aliens voting, etc.

But in the history of American elections, it's hard to argue there's not reason to be careful.


  1. There's 2000 Bush-Gore. That's not really fraud so much as bad ballot design, ambiguous situations (ballots from troops arriving too late, but not due to the fault of the troops), and conflict of interest (Katherine Harris).
  2. There's 1960, with Daley holding back Chicago ballots until very, very late -- with some speculation this was to be sure JFK carried Illinois over Nixon.
  3. There's the election of Lyndon Johnson to the Senate in 1948, pretty clearly due to ballot box stuffing.


So, despite the media's current insistence that there's no way the 2016 election could be rigged, it's hard to feel like it's impossible. Particularly since we've had Russian hacking of the DNC, and this week's large DNS attack on a variety of websites.

So, what if Trump is actually setting us up?  What if the Russians (or some other nefarious group) have figured out a way to change the counts in key precincts in enough swing states to get a narrow Trump victory? It's hard to prove fraud; easy to suspect it. And if it is cyber-rigging, we probably won't know exactly who did it -- just like we are pretty sure the Russians hacked the DNC, but don't seem to have a good idea exactly which Russians.

But, if fraud is suspected, we'll have to have all those media "experts" eat their words and do a 180 degree turn.  Suspicion will be both rampant and understandable.

Is this Trump's game? Does he know the Russians / Chinese / etc. will be hacking the election, so he's setting up all these "election is rigged" statements so that he can get nearly everyone else in the U.S. telling us (incorrectly!) that this is impossible?

And, if it is Putin, he doesn't actually have to get Trump in office to succeed. He succeeds if U.S. political institutions are thrown into disarray, which they surely would be if we have something bizarre.

So, what might be a "bizarre" conclusion? Fraud is widely suspected but can't be proven. Electors desert Trump, and some sort of compromise is worked out (Paul Ryan as president?). Trump supporters cry foul.

I don't think that's the case. I think Trump's ego just doesn't allow him to think he could lose unless the other side is unfair.  But, still, it's been a very strange election cycle so far, so it's hard for me to 100% rule this out.

Monday, October 10, 2016

On women and the 2016 election

A few points after listening to the 2nd presidential debate, October 9, 2016.

1. These are mostly thoughts about the harassment of women. I'm going to use the term harassment here, but clearly some of this behavior goes beyond harassment. Let me state at the outset that I am not a woman. I have 7 aunts (1 uncle), 4 sisters (no brothers), 1 wife, 2 daughters (no sons) -- but this is NOT the same as being a woman.

2. Donald Trump's relationships with women clearly show a lack of respect. The affairs, the locker room talk (which seems to be more than just "talk"), the need to have a trophy, arm-candy wife, all indicate objectification.   Is this the type of person we want as president? No.

3. What kept the debate from being entirely X-rated was likely the presence of four women in the audience who've accused Bill Clinton of various improprieties. There's an awful lot of smoke there, and at least some fire with the settlement with Paula Jones and the Monica Lewinsky affair.

4. I'm not going to defend Bill Clinton on this. I voted for a 3rd party candidate in 1996 because I didn't think Bill Clinton was trustworthy, and that was before Monica. I can't, and I won't try.

5. Bill's not running for president; Hillary is. But the allegations that Hillary was in the lead group of trashing the female accusers seem credible.  Was she doing the noble thing of standing by your man or making a cold political calculation for the two of them? And in that cold political calculation, was she selling out her sisters for their political future?

But now I'm getting to the part I really wanted to get to.

6. Do we have a syndrome here? Trump has responded that he's had a fair number of female executives. On Frontline (I think; somewhere on PBS) they interviewed a woman who Trump put in charge of managing the contractors on Trump Tower in New York City, an unusual job to be given to a woman, particularly at that time.  Bill Clinton appointed more women to cabinet posts and judgeships than his predecessors. And I am reminded of Senator Robert Packwood, who had a strong reputation for years for being a supporter of women's issues in the U.S. Senate, before his career collapsed in a harassment scandal.

For years, Packwood, the embodiment of a quirky Oregon species, the socially progressive Republican, has been a strong supporter of women's causes. A leader of the abortion-rights brigades, he introduced the first Senate bill to legalize abortion in 1970; a decade later, after Bill Bradley and Daniel Patrick Moynihan demurred, he led a lonely filibuster against his own party's bill to make abortion the equivalent of murder. He has also regularly hired women to run his campaigns and to serve as his top aides.
But after the first wave of news accounts, many more women came forward with accusations of sexual misconduct, raising the total to at least 24.
http://www.nytimes.com/1993/08/29/magazine/the-trials-of-bob-packwood.html?pagewanted=all 
 7. So what's the nature of this syndrome? What do these cases have in common?

  • Man with charm, good looks, money/power.
  • Man either thinks because he's supporting women in some areas (women's issues) he can harass them in others? 
  • Or maybe, man is trying to balance out the sexual harassment by promoting some women to responsible positions?
  • We might call these men with charm, good looks, and money/power alpha males.  Which means, culturally, that in order to get sex they wouldn't need to resort to bad behavior. So why do it?  Is it perhaps a desire for conquest? And is that conquest more satisfying if it's over a member of a class of people (women) that you've ceded some power to? (sort of like beating another pickup team in basketball after you've given them a good player to make the game more competitive?) 
  • That line of reasoning seems to work a bit, but we have to recognize that many of the victims (e.g. all those women accusing Bill Clinton) were not powerful women at all. So I'm left with that either/or argument in italics above.

Friday, September 23, 2016

Is 99.8% accuracy good enough?

There's an article in the Wall Street Journal about the complications of the government declaring you dead, when you aren't. http://www.wsj.com/articles/if-the-government-thinks-youre-dead-thats-really-hard-to-fix-1474474793?mod=e2fb  I couldn't read the article because it was behind a paywall, but did read a couple of other articles about how terrible this can be.

https://theawl.com/what-happens-when-the-government-thinks-youre-dead-27326ef102ef#.zb35rfto1

http://fusion.net/story/113131/why-you-really-dont-want-to-end-up-in-the-governments-death-master-file/

From the fusion.net story:

During his testimony, SSA’s inspector general, Patrick O’Carroll said there is still room for improvement in addressing DMF errors, noting that in a 2008 report they’d found more than 20,000 individuals over a three year period who had been incorrectly declared dead.
“Erroneous death entries can lead to benefit termination—and government underpayments—and cause severe financial hardship and distress to affected individuals,” he said.
The WSJ story evidently includes this quote: "The VA said its accuracy is around 99.8% ... It might not seem like a big problem statistically but it's happening more often—and it's a huge problem if you are not really dead."

Is 99.8% good enough?

99.8% doesn't seem very impressive to me. After all, it's not an ambiguous situation. You're either dead or not. It's not like trying to figure out whether you are still a practicing member of a religion, for example.

My friend responded: "One keystroke and the person is deceased. Not unlike a drone pilot except the former is reversible."

Both my friend and I have spent years in marketing. 99.8% would be impressive in marketing (and, in fact, is far above the typical accuracy in, say, target marketing).  But drones? Is that a good comparison?

Drones operate in hostile territory during war. Let's take O'Hare airport, instead. In one month (July 2016) there were over 76 thousand takeoffs and landings. At an error rate of 2 per thousand, that would be about 150 crashes at O'Hare per month. 7.4 million passengers -- that would be almost 15,000 casualties per month. [actually, thinking about numbers like this makes me say a prayer of thanks for the entire airline industry.]
http://www.flychicago.com/.../0716%20ORD%20SUMMARY.pdf
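The back-of-envelope numbers check out (figures rounded from the O'Hare monthly summary cited above):

```python
error_rate = 0.002  # 99.8% accuracy -> 2 errors per 1,000

operations = 76_000     # O'Hare takeoffs + landings, July 2016 (approx.)
passengers = 7_400_000  # O'Hare passengers that month (approx.)

print(round(operations * error_rate))  # 152 -> the "about 150" crashes
print(round(passengers * error_rate))  # 14800 -> the "almost 15,000"
```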

So, clearly some systems both involve the government and are substantially more than 99.8% accurate.  If being declared dead involves a single random keystroke, shouldn't we change that to require a bit more? After all, usually when you want to collect an estate's assets you must provide a death certificate.

Tuesday, September 20, 2016

Review of "Our Mother" in AVClub

Nice review of Luke Howard's book, Our Mother, in AVClub today:

http://www.avclub.com/article/mental-illness-fluorescent-storm-challenging-our-m-242253?utm_source=facebook&utm_medium=ShareTools&utm_campaign=default 

The conceit is similar to that of Pixar’s Inside Out, but Howard’s metaphor lacks the cloying and reductive simplicity of the film and features the incompleteness and pervasive unknowability that makes a rendering of mental illness sensible without being insulting. 

Sunday, September 18, 2016

Long term care insurance: post #6. More people agreeing with me.

I previously had a series of posts on long term care insurance.

http://www.truncatedthoughts.com/search?q=long+term+care+insurance 

To summarize these posts: I'm skeptical that this insurance generally makes any sense, and skeptical of government agencies' attempts to push it.

Today, I find further support from an article in Decision Analysis. Here's the abstract:

The purchase of long-term care (LTC) insurance is a difficult lifetime choice made in the face of highly uncertain risks, including mortality, morbidity, timing and length of LTC, and portfolio investment risk. Many individuals do not know how to think about this decision properly and, in the face of too much anecdotal and too little objective information, will not proactively decide.

We used Monte Carlo simulation modeling with detailed, experience-based distributions for LTC uncertainties and their correlations to project investment growth to death given alternative levels of LTC insurance. Using constant risk aversion, we calculate certainty equivalents for the resulting distributions of final holdings at death. Decisions were separated for male and female individuals and group and individual market insurance opportunities. Sensitivity analysis was conducted varying age, cost of coverage, starting investment amount, risk tolerance, return on portfolio investment, inflation, and length of LTC coverage.

Optimality results suggest low levels of coverage or no insurance, [emphasis added] with higher use of insurance only for individuals who are young, have low risk tolerance, low starting portfolio amounts, or combinations of these characteristics. While the contribution of this work is to assist individual decision making, it will also be informative to policy makers and insurance companies.

Long-Term Care Insurance Decisions
Permalink: http://dx.doi.org/10.1287/deca.2016.0332 
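As a rough illustration of the certainty-equivalent machinery the abstract describes: the sketch below uses invented final-wealth distributions and exponential (constant-risk-aversion) utility standing in for the paper's far more detailed model. The pattern it shows matches the abstract's conclusion: insurance only wins at low risk tolerance.

```python
import math
import random

def certainty_equivalent(outcomes, risk_tolerance):
    """CE under exponential utility u(x) = 1 - exp(-x / risk_tolerance):
    the guaranteed amount worth the same as the risky distribution."""
    mean_u = sum(1 - math.exp(-x / risk_tolerance)
                 for x in outcomes) / len(outcomes)
    return -risk_tolerance * math.log(1 - mean_u)

rng = random.Random(1)
N = 100_000
# Toy final-wealth distributions in $000s (made-up numbers): the insured
# pays steady premiums (lower mean, low spread); the uninsured keeps the
# premiums but faces a 30% chance of a 300k long-term-care bill.
insured = [rng.gauss(900, 100) for _ in range(N)]
uninsured = [rng.gauss(1000, 100) - (300 if rng.random() < 0.3 else 0)
             for _ in range(N)]

for rt in (200, 2000):  # risk tolerance in $000s: low, then high
    print(rt, round(certainty_equivalent(insured, rt)),
          round(certainty_equivalent(uninsured, rt)))
# With low risk tolerance the insured distribution has the higher CE;
# with high risk tolerance the uninsured one does.
```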

Saturday, September 10, 2016

Another nice review of Luke Howard's book, in Slate

Son-in-law Luke Howard's first book just got a nice review in Slate:

Howard wrote and drew Talk Dirty to Me, and it feels refreshing on several levels. It’s sexy without being vulgar, curious without being exploitative. It gives its heroine agency over her choices while still recognizing the precarious hold she has on her own twentysomething self. And Howard’s cartooning is endlessly inventive—the traditional six-panel page bends, slows down, speeds up, stops on a dime, and then rockets into a new surprising place. Howard has a remarkable knack for bringing abstractions to visceral life on the page, and he makes desire both comic and erotic with a quavering line and a vivid pink-and-blue palette.

More on Amazon: https://www.amazon.com/dp/1935233378/?tag=slatmaga-20


Ant Farm companies

I used to work for a company that got bought very cheaply by an entrepreneur, who proceeded to make a lot of changes in the company.

  • He owned an outsourcing company in India, so much of our operations and programming went over to the Indian company.  This gave the Indian company more scale, and the lower costs meant our company went from money-losing to money-making.
  • During our money-losing phase, layoffs were continuous, and with all the outsourcing they continued.  We went from about 4,400 employees to fewer than 1,500 in five years (approximate numbers based on repeatedly dumping the company phone directory). From four buildings and part of a fifth in Chicago, we were down to one by the time he sold the company.
He already had a fortune, and was in his sixties. We wondered why he kept working so obviously hard. He wasn't at all a hands-off investor, but clearly knew a lot of details down in the organization.

I used to explain it by saying that the company was his ant farm.
How was the company like an ant farm?

He watched us for a bit, but then
  • couldn't resist taking a magnifying glass to some of the ants and burning them (layoffs).
  • when he got bored, it would be like taking a stick and stirring up the ant farm (reorganizing, with again more layoff casualties).
  • eventually got tired of us and got rid of us (sold us off).
So, we're a bit like a toy. Running companies was far more interesting than, say, playing golf. As to why he was so driven to make more money, we might look at this Barney and Clyde cartoon about a man who owns a drug company:


Now, the entrepreneur saved the company, which was losing so much money we would probably have disappeared. And despite feeling that every year would be the year I would finally get laid off, my career actually did quite well during his ownership.

It's a fair question to ask what "saved the company" means. Saved for whom? Almost all the employees are different. We're in mostly different physical locations. Our customers could have bought our services for others.  A corporation is really not a person, and has no feelings.

As for the entrepreneur, he made enough money to make it back on the Forbes 400 list by selling the outsourcing firm (now with larger scale) and selling us.

The new owners are bloodless guys from the East Coast. Unlike the entrepreneur, who got involved in as many aspects of the business as he could, they seem to have little interest in what we actually do. I met the entrepreneur a number of times in meetings, sometimes one on one. I've only seen the new guys from the back of the auditorium on rare occasions (and usually by audio or videoconference).

It's hard for me to believe I feel nostalgic for the crazy, disruptive years of the entrepreneur's reign, when he said that within five years everyone who wasn't in direct contact with clients would have their job outsourced.  (I didn't have direct contact.)  But I miss the raw energy, the joy of creation, that he embodied.



Basket of deplorables is a deplorable comment

http://www.usatoday.com/story/news/politics/onpolitics/2016/09/10/clinton-trump-supporters-deplorable/90182922/


That's the way to follow up that positive convention with a positive campaign!
HRC usually speaks from a script -- did nobody read it and think "basket of deplorables" is going to make a deplorable sound bite? 


Wednesday, August 31, 2016


Nice review of Luke Howard's new book

Nice review of son-in-law Luke Howard's newest book (Our Mother, reviewed at the end of this short video).
https://twitter.com/jmaq/status/771059969844207616

More information about this book, which has been getting good notices, is here:
http://retrofit.storenvy.com/collections/29642-all-products/products/17488829-our-mother-by-luke-howard

Not Quite Used to Fame


But Luke and Abby don't have this authorship thing quite down yet. Abby posted this:

So after the con [Comics Convention in Burlington VT, where Luke did a book signing for his previous book, Talk Dirty to Me], Luke and I went to Phoenix books in Burlington to check it out, and we got excited to see his book in the window. When we went inside we looked for his book but it wasn't in the graphic novel section. Luke (rightly) pointed out they must have all the copies at the con, but I thought it might be somewhere else tucked away due to some of the more explicit content. 

While Luke was in the stacks, I decided to ask if they had it. They said they didn't, they were all at the con, and they seemed embarrassed. I couldn't clarify that it's okay, I'm the author's wife, because Luke was mortified I was even asking the staff. Cut to this morning and Luke is texting one of the staff of Phoenix books who is at the con. 

Luke (to me): "apparently some people stopped by the store last night looking for me to sign a book"
Me: "woah! That's awesome"
Luke: "..."
Luke: "That's us. We did that."
Me: "oh crap. Tell her that was us."
Luke (after typing): "she says 'great, thanks for throwing my staff into a tizzy last night.' Abby... Why... Why did you ask..."
Me: "hah. Hah. ....mistake...."

Doesn't seem like a major sin. As I wrote:
 I don't understand why you fessed up. You actually think those reviews on Yelp and Amazon are all written by satisfied customers, without any of them being from the owner or staff? Even big writers like Stephen King do this. Now, a big writer like Stephen King can't go personally into a bookstore to see if they carry "The Shining", but he has enough money to hire minions. 

There's also this article by Brent Underwood, "Behind the Scam: What Does It Take to Be a 'Best-Selling Author'? $3 and 5 Minutes," detailing how to game Amazon's system (or, maybe, showing how ridiculous the notion of a best-seller on Amazon is).
One of Luke's fellow authors at the Center for Cartoon Studies (a man who's farther along in his career) wrote: "Oh, God, LET THEM THINK IT WAS A CUSTOMER! NEVER ADMIT TO THIS KIND OF THING!"

Monday, August 22, 2016

Protected Bike Lanes: North American Evidence?

This posting is just a place to make my critique publicly available to those locally who've requested it. It probably won't be of general interest.

The start

Mike,

If you have a chance could you take a look at these two reports and let me know if the statistical methodology looks right. I don’t need a full review or anything like that. I just don’t have any way of knowing if they really know what they are talking about. I believe that it is all the same data that both reports are working from.


When I inquired of ActiveTrans [Chicago-based Active Transportation Alliance, a bicycling / mass transit / pedestrian advocacy group] if there were peer reviewed studies supporting the safety of protected bike lanes, they sent me a link to the People for Bikes web page: http://www.peopleforbikes.org/statistics

When I looked at what they had on the safety of protected bike lanes I found lots of stuff about people “feeling” safer, and puff pieces and memos by advocates, but this single Canadian study seemed to be the only thing that approached rigor, I think.


Full citations for this study:

Harris, M. A., Reynolds, C. C. O., Winters, M., Cripton, P. A., Shen, H., Chipman, M. L., … Teschke, K. (2013). Comparing the effects of infrastructure on bicycling injury at intersections and non-intersections using a case–crossover design. Injury Prevention, 19(5), 303–310. http://doi.org/10.1136/injuryprev-2012-040561

Teschke, K., Harris, M. A., Reynolds, C. C. O., Winters, M., Babul, S., Chipman, M., … Cripton, P. A. (2012). Route Infrastructure and the Risk of Injuries to Bicyclists: A Case-Crossover Study. American Journal of Public Health, 102(12), 2336–2343. http://doi.org/10.2105/AJPH.2012.300762


My Comments

Here are some comments. Note that while I am a statistician, I work in marketing research, not transportation analysis.

This is really one study, with different parts of the analysis published in AJPH (2012) or BMJ (2013).
Case crossover is a very reasonable study design, although the relative risk factors in this case are less stable than they seem. A logistic regression model (equation 1) seems reasonable.

This is an exploratory study; note that in table 4 of AJPH they report 14 significance tests, two ways (unadjusted and adjusted), at the 5% level.  Five of the 14 confidence intervals show significance (unadjusted).  The results are about the same for the adjusted (which is good), so to simplify the discussion I'm just going to consider the unadjusted.

The finding that jumps out at you is that 0.12 odds ratio (OR) for cycle tracks, an 88% reduction. That seems huge, but we need to look a bit more carefully at this.

1.       First of all, it's NOT an 88% reduction. It's an 88% reduction in the OR relative to the reference condition (major street, parked cars, no bike infrastructure).  But that particular condition is relatively dangerous. It's appropriate to run the study that way (you usually pick the largest group as the reference condition), but it's easy to say "88% reduction" while forgetting it's NOT an 88% reduction overall, just a reduction relative to pretty much the most dangerous condition in their data.
For example, I might be only 10% more polite than the average person, but I’m 88% more polite than Donald Trump.
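The arithmetic behind point 1 can be sketched in a few lines of Python. The 0.12 OR versus the reference condition is from the study; the 0.50 OR for the comparison condition is a made-up number purely for illustration:

```python
# The 0.12 OR vs the reference condition is from the study's table 4;
# the 0.50 OR for a comparison condition is HYPOTHETICAL, for illustration.
or_cycle_track_vs_ref = 0.12   # cycle track vs major street w/ parked cars
or_local_street_vs_ref = 0.50  # hypothetical: local street vs same reference

# The "88% reduction" is only relative to the (dangerous) reference condition:
reduction_vs_ref = 1 - or_cycle_track_vs_ref

# Relative to the hypothetical local street, the reduction is smaller:
or_cycle_track_vs_local = or_cycle_track_vs_ref / or_local_street_vs_ref
reduction_vs_local = 1 - or_cycle_track_vs_local

print(f"vs reference condition: {reduction_vs_ref:.0%} reduction")
print(f"vs hypothetical local street: {reduction_vs_local:.0%} reduction")
```

Same cycle-track OR, but the headline "reduction" shrinks from 88% to 76% as soon as you change the comparison condition.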

2.       The confidence intervals for all of the infrastructure options overlap. There's no statistical difference between any of these:
a.       Local street, no bike infrastructure
b.      Local street, designated bike route
c.       Local street, designated bike route with traffic calming
d.      Off street, sidewalk
e.      Off street, multiuse path paved
f.        Off street, multiuse path unpaved
g.       Bike path
h.      Cycle track (i.e. protected bike lane)

This is because the confidence intervals overlap. The tests show that some of these are different than the reference condition (major street, parked cars, no bike infrastructure), and some are not.  It is true that this overlapping intervals method I’m using is only approximate, but the overlap is pretty large.  Interpreting these non-differences as differences is a common statistical reasoning error. See, for example,
Gelman, A., and Stern, H. (2006), "The Difference Between 'Significant' and 'Not Significant' is Not Itself Statistically Significant," The American Statistician, 60, 328–331.
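To make point 2 concrete, here's a minimal Python sketch of the usual approximate check: back out the standard error of log(OR) from a reported 95% CI, then test the difference of the two log odds ratios. The ORs and intervals below are hypothetical, not the study's actual numbers:

```python
from math import log, sqrt

def se_from_ci(lo, hi, z=1.96):
    """Back out the standard error of log(OR) from a 95% confidence interval."""
    return (log(hi) - log(lo)) / (2 * z)

def z_for_difference(or1, ci1, or2, ci2):
    """Approximate z-statistic for the difference of two log odds ratios."""
    se1, se2 = se_from_ci(*ci1), se_from_ci(*ci2)
    return (log(or1) - log(or2)) / sqrt(se1**2 + se2**2)

# HYPOTHETICAL ORs and 95% CIs for two infrastructure types: both are
# "significant vs the reference" (their CIs exclude 1), yet they are
# not significantly different from each other.
z = z_for_difference(0.12, (0.02, 0.70), 0.40, (0.20, 0.80))
print(f"z = {z:.2f}")   # |z| < 1.96, so no significant difference
```

This is the Gelman and Stern trap in miniature: "significant" and "not significant vs reference" labels invite comparisons the data don't support.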

3.       The cycle track difference is pretty frail.  From table 4, there are 2 accidents on cycle tracks and 10 non-accidents on cycle tracks (the control observations). While they fit a logistic model using the overall data, we can best see why this is a frail result by considering it as a binomial, like a coin flip.  Because of the way the case-crossover design works, under the null we would expect the same number of accidents and non-accidents on cycle tracks, i.e. a 50-50 split.
a.       With 12 observations and a 50% expectation, we would expect 6 and 6, but just like a coin flip we would probably see a result that varied.  A 2-10 split is (as reported) statistically reliable, but just barely so. 3-9 would not be (one more accident). 2-9 would not be (one fewer control on a cycle track).  So, if we change ONE OBSERVATION in either direction, we have NO STATISTICALLY SIGNIFICANT EFFECT AT ALL for cycle tracks. 
b.      Since the control segment reflects a random choice by the investigators, it’s just luck that they picked 10, rather than 9, control segments on cycle tracks.  (In fact, doing a rough calculation there’s about a 45% chance of picking 9 or fewer controls and getting no significant effect at all.)
c.       In short, there’s a good chance this 88% reduction has some type M error in it (actual magnitude, if we were to do a bunch of similar studies, would be far less than 88%).  Again, I want to emphasize that this does NOT mean the investigators did anything wrong in their reporting or analysis.
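The coin-flip arithmetic in point 3 can be checked with an exact two-sided binomial test. This is a sketch of my approximation, not the model the paper actually fit:

```python
from math import comb

def binom_two_sided_p(k, n):
    """Exact two-sided p-value for k successes out of n under p = 0.5.
    With p = 0.5 the distribution is symmetric, so the two-sided
    p-value is twice the lower tail."""
    lower_tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * lower_tail)

# 2 accidents vs 10 controls on cycle tracks: just barely significant.
print(binom_two_sided_p(2, 12))   # ~0.039, under 0.05
# One more accident (3-9 split): no longer significant.
print(binom_two_sided_p(3, 12))   # ~0.146
# One fewer control (2-9 split): no longer significant.
print(binom_two_sided_p(2, 11))   # ~0.065
```

Moving a single observation in either direction pushes the p-value past 0.05, which is the sense in which the result is fragile.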

4.       There are a couple of other quirks in this study. The accident risk was NOT higher at intersections (OR = 0.96, nonsignificantly lower), which I don't think is a normal finding. I'm used to thinking intersections are much, much more dangerous than non-intersections; I'm pretty sure that was John Forester's analysis, but I no longer have a copy of Effective Cycling.
Note that in the BMJ article they analyze intersections and non-intersections separately.

The BMJ article uses the same data but different controls; in fact, for some cases they use multiple controls for the same case. It's not clear to me how they adjusted for this non-independence, and it makes the approximate binomial calculations harder for me to do. (Not saying they did anything odd, just that it's not clear how they handled the cross-case dependence.)  In the BMJ article, cycle track is statistically significant for non-intersections, but not for intersections. But there aren't any cycle track accidents at intersections here, so we're out of data. 

Overall: it’s a pretty good study, and seems to have paid careful attention to definitions, etc.  But it’s only one study, and the cycle track result seems to depend on the choice of a single, random control case.  Not much of a platform to spend millions of infrastructure money on, if that’s all we’ve got.

And, given the small data size, they obviously couldn’t distinguish the type of barrier used on the cycle track (curb, bollards, or parked cars).

SLIGHT ADDENDUM:  Instead of looking at the binomial as 2-10 versus a 6-6 expectation, we could compare the observed proportion of accidents (2 out of 690) with the observed proportion of controls (10 out of 690). In this case, 2-9 is still significant and 2-8 is not, so this depends on a change in the random picking of 2 controls, not one.  That's about a 33% chance, not a 45% chance.  But the general notion that this is a fragile result is still true, because it depends on a carefully constructed, but still small, data set.

Evanston


I think part of the impetus for asking me to look at this was the current controversy over a protected bike lane on Dodge in Evanston, IL.

In this case, a protected bike lane (with parking on the left, curb on the right) has replaced a regular bike lane (with traffic on the left, parking on the right).  I have not been on Dodge since the change, and can't comment on this particular case.

Saturday, August 20, 2016

Basic Laws of Human Stupidity

Randy Cassingham of This is True included this recently:

In recent reading, I’ve stumbled on a paper by Carlo M. Cipolla. An Italian, Cipolla taught economic history at the University of California at Berkeley, and proposed “The [Five] Basic Laws of Human Stupidity”:
  1. Always and inevitably everyone underestimates the number of stupid individuals in circulation.
  2. The probability that a certain person will be stupid is independent of any other characteristic of that person.
  3. A stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses.
  4. Non-stupid people always underestimate the damaging power of stupid individuals. In particular, non-stupid people constantly forget that at all times and places and under any circumstances to deal and/or associate with stupid people always turns out to be a costly mistake.
  5. A stupid person is the most dangerous type of person.
“Our daily life is mostly made of cases in which we lose money and/or time and/or energy and/or appetite, cheerfulness and good health because of the improbable action of some preposterous creature who has nothing to gain and indeed gains nothing from causing us embarrassment, difficulties or harm,” Cipolla wrote in the explanation of the 3rd law. “Nobody knows, understands or can possibly explain why that preposterous creature does what he does. In fact there is no explanation — or better, there is only one explanation: the person in question is stupid.”

The fifth law has a corollary: A stupid person is more dangerous than a bandit. (Because a thief at least has motives, even if you don’t agree with them.)

It’s all spelled out in his short paper: The Basic Laws of Human Stupidity
It's a fun essay, not to be taken too seriously.  But there is a puzzle here with the second law. Cipolla writes:

Whenever I analyzed the blue-collar workers I found that the fraction s of them were stupid. As s's value was higher than I expected (First Law), paying my tribute to fashion I thought at first that segregation, poverty, lack of education were to be blamed. But moving up the social ladder I found that the same ratio was prevalent among the white collar employees and among the students. More impressive still were the results among the professors. Whether I considered a large university or a small college, a famous institution or an obscure one, I found that the same fraction s of the professors are stupid. So bewildered was I by the results, that I made a special point to extend my research to a specially selected group, to a real elite, the Nobel laureates. The result confirmed Nature's supreme powers: s fraction of the Nobel Laureates are stupid. 
What? Some Nobel Laureates are stupid? There could be at least two factors in play here.

The first is that we humans have areas of expertise and areas of non-expertise, which is where we are stupid. I didn't say ignorant, I said stupid.  I am ignorant of the inner workings of a modern car engine. I would be stupid if I tried to make a major repair myself.

But smart people make this mistake all the time. Thinking they are smart, they express opinions in areas far outside their area of study. Academics, who tend to be experts in narrow areas, are particularly prone to this.

The second is Laurence J. Peter's Peter Principle: people rise to their level of incompetence.

A junior employee does a superb job, so they get promoted to a job they do very well. So they get promoted to a job  they do well. Then they are promoted to a job they are competent at. Finally, based on that long track record, they are promoted to a job that is beyond their capabilities.

So, you start out smart in an organization, but can easily end up incompetent, which is so close to stupid you can't tell much difference.

Thus, with the help of these two phenomena, we can see how there can be stupid people distributed everywhere.