Data Driven Journalism Fails to Live Up to the Hype


When Nate Silver’s weighted polling average model accurately predicted 49 of 50 states in the electoral college in 2008, he became a household name overnight. His site fivethirtyeight.com, launched earlier that same year, became the flagship of a new trend: “data driven journalism.”

In 2012, his critics charged that his models were wrong because the polls he relied on were skewed in favor of Democrats. The actual election proved that his critics were wrong. But it didn’t necessarily prove that Silver was right. The thing is, there were good reasons for the critics to believe that the polls were skewed. They put forth historical models showing how and why the polls probably were skewed, and those models made sense. But making sense isn’t the same thing as being right, and when election day dawned, we learned that their model was broken.

The problem for Silver and other data driven journalists is that their models aren’t right, either. I wrote a few months ago that Silver’s modeling method would eventually fail, and fail spectacularly. By any honest measure, it has done so in this election cycle. His “polls plus” model, billed as the newer, better, more accurate model, simply wasn’t. It actually performed worse in this cycle than his “polls only” model, worse than simple weighted polling averages (such as the RCP average), and even worse than a fictional pundit.

This was destined to happen eventually.

Mr. Silver made three cardinal mistakes.

First, he mistook the map for the territory. He built a model of the past. His model fits the past with high accuracy. But the past is not the future, and the model is not reality. He found variables with high correlation. Those variables seemed to have a plausible causal story behind them. So he made a model out of them. But as we noted above, making sense isn’t enough to be right.
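To make that overfitting trap concrete, here is a minimal Python sketch. The data is entirely invented and this resembles nothing in Silver’s actual model; it just shows how a model can fit the past almost perfectly and still fail on the future:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "past" data: a noisy linear trend.
x_past = np.linspace(0, 10, 12)
y_past = 2.0 * x_past + rng.normal(0, 3.0, size=x_past.size)

# A degree-9 polynomial fits the past almost perfectly...
overfit = np.polynomial.Polynomial.fit(x_past, y_past, deg=9)
print("in-sample error:", np.abs(overfit(x_past) - y_past).mean())

# ...but extrapolates wildly on "future" points it never saw.
x_future = np.linspace(10.5, 12, 4)
y_future = 2.0 * x_future  # the true underlying trend
print("out-of-sample error:", np.abs(overfit(x_future) - y_future).mean())
```

High correlation with the past, in other words, is cheap. A model only earns its keep out of sample.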

Second, statistical models of the sort Silver builds simply can’t account for Black Swan events such as Donald Trump’s candidacy. Yet the one thing we can say with certainty about Black Swan events is that they will eventually happen. Donald Trump happened, and Silver’s model couldn’t account for him.

Third, Silver let his own opinions and feelings get in the way. He was accused of this in 2012, but the election results vindicated him. This time, Silver simply couldn’t accept that his own model was actually wrong. It happens to the best of us. But it hit Silver hard in 2016.

To be somewhat fair to Mr. Silver, he has acknowledged that this cycle threw his model off. On the other hand, his model is off by far more than he, or most observers, understand.

First of all, we have to accept that with good polling data, which we’ve mostly had, predicting an election the night before isn’t actually all that hard. The one exception is when the polls show a very close race. There are occasional upsets even then, but Nate Silver’s method (basically an advanced weighted polling average run through a Monte Carlo simulation) wouldn’t catch most of those. Still, Silver has done pretty well with this. His polls-plus method called 50 of 56 primaries this way. But his polls-only method, without his extra factors, still did better: 51 of 56.
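If you want to see the general shape of that technique, here is a toy Python sketch. Every number in it is invented for illustration, and it is nowhere near FiveThirtyEight’s actual model, but it shows the two moving parts: a weighted average of the polls, then a Monte Carlo simulation over the polling error:

```python
import random

# Hypothetical polls: (candidate A share, candidate B share, weight).
# The weights stand in for pollster rating, recency, sample size, etc.
polls = [
    (48.0, 44.0, 1.0),
    (46.0, 45.0, 0.8),
    (47.5, 44.5, 1.2),
]

def weighted_margin(polls):
    """Candidate A's lead, averaged across polls by weight."""
    total = sum(w for _, _, w in polls)
    a = sum(a * w for a, _, w in polls) / total
    b = sum(b * w for _, b, w in polls) / total
    return a - b

def win_probability(margin, error_sd=4.0, trials=100_000):
    """Monte Carlo: perturb the margin by random polling error, count wins."""
    wins = sum(1 for _ in range(trials) if random.gauss(margin, error_sd) > 0)
    return wins / trials

margin = weighted_margin(polls)
print(f"weighted margin: {margin:+.1f} points")
print(f"win probability: {win_probability(margin):.1%}")
```

The night before a race, when the polled margin is large relative to the error term, this sort of model will call the winner correctly almost every time, which is exactly why calling it the night before isn’t impressive.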

But this isn’t even very interesting. You could have done just about as well by simply using the RCP polling average the day before the races and looking at who was ahead. Silver seems to be including a few more polls than RCP and weighting them based on past performance. Both techniques are useful and probably provide him with a small edge over RCP. But in both cases, you’re still essentially just looking at the polls the day before a race.

What about before a race? Predicting the race the day before just isn’t very useful. By then, most anybody can do it if they have good poll data available. How did Nate Silver’s forecasts do a week before each race? A month before each race?

I don’t have the data right at hand, but the answer is “very poorly.” I spent a lot of time checking his forecasts this cycle – which means I watched an awful lot of his predictions swing from “heavily favors someone who isn’t Trump” to “90%+ chances of Trump winning.” Sometimes these forecasts took a month or more to change. Sometimes it happened over the course of a few weeks.

In other words, his “forecasts” were completely and utterly useless more than a week or so ahead of any given race.
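If you wanted to quantify that claim, the standard tool is a proper scoring rule such as the Brier score, computed separately at each lead time: lower is better, and an unskilled coin flip scores 0.25. Here is a short Python sketch with hypothetical numbers, since, as I said, I don’t have his full forecast archive at hand:

```python
# Each record: (days before the race, forecast probability for some
# candidate, whether that candidate actually won).
# These numbers are invented purely for illustration.
forecasts = [
    (30, 0.20, True), (30, 0.70, False),
    (7,  0.55, True), (7,  0.40, False),
    (1,  0.90, True), (1,  0.15, False),
]

def brier(records):
    """Mean squared error between forecast probability and outcome (0 or 1)."""
    return sum((p - won) ** 2 for _, p, won in records) / len(records)

for lead in (30, 7, 1):
    subset = [r for r in forecasts if r[0] == lead]
    print(f"{lead:>2} days out: Brier score = {brier(subset):.3f}")
```

My strong suspicion is that a real version of this table, built from his actual forecasts, would show his scores a month out hovering near coin-flip territory.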

And here is where Silver – and data driven journalism as a whole – breaks down. Psychohistory simply isn’t a real science yet. That far in advance, “data driven journalism” doesn’t give any better answers than experienced pundits. It can’t. The science of data analytics simply isn’t good enough, especially in cases like presidential primaries where past data is sparse.

Over time we can actually expect these forecasts to do somewhat worse than experienced pundits. Like their conventional brethren, the data driven journalists can’t help but let their biases creep in. We saw this very clearly this cycle with Silver, who was certain that Trump couldn’t win the nomination. This is often worse for “data driven journalists” because they are so convinced that their approach is purely analytical. Furthermore, the data driven journalists, although excelling in statistical techniques, lack the experience with the political system to make intuitive calls. When the data isn’t good, or the model isn’t good, their fallback intuition simply isn’t there the way it can be with a seasoned pundit.

On the other hand, we just sat through an election cycle where all the seasoned pundits called it wrong, too. Because seasoned intuition also has trouble with Black Swan events. Except for one thing: many of the seasoned veterans, although predicting a different outcome, did acknowledge that something seemed to be “different” about this cycle. Experience can give you that feel in a way that data often simply can’t.

Data driven journalism is not useless. It has its place. But it will never be the revolution in news that Silver and others have tried to make it.

Russell Newquist

My name is Russell Newquist. I am a software engineer, a martial artist, an author, an editor, a businessman and a blogger.

I have a Bachelor of Arts degree in Philosophy and a Master of Science degree in Computer Science, but I’m technically a high school dropout. I also think that everything in this paragraph is pretty close to meaningless.

I work for a really great small company in Huntsville, Alabama building really cool software.

I’m the owner and head instructor of Madison Martial Arts Academy, which I opened in 2013 less to make money and more because I just really enjoy a good martial arts workout with friends.

I’m the editor in chief of Silver Empire and also one of the published authors there. And, of course, there is this blog – and all of its predecessors.

There’s no particular reason you should trust anything I say any more than any other source. So read it, read other stuff, and think for your damn self – if our society hasn’t yet over-educated you to the point that you’ve forgotten how.
