
This is going to be a bit of a weird post, and also much longer than usual. But I wanted to take the time to show how difficult analyzing trends and interpreting primary data can be. Hopefully, it will educate readers and make you appreciate Deloitte research like TMT Predictions even more than you already do!

Last week, Nielsen released its quarterly Cross Platform Report. A few hundred articles were written about the report in the days following, many of them referencing this column from Peter Schwartz in noted tech blog All Things D. He drew a number of conclusions, some of which may or may not be right, but which were not justified by the Nielsen report.

Why does this matter?


  1. The current analytical consensus believes that online and streaming video is about to displace traditional TV.
  2. That belief is being reinforced by data such as the Nielsen report… but that data is (by and large) not being properly examined.
  3. People are making decisions based on that data. If the data is being wrongly interpreted, then people may make wrong decisions.

So think of this post as a primer on how to interpret data. I happen to use the Nielsen report as a case study, but the methodology will work for any media report.

Before we move on, two comments about Nielsen data. One, the data is not self-reported. Panelists are not being asked to remember what they watched, or what they think they watch. Nielsen uses highly accurate passive measurement technology that tracks real-world behaviour. And two, the sample size is huge. Many studies look at a few hundred users over a few days, and their statistical significance can be questionable. Trends discerned from tens of thousands of viewers over six months are likely to be robust. In other words, this is pretty much the most accurate consumer data you're going to see.

Are you ready? Is the PDF downloaded? Are your pencils sharpened? Good… because there will be a short quiz after.

1. Skip the introductory stuff, and start at the bottom of page two, in the box titled "methodology."

2. The breakdown of the 13,128 content streamers into five quintiles is perfectly acceptable. But, before analyzing any conclusions, it is really important to note the sixth category of non-content streamers - another 7,253 subjects. Out of a total of 20,381 subjects, the Nielsen charts that follow exclude almost 36 per cent of all those measured. That is a legitimate data analysis decision, but it is a huge number, and the data Nielsen shows later applies only to a subset of the total viewing audience.

3. Nielsen also did a quintile split on TV viewers. As you can see, there is a sixth category for those who don't view TV… and it has only 95 subjects in it. Roughly half a per cent (0.466 per cent, to be exact) of the Americans studied don't watch any TV.

That is a pretty stunning number. Interestingly, I don't think that got mentioned in any of the stories. Two reasons: I suspect most reporters and bloggers don't bother reading the methodology sections; and everybody knows that North Americans watch a lot of TV. Writing a story about "how TV is doomed" is fun and exciting and draws lots of hits and links. Writing a story about how TV is still being watched is less sexy.
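For those following along with a calculator, the subset arithmetic in points 2 and 3 can be checked in a few lines of Python (the figures come straight from the methodology box):

```python
# Figures from the Nielsen methodology box discussed above.
streamers = 13_128      # subjects split into five streaming quintiles
non_streamers = 7_253   # sixth category, excluded from the streaming charts
non_tv_viewers = 95     # subjects who watch no TV at all

total = streamers + non_streamers
print(total)                                        # 20381
print(round(non_streamers / total * 100, 1))        # 35.6 per cent excluded
print(round(non_tv_viewers / total * 100, 3))       # 0.466 per cent watch no TV
```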


4. In bold at the bottom of the page, the report authors say that their data reveals "two interesting and unprecedented correlations between content streaming and TV viewing." As always, exercise caution when you see the word "correlation": As any stats person will tell you, correlation does not imply causation. The fact that a relationship exists is interesting and should be explored further, but is not meaningful on its own. Check out this page for some pretty humorous examples of things that are correlated, but almost certainly not linked in any causal fashion.
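To make the point concrete, here is a minimal Python sketch, using entirely made-up numbers, of two series that correlate strongly even though neither has anything to do with the other:

```python
# Two hypothetical yearly series that both happen to trend upward.
# The values are invented for illustration only.
def pearson(x, y):
    """Plain Pearson correlation coefficient, no libraries needed."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

cheese_consumption = [29.8, 30.1, 30.5, 30.6, 31.3]  # hypothetical
cs_doctorates = [861, 930, 1024, 1130, 1177]         # hypothetical

r = pearson(cheese_consumption, cs_doctorates)
print(round(r, 2))  # roughly 0.95: strongly correlated, zero causation
```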

5. Check out the chart at the bottom of page 3. First, the nice smooth curves are drawn by the report authors. The curves are artistic… but they are also potentially misleading or confusing: Straight lines wouldn't have looked as good, but they would do away with the suggestion that Nielsen has intermediate data points along the X axis, which it doesn't.

6. Yes, it seems clear that those who stream the most (quintile 1) watch the least TV. But the relationship is not linear, and it does not hold across the quintiles: Those who streamed the second most were also the second heaviest TV watchers in Q4 2010. The third heaviest streamers were the heaviest TV viewers in the same period.

The Q1 2011 data is equally variable: The second heaviest streamers were once again the second biggest TV viewers, for instance. But the datum that jumps out at me should be very worrying for those trying to draw conclusions: The fourth heaviest streamers (streaming only about 18 seconds of video per day) watched the fourth most TV, and were not statistically significantly different from quintile 1, who were streaming almost 19 minutes per day (the difference in TV viewing between quintiles 1 and 4 was only 0.5 minutes, or 0.2 per cent).

If there is a relationship here, it is not linear. The statement "the more you stream, the less TV you watch" is not supported by the data. You can accurately say, "the heaviest streamers watched less TV than the other four quintiles." Which is interesting indeed, but not what most articles said.
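One quick sanity check for a claim like "the more you stream, the less TV you watch" is to test whether TV minutes rise at every step as streaming minutes fall across the quintiles. The sketch below uses illustrative numbers: Only the quintile 1 and quintile 4 figures match the report; the rest are hypothetical stand-ins consistent with the rankings described above.

```python
# Quintiles ordered from heaviest streamers (Q1) to lightest (Q5).
# Streaming: Q1 (18.8 min/day) and Q4 (0.3 min/day, i.e. 18 seconds)
# come from the report; the others are hypothetical.
streaming = [18.8, 6.0, 2.5, 0.3, 0.1]
# TV minutes/day: 272.4 (Q1) and 272.9 (Q4) come from the report;
# the others are hypothetical values matching the rankings noted above.
tv = [272.4, 288.0, 290.0, 272.9, 281.0]

# If "more streaming means less TV" held, TV viewing would rise at
# every step as streaming falls. Check each adjacent pair of quintiles.
strictly_rising = all(tv[i] < tv[i + 1] for i in range(len(tv) - 1))
print(strictly_rising)  # False: the ordering breaks partway down
```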

7. There is a real difference in the variability of the data on the two axes. The streaming axis is highly variable and skewed: Daily streaming minutes in Q1 2011 range from 0.1 to 18.8, for a midpoint of 9.45 minutes plus or minus 99 per cent. Meanwhile, the TV viewing numbers range from 272.4 to 290 minutes per day, or 281.2 minutes plus or minus 3.1 per cent. I have a major problem with drawing conclusions when one variable has such a broad range and the other such a narrow one. Even very small errors could have large effects.
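The "midpoint plus or minus X per cent" figures are easy to verify yourself; here is the calculation in Python, using the ranges quoted above:

```python
# "Midpoint plus or minus X per cent": take the midpoint of the range,
# then express half the range as a percentage of that midpoint.
def midpoint_spread(lo, hi):
    mid = (lo + hi) / 2
    half_range_pct = (hi - lo) / 2 / mid * 100
    return mid, half_range_pct

# Q1 2011 ranges from the report.
stream_mid, stream_pct = midpoint_spread(0.1, 18.8)   # daily streaming minutes
tv_mid, tv_pct = midpoint_spread(272.4, 290.0)        # daily TV minutes

print(round(stream_mid, 2), round(stream_pct))  # 9.45 99
print(round(tv_mid, 1), round(tv_pct, 1))       # 281.2 3.1
```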


8. It is also worth noting that the quintiles for streaming content are badly clumped. The first quintile of heaviest users is nicely separated from the pack, but the remaining four are distinguished by only three minutes of streaming video per day. That is not a lot of variation from which to discern patterns and quarter-over-quarter trends.

9. Another possible confounding factor is seasonality. As Nielsen notes in every report, Americans' TV watching is seasonally variable, which we can see in the TV viewing midpoint being about 5.4 per cent higher in Q1 2011 than in Q4 2010. One would think that streaming content and TV viewing would be similarly affected by seasonal factors, but no one has ever established that, to the best of my knowledge.

10. On to page 4, and the top left hand chart. Subset analysis is always fun - but because the number of subjects is lower (not stated in the report, but I would assume about 25 per cent of the subjects are in the 18-34 demographic) the data is also statistically less valid.

This one has many of the same problems as the charts on the preceding page: The second through fifth quintiles are tightly clumped with meaningless differences in streaming minutes; the TV viewing variability is wider (229.3 plus or minus 7.5 per cent, or double the range of all viewers) but that could be due to the smaller data set and is still quite a narrow range; and the second heaviest streamers are in a virtual dead heat to be the heaviest TV watchers. Nielsen is correct in saying that "more pronounced behaviours emerge," but I am not sure that isn't just an artifact of the subset size.

11. The top right chart is very problematic. The quintiles here are based on TV viewing, not content streaming, and once again cover only the 18-34 demographic. It is true that those who watch the least TV do the most streaming… but can a difference of two minutes per day between that quintile and the cluster of the other four quintiles really be considered meaningful?

Important note: This data covers virtually the full set of 18-34 year old respondents, which is why the daily streaming minutes have dropped so far. Once you include the almost 40 per cent of Americans who stream no content, the average American 18-34 year old -- supposedly the cohort so often reported as "massively shifting to streaming" -- watches about 35 times as much TV as streamed content.

Another aside: That top quintile is quite something. We've always known that older people and kids park themselves in front of the TV all day, but this data says that 20 per cent of American 18-34 year olds (a group that one would think has better things to do) watch 522 minutes of TV per day - that's almost nine hours.

12. I applaud Nielsen's caution in describing the results as an "emerging behaviour shift." That is statistics-speak for "we think there may be something going on, but we wouldn't want to put a lot of money on it." As in the previous charts, we have all sorts of problems. The range of streaming minutes is ridiculously narrow in Q4 2010 and not much better in Q1 2011: Instead of a one-minute spread between the top and bottom quintiles, it is all the way up to 1.3 minutes.

13. But the real red flag is that the various lines cross. Yes, the lightest TV viewers stream the most. But the heaviest viewers stream the second most in both Q4/10 and Q1/11. My favourite is that those who watch 598 minutes of TV per day (almost 10 hours) and those who watch 146.7 minutes - a roughly fourfold difference - both stream an identical three minutes of content per day. Couldn't I legitimately write a headline around that saying there is no relationship between streaming and TV watching?

The rest of the report is the usual Nielsen stuff: great data, clearly presented, very transparent with good footnotes, and indispensable to anyone trying to figure this stuff out.

A couple of additional points:

None of my comments are intended to criticize the folks at Nielsen. They present all their data and all their methodologies. Yes, they draw charts and present data to make their conclusions look as strong as possible - but everybody does that. At no point do they willfully misinterpret, misstate or try to hide any data.

Nor am I really criticizing the people who wrote up this study, whether blogger, journalist or analyst. I think most of the articles I read say things that are probably true…but the statements were perhaps more sweeping and stated with greater certitude than the primary data justifies. Not the end of the world, but at least it gave me great material for a column.
