I have taken a strange interest in ratings in the last year or so, an interest that admittedly developed not long after I started watching the first season of Hannibal.  For all of its critical acclaim—here’s a recent example from Matt Zoller Seitz—the show seemed like it would meet an untimely death at the hands of network cancellation last June.  While it was granted a stay of execution, it still finds itself trying to carve out an audience, no mean feat now that it has been scheduled, ironically, in the “Death Slot” of Friday nights at 10 p.m. here in the States.  There’s a good chance that I’m just going through some kind of Kübler-Ross grieving process in anticipation of the ax, but all of this has me thinking about the validity of television ratings in these days of new media and technology.

Traditionally, in the U.S., the Nielsen ratings have been the gold standard for measuring interest and viewership and, most importantly (at least in terms of the profit margin), for setting advertising rates.  (The Broadcasters’ Audience Research Board handles the ratings in the U.K.)  In times past, before VCRs, DVRs, streaming video, and the Internet, the process was fairly straightforward.  “Nielsen families” served as a random sample of the national audience, and, through viewing diaries and set-top boxes that “recorded the number of households tuned to each network during every minute of broadcast” (Murray 175), the Nielsen company collected data that was used to calculate exactly what we were watching, for how long, and how consistently.
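
To see why a few thousand households could stand in for the whole country, here is a toy sketch of the extrapolation, with every number invented for illustration (Nielsen’s real panel sizes and statistical models are considerably more involved):

```python
# Toy back-of-the-envelope sketch (not Nielsen's actual methodology or
# parameters): how a small random panel extrapolates to a national rating.
PANEL_SIZE = 5_000                # hypothetical number of "Nielsen families"
US_TV_HOUSEHOLDS = 115_000_000    # rough national figure, assumed for scale

panel_watching_show = 600         # panel households tuned to the show
panel_using_tv = 3_200            # panel households with any set turned on

rating = panel_watching_show / PANEL_SIZE      # share of ALL TV households
share = panel_watching_show / panel_using_tv   # share of sets actually in use
projected_audience = rating * US_TV_HOUSEHOLDS

print(f"Rating: {rating:.1%}, Share: {share:.1%}")
print(f"Projected households watching: {projected_audience:,.0f}")
```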

Since viewers did not have many options when it came to watching—they either watched a show when it was on or missed it and crossed their fingers that they would catch it in a summer rerun—they had to tailor their schedules to what they wanted to see.  If TV was important to them, then the programming schedule, to an extent, controlled what they did and when they did it.

Grab your remotes, and fast-forward to the twenty-first century.  With so many recording options and so many ways to watch, viewers increasingly have control over not just what and when they watch but, with tablets and smartphones, where and how they watch it.  We can record several episodes at a time and catch up on them weeks later, from one TV or device to another, or binge-watch a whole season of a favorite show (or an entire series) on demand, long after the original airing.

All of these options have me wondering just what the Nielsen numbers mean now and how accurate they actually are.  According to its website, Nielsen has adapted (and continues to adapt) its technology to deal with these radical changes in viewing habits.  Not only does the company still use random samples and viewing diaries, but its television meters can account for “‘time-shifted’ viewing—the watching of recorded programming up to seven days after an original broadcast” (my italics).  Nielsen also “incorporate[s] census-style data from third parties,” like Facebook, to track video usage on tablets and smartphones, and, last fall, the company “began providing the first-ever measure of the total reach of TV-related conversation on Twitter” through the Nielsen TV Twitter Ratings.  Next year, as Variety’s Todd Spangler reports, it will even “be able to attribute linear TV viewed on smartphones and tablets to [its] National TV ratings.”

While all of that sounds great, it raises a few other questions.  I can’t help but notice the phrase “up to seven days” in Nielsen’s analysis of recorded broadcasts.  In other words, if you lived in a Nielsen household and waited more than a week to watch a show that you DVRed, you would not be counted as part of the show’s Nielsen audience, any more than if you saw it online or as part of a binge-watch months later.  I just watched a Walking Dead episode that I recorded two weeks ago and have been catching up, on demand, on some True Detective episodes that came out last month.  For Nielsen, those viewings would largely be meaningless.
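
The cutoff amounts to a simple rule.  Here is a hypothetical sketch of it (the function name and the dates are mine, not Nielsen’s):

```python
from datetime import date, timedelta

# Hypothetical sketch of the "up to seven days" rule; the function name
# and the dates below are invented for illustration.
TIME_SHIFT_WINDOW = timedelta(days=7)

def counts_toward_ratings(air_date: date, watch_date: date) -> bool:
    """A time-shifted viewing counts only within a week of the broadcast."""
    delay = watch_date - air_date
    return timedelta(0) <= delay <= TIME_SHIFT_WINDOW

# A DVRed episode watched two weeks after it aired falls outside the window:
print(counts_toward_ratings(date(2014, 3, 2), date(2014, 3, 16)))  # False
print(counts_toward_ratings(date(2014, 3, 2), date(2014, 3, 8)))   # True
```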

Given how people might encounter a show now, I also have to wonder about the evolving, expanding definition of “audience” and what that could (and should) mean for Nielsen and the ratings.  As they continue to factor different kinds of viewing into their metrics, where do they draw the line, or do they draw one at all?  Should a person who watches a clip (or clips) of a show online be called a viewer?  Should a person who misses an episode of a favorite show but reads a recap online, to keep up with the plot, be included in some way?  What about Facebook posts and likes/dislikes?  To what extent should they influence the way that media research is tabulated?  (This has been the subject of some debate within the industry.)

And then there’s the whole question of piracy.  Since “pirates” are watching the video illegally, there are ethical reasons why they shouldn’t be factored into the equation.  They are watching, nonetheless; if a series had a substantial audience that way and there were an effective way to measure it, would it be worthwhile to include those viewers in the final tally?  (Check out this TorrentFreak article from last fall that named Game of Thrones the most pirated show “for the second year in a row”; Time Warner CEO Jeff Bewkes proudly called this dubious distinction “better than an Emmy.”)

Connected to that (in a technologically disconnected way) is the “cord-cutter” controversy.  As Techdirt’s Karl Bode notes, Nielsen has had difficulty in calculating the “small but growing and very important statistical reality” of viewers who have cut all ties with their cable companies and decided to get their television entertainment from Netflix, Hulu, or some other, possibly pirated source.  Bode cites Nielsen’s repeated rejection of this population as yet another example of the industry’s failure “to actually pay attention to the real world and changing consumer trends.”  If that is the case, then shouldn’t they be counted, too?

Do some television shows really have a larger following than the numbers indicate or than the network and ad executives realize?

I’m not sure if there’s an immediate answer to all of these questions or an easy fix for these potential gaps in the research, particularly as the technology continues to develop and advance and the audience becomes more and more fragmented.  After all, the most accurate measure would probably smack of Big Brother in our living rooms, on our trains, in our heads—some kind of constant monitoring of all media engagement that would tell the networks and the advertisers exactly how and when their programming was being viewed by all viewers, a comprehensive watching beyond the watching.  I doubt that a general audience would want their viewing habits scrutinized so closely.  (Depending on the conspiracy theory that you subscribe to, that might be happening now anyway.)

For all of what Spangler calls “the grumbling that Nielsen has taken far too long to progress in [its consideration of alternate viewing methods],” a recent David Lieberman article in Deadline suggests that some of it, like mobile viewing, may not be as much of a factor as we or the industry suspects (or, at least, not yet).  According to Lieberman, Nielsen overestimated the amount of time that viewers watched content on mobile devices “by a factor of—wait for it—538%”; the latest report brings “the monthly mobile viewing average” down to about an hour, “growing to 1 hour and 23 minutes a month in the last three months of 2013.”  If these numbers are correct, then most Nielsen viewers are still getting their TV the old-fashioned way, in spite of the other options at their disposal.
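
Reading “by 538%” as “the old figure was 538% higher than the corrected one,” my back-of-the-envelope reconstruction of the correction runs like this (the arithmetic is mine, not Lieberman’s):

```python
# My reconstruction of the correction, reading "overestimated by 538%"
# as "the old figure was 538% higher than the corrected one."
corrected_minutes_per_month = 60          # ~1 hour, per the latest report
overestimate = 5.38                       # 538%

implied_old_estimate = corrected_minutes_per_month * (1 + overestimate)
print(f"Implied earlier estimate: ~{implied_old_estimate:.0f} minutes/month")
# ~383 minutes, or roughly six and a half hours a month
```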

And now for the seedy underside to all of this.  As John Herrman points out in a rather disturbing Splitsider essay, the focal point of this research for the industry ultimately isn’t the programming content; it’s the advertising.  “[T]he numbers that networks and advertisers actually use,” he explains, “— to sell ads, to set prices, and to decide on the fate of a show — are commercial ratings.”  Advertisers spend boatloads of money to run their commercials during a particular program because the ratings hold the promise that viewers will watch them and, ultimately, buy their products.  Like an eagle-eyed ratings Santa, Nielsen also monitors viewer attempts to avoid the ads and is quick to leave them with coal in their stockings for their troubles; according to Herrman, “If every Nielsen Family watched a show the day after it aired but skipped through all its ads, that show would probably be canceled.”
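
To make Herrman’s distinction concrete, here is a crude sketch with invented panel numbers (not how commercial ratings are actually computed; real “C3” figures average commercial minutes, not whole episodes):

```python
# Crude sketch of program ratings vs. commercial ratings, with invented
# numbers; real commercial ratings are averaged minute by minute.
PANEL_SIZE = 5_000

watched_program = 600   # panel households that watched the episode at all
watched_ads = 150       # of those, households that sat through the commercials

program_rating = watched_program / PANEL_SIZE
commercial_rating = watched_ads / PANEL_SIZE

# A show can look healthy by the first number and doomed by the second:
print(f"Program rating: {program_rating:.1%}")        # 12.0%
print(f"Commercial rating: {commercial_rating:.1%}")  # 3.0%
```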

The reason why we watch, then, isn’t what matters most to the decision-makers.

As a case in point and an illustration of what Herrman says, consider all of the legal hullabaloo over Dish Network’s Hopper technology, which allows viewers to record primetime programs without the commercials and thus negates the ads’ financial value.  The company recently came to an agreement with some of the networks that would disable the feature temporarily and let the ads go through; on the heels of this decision, the Hollywood Reporter’s Alex Ben Block expects “that other networks will now look to cut similar deals.”

So, the cold, hard reality of the ratings is that the audience that makes (or influences) the decision about what stays and what goes on television is only a fraction of the total audience that actually watches, a fraction that sits through the commercials as well as the content itself.  The technology may have changed, but the bottom line hasn’t.

Regardless of the research or the numbers, I have to admit that there’s something, at times, that still seems arbitrary about it all, especially in those moments when I don’t agree with the ratings or the decisions.  I want to believe that there’s a ghost in the machine.  But even if there is, I guess that it doesn’t make a difference.  It just means that the ghost and I don’t have the same taste in television.

Douglas L. Howard is Chair of the English Department on the Ammerman Campus at Suffolk County Community College, editor of Dexter: Investigating Cutting Edge Television (2010), and co-editor of The Essential Sopranos Reader (2011) and The Gothic Other (2004).