Monday, March 24, 2008

Vegetable Soup Analogy Debunked: Should Television's Small Sample Be Sacked?

by Frank S. Foster, Monday, March 17, 2008, TV Board @ MediaPost.com

I HAD A VERY INTERESTING conversation the other day with an industry veteran regarding the state of the television audience measurement industry. As someone who has been a spokesperson for new ways of measuring television, I am occasionally approached by people who assume that I have no idea how it has always been done. The conversation began innocently with concerns about set-top-box data, each of which I answered simply and directly.

After 30 minutes, he had nothing more to say. When I suggested counting one in five anonymous households and mathematically inferring demographic attributes would be a more acceptable solution than projecting from a small panel, he became quite agitated. "The panel is the only acceptable solution," he cried. "No other approach allows us to taste the soup!"

The infamous vegetable soup analogy had been thrown down. Now, this particular gentleman should have known better -- he was granted an advanced degree from a prestigious college -- because the analogy borders on the ridiculous. But when faced with an argument he was not winning, he turned to something familiar. I told him I was running late but that I would address the issue in my next MediaPost TV Board column. So here goes.

I googled "Nielsen Media Research Soup" and found a white paper of sorts with Nielsen's logo entitled "What TV Ratings Really Mean." Below is an excerpt:

"Actually, a representative sample doesn't have to be very large to represent the population it is drawn from. For example, you don't need to eat an entire pot of vegetable soup to know what kind of soup it is. A cup of soup is more than adequate to represent what is in the pot. If, however, you don't stir the soup to make sure that all of the various ingredients have a good chance of ending up in the cup, you might just pour a cup of vegetable broth. Stirring the soup is a way to make sure that the sample you draw represents all the different parts of what is in the pot."

"A cup of soup is more than adequate to represent what is in the pot." That seems a fair assumption, but the application to television audience measurement is problematic. I believe Nielsen projected that there will be 112,800,000 television households in the U.S. market for the 2007-2008 television season. I believe it is generally accepted that the effective size of the current national people meter panel is 10,000, although many think it will grow to 17,000 by 2011.

To make my point, I will be generous and assume the effective sample will be 20,000 at some point, and that the number of television households will remain constant. 20,000 out of 112,800,000 equates to 1 in 5,640 households. So how small does that make Nielsen's cup?

When I was a child, the biggest pot in my mother's home was a five-gallon soup kitchen pot. There are 16 cups to a gallon, so there must have been 80 cups in her soup kitchen pot. So Nielsen's "cup" isn't really a cup. There are 256 tablespoons in a gallon, so there must have been 1,280 tablespoons in my mother's soup kitchen pot. So Nielsen's "cup" isn't really a tablespoon. There are 768 teaspoons in a gallon, so there must have been 3,840 teaspoons of soup in the soup kitchen pot. But even a teaspoon is too big: Nielsen's "cup" is smaller than a teaspoon. According to Wikipedia, there are 60 "drops" in a teaspoon, so there must have been 230,400 drops in my mother's soup kitchen pot. Which means Nielsen's "cup" actually translates to 41 drops -- or approximately 2/3 of a teaspoon.
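For anyone who wants to check the conversions, here is the same arithmetic as a quick script. It is only a sketch built on the assumptions stated above (the projected household count, the generous 20,000-home panel, and my mother's five-gallon pot):

```python
# Back-of-the-envelope check of the "soup" arithmetic above.
# All inputs are the illustrative assumptions from this column.
tv_households = 112800000      # assumed U.S. TV households, 2007-2008 season
panel_size = 20000             # generously assumed effective panel size

pot_gallons = 5                # my mother's soup kitchen pot
tsp_per_gallon = 768
drops_per_tsp = 60             # per Wikipedia

drops_in_pot = pot_gallons * tsp_per_gallon * drops_per_tsp    # 230,400 drops
households_per_home = tv_households / panel_size               # 5,640
nielsen_cup = drops_in_pot / households_per_home               # about 40.9 drops

print("One panel home stands in for %d households" % households_per_home)
print("The pot holds %d drops" % drops_in_pot)
print("Nielsen's 'cup' is %.1f drops (about %.2f teaspoons)"
      % (nielsen_cup, nielsen_cup / drops_per_tsp))
```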

What further complicates the analogy is that Nielsen's one pot is used to measure television in its entirety -- which consists not of one network, but hundreds. How do you measure hundreds of networks with just 41 drops? Assume a broadcast network had a great night and 20% of the nation's population watched that network's program. Nielsen's "cup" for that network would be 20% of 41 drops, or approximately eight drops. Out of those eight drops Nielsen would have to differentiate by age, sex and race. A popular network with a 2.0 rating would have one drop of soup, give or take. The ratings for the vast majority of networks would be based on much less soup than that.
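And here is the same sketch carried one step further, scaling the roughly 41-drop "cup" down to a single network:

```python
# Scaling the ~41-drop "cup" down to individual networks (illustrative only).
nielsen_cup = 40.9             # drops, from the calculation above

big_night_share = 0.20         # 20% of households watching one program
big_night_drops = nielsen_cup * big_night_share            # about 8 drops

household_rating = 2.0         # a popular network's 2.0 rating
rating_drops = nielsen_cup * household_rating / 100.0      # roughly 0.8 of a drop

print("A 20%% broadcast night: %.1f drops" % big_night_drops)
print("A 2.0-rated network:   %.1f drops" % rating_drops)
```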

Keep in mind we have not discussed the "stirring" of the sample, nor have we talked about those cups that are drawn from the pot and dumped because they would not agree to take part in the tasting. We have also limited our analysis to Nielsen's national panel. Their local panels are an order of magnitude smaller and more problematic. If we include geographic targeting (Sys Code Clusters, for example) the issue becomes further complicated. Of course, addressable advertising cannot be measured by a sample at all.

To my learned colleague with whom I spoke the other day, this has been a real-world examination of the vegetable soup analogy. The small sample approach was fine when collecting more data was expensive and fraught with problems, but it makes little sense today. Ultimately, it will be the television industry that decides if small panels remain the "gold standard." I continue to believe counting one in five households and using mathematical inference to determine demographics is a vastly superior solution.

Undoubtedly, there will be those who disagree. Rebuttals can be posted on this blog or addressed directly to me. My contact information is listed below.

Frank S. Foster (foster.frank@evadconsulting.com) is a senior consultant at EVAD Consulting and is currently working with TNS in a strategic capacity related to its television audience measurement efforts. Prior to EVAD, Mr. Foster was the co-founder and president of erinMedia.

Friday, March 14, 2008

Media Roundtable Luncheon

I had the opportunity to attend and speak at a George Allen Florida All Media Roundtable yesterday. There were some very interesting speakers at this Roundtable including:

George Measer - CEO of a local group of community newspapers and past president of the National Newspaper Association. Mr. Measer spoke about the decline in circulation for daily newspapers nationwide; interestingly enough, community newspapers have been shielded from this decline and are actually growing. There seems to be a strong demand for hyper-local news.

Norman Rau - President of Sandusky Radio, with ten radio stations in Phoenix and Seattle. Mr. Rau spoke on the subject of radio, obviously. What struck me about his speech was his company's attempt at incorporating the internet into its portfolio without much sales success. All of their radio stations are currently streamed online, Mr. Rau said, but since roughly 80% of a radio station's business is local, local businesses do not care about the worldwide listeners attracted to an online stream. The number of local visitors listening to their stations online is relatively low, and thus they have had trouble quantifying the value to their clients.

Jerry Bilik - VP of Creative Development for Feld Entertainment. Feld Entertainment is the current owner of the Ringling Bros. circus and also holds the exclusive contract to produce the Disney on Ice shows. He talked about how, even in a declining economy, live family shows like Disney on Ice and the circus thrive. Even as families struggle with the high costs of gas, food and housing, they will still find money to bring their children to these kinds of events, simply for the morale factor. Mr. Bilik also mentioned a great piece of breaking news: the Ringling Bros. circus is actively searching for locations to FINALLY bring the circus back home to Sarasota, Florida for a year-round show and winter headquarters! This would be huge for the Sarasota area. If anyone knows of potential locations in Sarasota that may meet the needs of the circus, contact me and I can put you in touch with Mr. Bilik.

Bob Chandler - Former CBS Senior Vice President and supervisor of 60 Minutes. Mr. Chandler spoke about the upcoming 40th anniversary of 60 Minutes and told us how the show originally came to be. Even I got nostalgic about that period of TV history, and I wasn't even around then!

I spoke on the subject of TV and the internet, and the multitude of ways our company is trying to provide content across every platform we can imagine - from TV to newspapers to online to cell phones. Wherever people are, that's where we're going to be. There seemed to be a lot of interest in this new world of media, and a lot of disbelief at the way things have advanced and changed. I received some very insightful questions, particularly on the subject of "Information Overload." I just covered information overload in a previous post, so it is definitely a problem in my mind as well.

All in all, it was an insightful lunch, and I was delighted to share in the wisdom of these past giants of the media industry.

Wednesday, March 12, 2008

Lazy Mechanic vs Industrious Mechanic

On the heels of my previous post, here is a story that has always struck me as a good approach for a Research Director to take. In fact, I've had this story printed out and posted on my cube wall for years.
____________

THE LAZY MECHANIC: First, to preface my system for doing library research, let me tell the story of the Lazy Mechanic and the Industrious Mechanic. According to my Dad, who comes from a line of machinists, engineers and other early technocrats, it's better for a factory to hire a lazy mechanic to service its machines than an industrious one.

He says that an industrious mechanic will carefully oil, service and repair a machine so that it works perfectly (with lots of his help and labor) and never needs replacing. This will leave you with an outdated, labor-inefficient machine that is in too good repair to justify scrapping.

A lazy mechanic, on the other hand, doesn't want to spend his day oiling, repairing and servicing. So he will use his mechanical knowledge to alter the machine to make it more efficient (expending extra time at the beginning) so he needn't work on it in the future. He will also skip any kind of servicing that isn't actually necessary to the operation of the machine. In short, he will expend his energy on making it a better machine, rather than maintaining an inefficient status quo.

This is one of life's important lessons. If something that should be simple takes a lot of time, trouble, and effort to do, chances are the system is set up wrong. In that case you need to apply your effort to finding a better system that works faster, rather than simply dumping all your effort into forcing an inefficient method to work for you. This will take extra time at the beginning, and save you time in the long run.

Drowning in Data

The biggest adjustment to Local People Meters for me has been the absolute mass of information that is thrown at my head on a daily basis. I'm beginning to learn that too much information can be just as problematic as too little. With so much data lying in front of you, it is extremely difficult to put your hands on the tiny bit of information that will actually help your station's news, marketing and sales departments advance their work.

Most TV research directors are used to delving deeply into four ratings books a year, analyzing and cutting the data in as many slices as possible. But with ratings every month, every week, every day, there are simply not enough hours in the day to go to that granular level for every piece of data that comes my way.

My market has been LPM for nearly eight months now. The first few months of LPM were all about understanding the differences in methodology and ratings. The next few months were spent delving into the accuracy of the meter sample, the A/P meters themselves and potential crediting errors. These last few months (and moving forward) are all about establishing regular reports that pull the most relevant information to the forefront. I wish I could have done this going in, but it's a process that requires the input of news, marketing and sales. What do these people NEED in order to do their jobs, and what kind of report can I create to communicate the corresponding research for that need?

Time is always an issue. A typical book analysis in the old quarterly sweeps cycle would entail about two weeks preparing for sweeps, four weeks in sweeps, and four to five weeks of post-sweeps analysis. That schedule is NOT possible for one person to keep on a monthly basis.

Thus, the best RDs in an LPM world adapt and learn to create tools that do the work for them. One RD called them "Excel Widgets." I am a big fan of Excel Widgets, and since I am a self-professed lazy Research Director, the more I can make my tools do for me, the less I have to do on a daily basis.

One example of a tool I created is a Meter Sample Monitor. I built the Excel grid so that I can download the Excel versions of the meter sample characteristics on a weekly schedule, drop that file into my Meter Sample Monitor workbook, and voila! Excel does the calculations for me, tracks the weekly progress of the sample and even highlights in red and orange the areas where the meter sample is slipping too far off the universe estimates. The whole process takes 30 seconds.
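For anyone curious what that widget does under the hood, here is a rough sketch of the same logic in Python rather than Excel. The characteristics, column name, file name and thresholds below are hypothetical placeholders, not Nielsen's actual file layout or my station's actual tolerances:

```python
# Rough sketch of the Meter Sample Monitor logic -- illustrative only.
# Characteristic names, column names and thresholds are hypothetical.
import pandas as pd

UNIVERSE_ESTIMATES = {           # hypothetical universe estimates (% of HH)
    "HH with children": 33.0,
    "Hispanic HH": 16.0,
    "Cable/ADS HH": 88.0,
}
WARN_PTS = 2.0    # percentage points off the UE -> flag orange
ALERT_PTS = 4.0   # percentage points off the UE -> flag red

def check_sample(weekly_file):
    """Compare this week's meter sample characteristics to universe estimates."""
    sample = pd.read_excel(weekly_file, index_col=0)   # one row per characteristic
    report = pd.DataFrame({"universe_pct": pd.Series(UNIVERSE_ESTIMATES)})
    report["sample_pct"] = sample["pct_of_sample"]     # hypothetical column name
    report["diff_pts"] = report["sample_pct"] - report["universe_pct"]
    report["flag"] = [
        "RED" if abs(d) >= ALERT_PTS else "ORANGE" if abs(d) >= WARN_PTS else ""
        for d in report["diff_pts"]
    ]
    return report

# Weekly routine: download the sample characteristics file, then:
# print(check_sample("meter_sample_week_12.xls"))
```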

Monday, March 10, 2008

From St. Pete Times Critic Eric Deggans' Blog:

March 10, 2008
Local TV News Ratings Show Typical WFLA/WTVT Fight
I've only got detailed numbers for the core viewer demographic most local TV stations sell to advertisers, those aged 25 to 54.

But among this group in February's ratings race, WFLA-Ch. 8 was slightly on top in early evening and 11 p.m. newscast timeslots, pretty much flipping places with close rival WTVT-Ch. 13 from results we saw in November's ratings race.

The two stations are pretty much even in the early morning, from 5 a.m. to 7 a.m. And though some local folks insist that the switch to daily demographic information back in October has changed the way advertisers buy time, the local TV stations continue to get $$ for promoting their stations from national sources in the sweeps months of February, May, July and November -- so that's when you see billboards, radio ads and other, non-TV advertising.

Here are the February numbers, among viewers aged 25 to 54, for the ratings junkies, courtesy of WTVT. The first number is the rating, the station's audience as a percentage of all Tampa Bay area residents aged 25 to 54 in TV households. The second number is the share, the same audience as a percentage of those aged 25 to 54 with their TVs on at the time:

5 p.m. newscasts:
Dr. Phil/WTSP N/A
WFLA 2.0/9
WTVT (Fox 13) 1.7/7
WFTS-Ch. 28 (ABC Action News) 0.8/4

5:30 p.m. newscasts:
WFLA 2.0/8
WTSP (shows Dr. Phil; no news ratings)
WTVT 1.6/6
WFTS 0.9/3

6 p.m. newscasts:
WTSP-Ch. 10 1.8/6
WFLA 2.3/8
WTVT Fox 13 2.2/8
WFTS Action News 1.1/4

6:30 p.m. newscasts (network, except for WTVT):
WTSP 1.5/5
WFLA 2.5/8
WTVT Fox 13 2.3/7
WFTS Action News 1.5/5

10 p.m. newscasts:
WFLA on WTTA-Ch. 38 0.4/1
WTVT Fox 13 News 11p 4.3/10

11 p.m. newscasts:
WFLA 2.1/6
WTSP-Ch. 10 1.5/5
WTVT Fox 13 News 11p 1.9/6
WFTS Action News 1.6/5

6 a.m. to 7 a.m. newscasts:
WFLA 2.7/16
WTSP 0.4/2
WTVT 2.6/15.5
WFTS 0.8/5
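A quick note for non-research readers on those number pairs: the rating and the share describe the same audience, just divided by different bases. Here is a made-up illustration (the figures below are hypothetical, not actual Tampa Bay estimates):

```python
# Made-up numbers, purely to show how a "2.3/8" pair is derived.
p25_54_universe = 1200000    # hypothetical persons 25-54 in metro TV households
p25_54_using_tv = 345000     # hypothetical persons 25-54 with a set on at 6 p.m.
station_audience = 27600     # hypothetical persons 25-54 watching one station

rating = 100.0 * station_audience / p25_54_universe   # 2.3 -- % of everyone
share = 100.0 * station_audience / p25_54_using_tv    # 8.0 -- % of those watching

print("Rating: %.1f  Share: %.0f" % (rating, share))
```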

Tuesday, March 4, 2008

Research is all about the source

I couldn't say it any better myself. From Terry Heaton's PoMo Blog:

Research is All About The Source
In one week, we’ve had two “studies” telling us different things about where Americans get their news.

In a report from Magid and Hearst-Argyle, most people choose local TV news. The study made mouths water and lips smack as a chorus of “we told you so” rang from the board rooms of various local broadcast companies.

Not only is local TV news content the biggest audience draw for news and information on-air and on digital platforms – it is also the most effective video advertising platform, according to new research results…

But a second report, this one from Zogby International, reveals that the Internet is the top source of news for nearly half of Americans. Two thirds, the survey found, are dissatisfied with the quality of journalism, calling it “out of touch.”

So who do you believe? Both can’t be right. The truth is neither is right. The Magid study is of 2,700 viewers of local news. Of course, they’d say that local news is their top choice. The Zogby study is of 1,979 adults on the Web. Of course, they’d say the Web is their top source for news.

We badly need research in this area, but we shouldn’t pay any attention whatsoever to studies like these, because research is all about the source.