It's better than jumping to a conclusion, but how much?
by Barry Henderson
Public opinion surveys are only as good as their sampled populations and their questions. Sounds pretty simple. It isn't. Just the fact that they all have those "plus or minus" factors of a certain, usually small, percentage of variance in projected accuracy tells part of the story of the complexity of obtaining reasoned and reasonably accurate results from a poll of some segment of the public on a certain issue.
The polling organization conducting the survey usually determines that accuracy factor and publishes it alongside the conclusions. Sometimes it seems a better idea to have had an independent assay of that factor and the survey itself by an outside organization, but that is rarely done, except by opponents of the poll's conclusions whose assessments are skewed by their own biases.
Nonetheless, polling is a way of American life. We sample each other's opinions on virtually everything, from who will preside over the nation to whether the next commemorative postage stamp should depict an Elvis or an Elvis impersonator to which color jelly bean connotes happiness. Nothing is sacred, immune from the opinion surveyor.
The Roper Center in Connecticut maintains what it calls "by far the most complete collection of public opinion information in existence," harboring data from as far back as 1935 in thousands of polls conducted in the United States and 70 other countries. It was founded in the 1940s by Elmo Roper, who, along with George Gallup, pioneered the polling process.
Sift through that well-catalogued library when you have a few years to spend on it, and you might learn something. The Roper organization also conducts surveys for anyone with the money and the desire to tap into a given population base for an opinion sample.
In Knoxville, there is a well-established polling organization that conducts mostly state, local, and regional surveys. It's the Social Science Research Institute at the University of Tennessee. It recently published results of a broadly conceived poll on issues that face Knoxvillians. The sampling included 429 randomly selected registered voters in the city, and its margin of error was +/- 4.7 percent, representing what its takers described as a 95 percent confidence level.
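The institute's published numbers can be checked against the standard margin-of-error formula for a simple random sample. This is a sketch, not the institute's own method: it assumes simple random sampling, the worst-case proportion p = 0.5, and the conventional z value of 1.96 for a 95 percent confidence level.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error for a proportion estimated from a simple
    random sample of size n, at the confidence level implied by z.
    p = 0.5 is the worst case (largest margin)."""
    return z * math.sqrt(p * (1 - p) / n)

# 429 respondents at 95 percent confidence
moe = margin_of_error(429)
print(f"+/- {moe * 100:.1f} percent")  # prints "+/- 4.7 percent"
```

Run against the poll's sample of 429, the formula reproduces the published +/- 4.7 percent figure, which suggests the institute used this conventional calculation.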
Its focus was on such things as the 2003 mayor's race, the financing of a convention center hotel, city services and taxes, downtown revitalization, and Metro government and annexation. Pretty broad, but interesting subjects, no?
So when the first question, on the mayor's upcoming election, produced survey results that gave Bill Haslam a 7-point lead over Madeline Rogero, with 40 percent undecided, in the first half of August, its accuracy was immediately challenged. Not by Rogero supporters, but by Bill Lyons, Haslam's campaign manager, who said the Haslam margin was much greater. The irony in that challenge is in the fact that Lyons, a UT political science professor and frequent political commentator, helped set up the institute that conducted the poll and was its executive director for several years.
He was quoted at the time as saying the undecided number was way too big and that he would have taken a much larger sample.
A larger sample would have been nice, but the flaw, if any, in the sampling was that it included registered voters rather than likely voters, and in recent years nearly two thirds of the city's 95,000 or so registered voters have not voted in any given city election.
Getting to likely voters is more difficult and more costly, but it would have helped produce a more accurate poll. It's difficult because the best way to determine "likely" is to examine voting records to identify those who have been voting, a time-consuming process that makes it more expensive.
Just asking the question, "Do you intend to vote?" is useless, because in a democracy it's expected of citizens, and they will probably say yes, even if they don't. They may even genuinely "intend" to, but the records show they haven't. The answers to such questions don't tell the real story because, in the pollsters' words, the questions themselves have "social desirability" built into them. People will often answer such questions based on what they think they ought to do, rather than what they will actually do.
Without the likely voter factor, the poll leaves your guess as good as ours, or Lyons', as to what Haslam's lead was at the time.
Other biases may be interjected, often unintentionally, into surveys such as the one the UT institute conducted in early August.
On the question of whether the citizens polled would prefer to have a convention center hotel publicly or privately funded, the sampling preferred private funding overwhelmingly. That shouldn't surprise anyone. People who are in one way or another taxpayers ordinarily don't like the idea of tax money paying for or subsidizing private enterprises.
None of this is to say that public opinion surveys themselves lack value. They are an indicator, in every case, of something or other. Just remember: it is well to consider exactly who was asked, in the demographic sense, and exactly what they were asked. Judge for yourself whether there were biases and what they were. Then throw at least five grains of table salt over your shoulder and make your own determination of what's valid in the survey, and what isn't.
September 11, 2003 * Vol. 13, No. 37
© 2003 Metro Pulse