[Continued from Part 1]
A good example of straw polling gone wrong is the 1936 opinion poll conducted by The Literary Digest. How did they predict that Roosevelt would lose the election when, in fact, he won by a landslide?
What the Literary Digest did not realize was that they had relied on voluntary responses from only their own readers, a list of registered car owners, and a list of telephone subscribers. Even though they received a huge number of responses, they accidentally biased their results toward the wealthier Americans on those lists. The majority of American voters at the time were much poorer, did not own a telephone or a car, and sided with Roosevelt.
Polling - The Scientific Way!
Around this same time, a man by the name of George Gallup correctly predicted the results of the same election using far fewer respondents – roughly 50,000 – and a much more rigorous, scientific method for surveying the public. He used something called “statistical sampling,” a process designed to give every person in the population an equal chance of being selected for the survey.
Think of it this way: if you wanted to find the average age of everyone in your town, you wouldn't just stand next to the elementary school and ask the age of every child who came out. Instead, you might knock on every 10th door, because you'd be just as likely to knock on a young family's door as on a senior citizen's.
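The door-knocking idea can be tried out in a quick simulation. The sketch below uses a made-up town of 10,000 people with random ages (all the numbers are hypothetical) and compares the schoolyard approach to taking every 10th household:

```python
import random
import statistics

random.seed(0)

# Hypothetical town: 10,000 residents with ages spread from 0 to 90.
town = [random.randint(0, 90) for _ in range(10_000)]

# Biased sample: standing outside the elementary school, you only
# ever meet children, so every respondent is under 12.
school_sample = [age for age in town if age < 12][:500]

# Systematic sample: knock on every 10th door, whoever answers.
every_tenth = town[::10]

print(f"True average age:  {statistics.mean(town):.1f}")
print(f"Schoolyard sample: {statistics.mean(school_sample):.1f}")
print(f"Every-10th-door:   {statistics.mean(every_tenth):.1f}")
```

The every-10th-door estimate lands close to the true average, while the schoolyard sample can never rise above the oldest child's age, no matter how many children you ask.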
Scientific polling has grown since the 1930s with agencies like Gallup, Rasmussen Reports, and Pew Research reporting on everything from the president's approval rating to what direction Americans think the nation is going.
Is This Fail-proof?
Not quite. Even with better sampling, there are still plenty of opportunities for polling errors to occur. Think about the way a question is worded. If you ask one person “What do you think about the President's handling of the economy?” versus “What do you think about the President's handling of the economic crisis?” you could get two very different answers.
Another area of concern is non-response bias. People who don't like to answer calls from strangers never have their opinions heard through an opinion poll. And with most polls being conducted in English, the growing Spanish-speaking population is under-represented in the numbers. To account for low representation, pollsters use weighting - for example, if a group makes up half the share of the sample that it does of the real population, each of that group's responses is counted twice in the final outcome.
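Weighting is simple enough to show with a toy example. The sketch below uses invented numbers: "group_b" makes up half the share of the sample that it should, so each of its answers counts double:

```python
# Hypothetical poll responses as (group, answer) pairs.
# group_b is under-sampled: it should be the same size as group_a,
# but only 50 of its members answered the survey.
responses = (
    [("group_a", "yes")] * 100
    + [("group_a", "no")] * 100
    + [("group_b", "yes")] * 50
)

# Weights chosen so each group's share of the weighted sample
# matches its share of the population (group_b counts double).
weights = {"group_a": 1.0, "group_b": 2.0}

yes_weight = sum(weights[g] for g, a in responses if a == "yes")
total_weight = sum(weights[g] for g, _ in responses)

unweighted = sum(a == "yes" for _, a in responses) / len(responses)
print(f"Unweighted 'yes' share: {unweighted:.0%}")          # 60%
print(f"Weighted 'yes' share:   {yes_weight / total_weight:.0%}")  # 67%
```

Because every member of the under-sampled group happened to answer "yes," weighting them back up to their true population share moves the headline number from 60% to about 67%.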
So, as you can see, polling these days is as much an art as a science. It is entirely possible that people who don't have time to take phone surveys have a different political slant than those who do.
Today, both straw and scientific polling weave their way through the 24-hour news cycle. The best bet is to do your homework and see who is behind each poll before you believe it.