Creating a good survey, one that gives you robust results, takes skill. In a former life I worked for a data analytics company where a team created surveys for consumers, and that's where I gained an appreciation of the skill involved. I have since worked with online surveys. Here are some aspects of survey design to consider.
Imagine you want to know whether Dutch people prefer dark or milk chocolate. The population of the Netherlands is 16.8 million. How many of them do you need to ask?
It turns out, not that many. If I collected data from 1067 people I could be 95% sure that my answer was correct with a margin of error of 3%. That means that if 70% choose milk chocolate, the true figure in the general population will lie between 67% and 73%. So if you're a chocolate manufacturer you now know to make most of your flavours based on milk chocolate.
You can be more sure of the answer the further the outcome is from 50%. For the chocolate maker an answer of 47-53% would still be useful, but it’s problematic if you’re predicting political outcomes.
Once upon a time I knew the maths behind these calculations; now I just use an online calculator.
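For the curious, the maths the calculators use can be sketched in a few lines. This is a minimal illustration assuming the standard formula for a proportion at 95% confidence (z = 1.96) and the most conservative proportion, p = 0.5; with a population of 16.8 million, the finite-population correction is negligible and is left out.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion of size n."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size(moe, p=0.5, z=1.96):
    """Smallest sample size giving at most the requested margin of error."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

# 1067 respondents give roughly a 3% margin of error at 95% confidence
print(round(margin_of_error(1067), 4))  # → 0.03
print(sample_size(0.03))                # → 1068
```

Note the formula rounds up to 1068 rather than 1067; calculators differ slightly in how they round, but the answer is the same for practical purposes.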
Your sample should reflect your target population as much as possible. This may involve excluding some people from participating – if you are researching hair care products you don't need bald men in your sample. For wider issues it is more likely that you will try to construct a sample that mirrors the total population in terms of gender, race, age, income, family status, religion, location, gender identity and sexuality. That's not easy. The further your sample is from your target group, the less reliable the outcome of your survey.
Your method of collecting data may introduce bias. If you are collecting data by calling domestic numbers during working hours you exclude working people. If you collect data online you exclude those not on the Internet, and limit respondents to the small group that finds your website.
If you are collecting data online you need to control for bots, and you may want to limit the number of times a respondent can answer.
To get useful data from your survey you need to construct your questions to be neutral, unambiguous, not leading and specific.
“Do you smoke cigarettes?” is neutral.
“Are you a filthy smoker?” is not.
It should be clear what information you are seeking in your question; there are two traps to avoid here.
- Asking two things in one question
“How friendly and helpful was your customer agent today?” asks two things, and it's impossible to decide how to answer if your customer agent solved the problem but was grumpy on the phone with you. You need to split this into two questions.
- Using negatives
“Do you disagree that raising taxes won’t create jobs?” is confusing. Rewrite this to ask “Do you agree that…?” to simplify it.
Avoid Leading Questions
Leading questions contain details that indicate the expected answer.
“When will you start offering free upgrades?” assumes that you will offer free upgrades.
You will get more accurate and useful data if you ask specifics.
“Do you eat chocolate regularly?” doesn’t tell you much, since ‘regularly’ means different things to different people. It's much better to ask “How often do you eat chocolate?” and give people a series of ranges to choose from.
What led to this post? A friend posted a strange survey from the President of the United States that breaks every single one of these rules, and a few others.
Here’s the title page of the survey. Given that it was sent out after the press conference where the press was repeatedly called “fake news”, the title is clearly priming you to doubt the accountability of the media.
All of which is to say that when the Presidential PR machine talks about having data showing how people don’t trust mainstream media, remember: the data collection is flawed and the results cannot be trusted.