This post guides you through one of the best practices for survey project management: checking test data.
You've created a survey on coffee drinking and launched it. You've received all the responses (and more!) you were hoping for. You begin to analyze the data and something doesn't look right: some questions have more responses than they should, and some have fewer. What's wrong?
You probably have something wrong with your skip logic. Skip logic conditionally routes respondents to certain questions based on their answers to earlier questions. Here's an example survey employing skip logic:
Q1. On a scale of 1-10 with 10 the highest, do you like coffee?
1 2 3 4 5 6 7 8 9 10
Q2. How often do you drink coffee?
- Never
- Less than once a week
- Once a week
- Several times each week
- Almost every day
- Every day
- Several cups a day
Q3. Please select your favorite coffee-based beverage:
- Flat White
- Long Black
- Short Black
- Nestle 3-in-1
In the survey above, it doesn't make sense to ask Q3 of respondents who selected "Never" in Q2. So, you implement skip logic that routes respondents who selected "Never" in Q2 past Q3.
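Conceptually, skip logic is just a conditional on earlier answers. Here's a minimal Python sketch of the routing (purely illustrative; this is not Qualtrics code, and the question IDs are the example's):

```python
def next_question(responses):
    """Decide which question to show next, given the answers so far.

    `responses` maps question IDs to the selected answer."""
    if "Q2" in responses:
        # Skip logic: respondents who selected "Never" skip Q3 entirely.
        if responses["Q2"] == "Never":
            return None  # end of survey
        return "Q3"
    if "Q1" in responses:
        return "Q2"
    return "Q1"

# A respondent who never drinks coffee is routed past Q3:
print(next_question({"Q1": 3, "Q2": "Never"}))      # None
print(next_question({"Q1": 8, "Q2": "Every day"}))  # Q3
```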
Imagine implementing this same sort of process throughout a 40-, 50-, or 100-question survey. The logic gets complicated quickly, and you want to be sure you've implemented all of it in Qualtrics correctly.
How can you know you've got the skip logic right?
There are two steps to making sure your data turns out clean:
- Use survey test data
- Check the frequency tables
Create Test Data
You should use the Generate Test Responses feature in Qualtrics' Research Suite. With Generate Test Responses, artificial respondents "take" the survey and follow the possible logic paths. The best practice is to generate between 50 and 100 test responses.
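Under the hood, test generation amounts to walking the survey with random answers while honoring the skip logic. A minimal Python sketch of the idea (purely illustrative; this is not how Qualtrics implements the feature):

```python
import random

Q2_OPTIONS = ["Never", "Less than once a week", "Once a week",
              "Several times each week", "Almost every day",
              "Every day", "Several cups a day"]
Q3_OPTIONS = ["Flat White", "Long Black", "Short Black", "Nestle 3-in-1"]

def generate_test_response():
    """Simulate one artificial respondent, honoring the skip logic."""
    response = {
        "Q1": random.randint(1, 10),     # 1-10 liking scale
        "Q2": random.choice(Q2_OPTIONS),
    }
    # Skip logic: Q3 is only asked if the respondent drinks coffee at all.
    if response["Q2"] != "Never":
        response["Q3"] = random.choice(Q3_OPTIONS)
    return response

# Generate 50 fake responses, the low end of the 50-100 best practice.
test_data = [generate_test_response() for _ in range(50)]
```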
Check the Frequency Tables
Then, you should use the frequency tables available in the reporting tool. Click Results at the top of the screen in Qualtrics. You can export the report to a JPG, CSV document, or PDF for easy viewing. After you export the report, you can compare the expected frequencies with the actual frequencies.
To continue building on the coffee example, use the counts below as your frequency check. For this example, you generated 40 test responses using Generate Test Responses.
Using the survey above, you know respondents who select "Never" in Q2 should skip Q3. Suppose 14 of your 40 test respondents selected "Never"; you should then expect 26 responses to Q3 (40-14=26).

Looks right so far! Q1 and Q2 each have 40 responses, because every respondent is directed to them, and Q3 has the expected 26.

The counts match, so it looks like your survey logic is working. You can now launch your survey. If the frequencies for any question had been off, you could have easily adjusted your skip logic in the Research Suite.
Elon Musk wouldn’t launch a rocket without testing it first. You shouldn’t launch a survey to your valuable respondents without testing it first. Qualtrics' Research Suite makes it easy for you to do this critical step in project management. By checking your frequency tables with test data, you can rest assured knowing that your survey results won't be biased by problems with your skip logic.