Using Paradata for Instrument Evaluation and Refinement (Presentation)

Published: May 30, 2015
Publisher: American Association for Public Opinion Research Annual Conference, Hollywood, FL
Associated Project

WIC Local Agency Breastfeeding Policy and Practices Inventory (WIC BPI)

Time frame: 2011-2014

Prepared for:

U.S. Department of Agriculture, Food and Nutrition Service


Amanda Reiter

Key Findings

We used paradata from a two-part web survey fielded to WIC agencies to inform recommendations for instrument refinement in future rounds of administration. These recommendations included:

  • Dropping burdensome questions or changing their format (for example, converting open-ended questions to a closed-ended format).
  • Sending detailed instructions to help with complex questions.
  • Placing high-priority topics early in the survey.
  • Encouraging participation among relevant agency staff.

Paradata are a potentially useful tool for improving questionnaire design: they can help researchers understand which questions are more or less effective, how question placement matters, and how long respondents spend on each question. For a national study of the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), we used paradata to evaluate instrument burden and the order of topic presentation, and to identify potentially problematic questions. Our innovative use of paradata informed our recommendations for refining the instrument for future administration.

We administered a two-part web survey to 90 state-level and approximately 1,800 local WIC agencies, achieving a 91% response rate to both parts of the survey. In this paper, we describe how we used paradata to analyze respondent burden, break-offs, and the order in which respondents completed survey modules. Specifically, we discuss: 1) using question- and module-level burden estimates to identify survey topics to consider revising in future rounds of administration; 2) treating the frequency of break-offs as a marker of problematic questions; 3) examining the number of login attempts as an indicator of how agency respondents shared responsibility for participation; and 4) analyzing the order in which respondents completed modules to inform how the modules are ordered in future administrations. Using paradata for questionnaire evaluation is a promising approach with the potential to reduce burden and cost and to increase data quality in future administrations.
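The first two analyses listed above (question-level burden estimates and break-off counts) can be sketched from timestamp paradata. The snippet below is a minimal illustration, not the study's actual analysis code; the record layout, respondent IDs, and timing values are all hypothetical.

```python
from statistics import median

# Hypothetical paradata records: (respondent_id, question_id, seconds_on_question,
# completed_survey). Values are illustrative, not drawn from the WIC BPI data.
paradata = [
    ("r1", "Q1", 12, True), ("r1", "Q2", 95, True), ("r1", "Q3", 30, True),
    ("r2", "Q1", 15, True), ("r2", "Q2", 110, False),   # r2 broke off at Q2
    ("r3", "Q1", 10, True), ("r3", "Q2", 80, True), ("r3", "Q3", 25, True),
]

def burden_by_question(records):
    """Median seconds spent per question -- a simple question-level burden estimate."""
    times = {}
    for _, qid, secs, _ in records:
        times.setdefault(qid, []).append(secs)
    return {qid: median(ts) for qid, ts in times.items()}

def breakoff_counts(records):
    """Count break-offs by the last question each non-completing respondent saw."""
    last_seen = {}       # respondent -> last question answered
    completed = {}       # respondent -> completion flag from their final record
    for rid, qid, _, done in records:
        last_seen[rid] = qid
        completed[rid] = done
    counts = {}
    for rid, qid in last_seen.items():
        if not completed[rid]:
            counts[qid] = counts.get(qid, 0) + 1
    return counts

burden = burden_by_question(paradata)   # e.g. a high median time flags Q2 as burdensome
breaks = breakoff_counts(paradata)      # Q2 also accumulates break-offs
```

A question such as Q2 here, with both a high median completion time and a concentration of break-offs, would be a candidate for the kinds of revisions listed above (reformatting or supplying detailed instructions).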
