Survey Guidelines, Part 1: Survey Design

(Originally posted: Friday, December 13th, 2013)

When it comes to surveys, colleges and universities have many options. The Association for Institutional Research lists 251 instruments and surveys. Surveys are appealing because researchers can reach large audiences and, with new advances in technology, can easily communicate with potential respondents. Electronic surveys also lend themselves to easier data analysis – usually, the data is fed directly into a spreadsheet or statistical software program.

In spite of, or maybe because of, their increasing prevalence, survey response rates are declining nationally. According to the Pew Research Center, telephone survey response rates dropped from 36% to 9% between 1997 and 2012. The reasons for the decline are many and complex. Given the combination of an increasing need for survey data and declining response rates, the quality of survey administration and design is more important than ever.

In Why Students Don’t Like School, author Daniel Willingham explains that people don’t like school because the learning process is slow, effortful, and uncertain. The popular Faster is Better commercials, which take place in an actual school, demonstrate this. While pretty funny, the cynical takeaway is that most people prefer immediate results, don’t really want to think too hard about things, and crave certainty.

In my experience, this is how most people approach the survey design process. There is no way around it – careful evaluation and survey design are time consuming, take a lot of work, and you can’t impose your will on the results. But the extra time and energy put into survey design are worth it – the costs of cleaning up a poorly designed survey are much higher. If you are not willing to put the time, effort, and thought into survey design, then perhaps surveys are not for you. Instead, you should probably look into other evaluation techniques, like polls, secondary data sets, policy research, or analyses of already existing data.

Below are some guidelines for conducting surveys. Note that these refer to surveys in assessment and evaluation, not surveys conducted for empirical research. There are differences between the two approaches, some of which will be discussed below.

Guideline 1 – Before conducting any survey, the first question you should ask yourself is: “Do I need to do a survey to get this information?”

People are naturally curious. Unfortunately, many attempt to satisfy that curiosity by immediately designing and implementing a survey, without any thought about research purpose, goals, questions, or other considerations. If you are interested in things like how much financial aid students receive, student demographics, or student satisfaction, there is no need to do a survey – that information almost always exists already in various places. Institutional research, registration, and assessment offices are the first places to look. By using a survey to ask for information that already exists, you are wasting staff and student time and resources. Even worse, the time spent on a bad survey is time taken away from a good survey that asks legitimate questions focused on information that can only be discovered through a survey. A survey should be used only when it is the only possible way to get the information you are looking for.

The nature of the research design also affects the decision to do a survey. If you are not sure what you may discover, or have only a vague idea of what the results might be, then a focus group or interviews may be more appropriate. Focus groups and interviews are also great for helping you design future survey questions.

Guideline 2 – Satisfying curiosity is not a legitimate reason for doing a survey or for asking a survey question.

If you can’t act on or use survey results, you are wasting your time. And others’ time. Even worse is inserting questions that have nothing to do with the survey’s topic in order to satisfy one’s own curiosity. If you are surveying students about facilities, you should not insert a question asking students to indicate their satisfaction with the registration process or their major, particularly when this information already exists. If you are really curious about an issue that has absolutely nothing to do with your survey’s topic, it is much easier, and places less burden on your survey population, to simply call the staff member in charge of the program.

In the context of practical program evaluation and assessment, if you can’t act on the results of a survey question, there is no point in asking it. Of course, in empirical research, curiosity is the main driver of survey questions and is certainly legitimate. Lots of great research has come out of exploring a topic with no end goal in mind. (As an undergraduate, I read a really good book on this topic by Royston Roberts titled Serendipity: Accidental Discoveries in Science.)

This is not to imply that evaluations must always have goals or that surveys must always have an exclusively practical end in mind. There is even an evaluation method called Goal-Free Evaluation. In environmental scanning, for example, it is perfectly acceptable to conduct an open-ended survey with no particular practical purpose in mind. In general, however, evaluation is oriented toward practical concerns and use.

Guideline 3 – Always pilot.

While a graduate student working in financial aid, I once designed a survey that asked students about their expenses. I didn’t think piloting was necessary, but was strongly encouraged to do so by the assessment coordinator. During the piloting phase, I discovered there were many things I had missed. For example, one student pointed out that parking is a significant expense for many students (particularly the tickets). It never would have occurred to me to ask about that, but I was glad I did.

For some bizarre reason, most people do not like piloting surveys. I don’t know if it’s the time or extra work involved. Or maybe there is a fear of receiving negative feedback about a survey design? Or maybe it forces people to think too hard (see the third paragraph)? But if you think piloting is a waste of time or want to avoid negative feedback, imagine the time you would have wasted designing and implementing the survey, only to have the results count for nothing because you missed something. If you don’t think you have enough time to pilot, or simply don’t want to do it, then you shouldn’t do the survey.

Guideline 4 – Consult with subject-matter experts.

After working over 15 years in both assessment and advancement, I have observed that assessment and evaluation is one of those areas in which most people think they have little expertise. The opposite, I have found, is true of marketing, communications, branding, and public relations – for some reason, most people think they are experts in marketing and communications.

There is one exception to the evaluation expertise rule – most people think they are survey experts. In reality, most aren’t.

Don’t confuse expertise in your subject area with expertise in evaluation design that just happens to be focused on your area. Do you go to your doctor for nutrition or diet advice? Of course not. You go to a nutritionist or dietitian. In A Field Guide to Lies, neuroscientist Daniel Levitin writes about the mistake jurors make when they assume that an M.D. degree implies expertise in statistics – it doesn’t.

An MBA or an M.D. degree does not make you an expert in surveys (in my experience, at least, marketing people are much better at surveys than business people or doctors). You should run your survey by an expert who works in assessment or evaluation – in other words, someone who knows what they’re talking about. Alternatively, you can become an expert yourself by reading books or attending seminars or classes in survey design. The best books I have read are Asking Questions, Mail and Internet Surveys, and Designing and Conducting Survey Research.

Don’t take feedback or criticism personally. People put a lot of time and energy into surveys, and a reviewer’s comments can be misconstrued as an evaluation of the program itself. A responsible evaluator will focus on the design and structure of the survey, not its content. When an evaluator asks questions, they are not questioning your program or your work – it is important to keep that distinction in mind.

Guideline 5 – Technical aptitude does not equate to subject-matter expertise in survey design.

Most surveys today are driven by technology. However, that does not mean that an IT person is an expert at surveys.

I recently worked on the analysis of a survey that appeared to have been designed by a subject-matter expert in computer technology. While I couldn’t comment on the content of the survey, it seemed that little thought had been put into analysis – the answers weren’t coded, and the question numbers were assigned randomly by a program’s output rather than by any kind of analysis logic. While this may have made sense to the person who designed the survey, it made analysis very problematic.
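
To make the problem concrete, here is a minimal sketch in Python with pandas of the recoding work an analyst inherits when a survey export arrives with uncoded answers under machine-generated question numbers. Every column name and answer label below is hypothetical.

    import pandas as pd

    # Raw export: machine-generated question IDs and uncoded text answers.
    # All names and labels here are hypothetical.
    raw = pd.DataFrame({
        "Q_4471": ["Very satisfied", "Dissatisfied", "Satisfied"],
        "Q_0193": ["Yes", "No", "Yes"],
    })

    # Step 1: rename the machine-generated IDs to analysis-friendly names.
    coded = raw.rename(columns={
        "Q_4471": "facilities_satisfaction",
        "Q_0193": "used_study_rooms",
    })

    # Step 2: code the answer labels as numbers so they can be summarized.
    likert = {"Very dissatisfied": 1, "Dissatisfied": 2,
              "Satisfied": 3, "Very satisfied": 4}
    coded["facilities_satisfaction"] = coded["facilities_satisfaction"].map(likert)
    coded["used_study_rooms"] = coded["used_study_rooms"].map({"No": 0, "Yes": 1})

    print(coded.mean())  # e.g., average satisfaction across respondents

If the answers are coded and the questions named during design, rather than after export, this entire cleanup step disappears.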

There is a lot of debate about the role of technology in strategy and planning. Some think technology should be a strategic partner and even lead business process design. Others think it should follow. There is no debate in the areas of evaluation and assessment – technology should follow. Aptitude in technology does not equate with skill in evaluation design, statistical analysis, question design, answer structure, branding and marketing, and grammar – all essential elements in survey design. These are different skill sets and, unless one is trained in the specific language of surveys, the skills do not easily transfer.

As a final thought, there are lots of great consultants and technical writers, but being a consultant or technical writer – in and of itself – does not confer expert status in survey design. Unless, of course, the individual has invested time in training or education in the field.

Guideline 6 – Brand the survey.

Branding the survey includes things like adding the program or institutional logo or incorporating design elements that are consistent with organizational branding standards. Branding a survey adds legitimacy. If it is not obvious who is conducting a survey, or whether the survey is officially sanctioned, people are naturally less likely to take it. Personally, I would always run this by the marketing experts at your organization.

This may not always be possible or even necessary. If you are working with a survey vendor, they may have their own template, which is fine. You can instead brand the invitation email or include the name of a well-known person in the signature line.

Guideline 7 – If someone is already doing a similar or nearly identical survey, don’t do the same survey again.

People are more likely to use evaluation results if they have control over the design and administration of the actual evaluation. This approach becomes a problem, however, in large organizations where multiple groups have similar, but slightly different, program evaluation needs.

An example might be a graduate survey. A university’s alumni affairs, assessment, institutional research, and career center offices, as well as its graduate college, all have legitimate reasons for surveying alumni. If they all work in isolation (or, even worse, don’t bother to check whether other surveys are currently being conducted), response rates will inevitably suffer. Students get confused, thinking they have already completed a survey, which contributes to survey fatigue. Internal and external audiences get confused, not knowing which results to reference. There may be legitimate reasons for individual programs and departments to conduct their own surveys. If this is the case, they should work together, share results, and stay in regular contact.

Guideline 8 – Pay close attention to the invitation and introduction.

You should generally include:

  1. Who is doing the survey. Legitimacy increases trust, and thus the validity of your results.
  2. Why you are doing the survey. Survey participation depends on social exchange: a respondent gives you their time (a cost), and in return you give them the satisfaction of knowing they are contributing to the improvement of a program (a benefit). If it is not clear why the survey is being conducted, the costs increase and the benefits decrease.
  3. How the results will be used.
  4. Promise of confidentiality and/or anonymity. You may not always be able to promise this. Follow the rules of your IRB (see below).

Guideline 9 – Check and see if you need to go through your institutional review board (IRB).

Many people think the IRB process is time consuming, and it is. But we have found it to be an extremely valuable tool for ensuring survey quality, because it forces you to outline, in detail, all of the design elements of your survey. Even more importantly, the IRB process identifies elements of your survey that could potentially harm respondents.

Guideline 10 – Pay attention to motivation and incentives.

New research shows that people are more likely to take a survey if the reward is immediate (like including a dollar bill with the invitation letter) rather than far off in the future. The research on incentives is generally mixed, but we don’t think they can hurt. In Mail and Internet Surveys, the authors provide some helpful suggestions for increasing respondent motivation:

  • Show appreciation and thank respondents.
  • Support group values.
  • Make the survey interesting.
  • Make the survey as short as possible, or appear to be short.
  • Avoid making respondents feel subordinate to the research; make them feel in control.
  • Avoid inconvenience.
  • Minimize personal information.
  • Make the branding consistent with institutional branding.

Guideline 11 – Develop a timeline or checklist.

The wonderful book The Checklist Manifesto includes a compelling story about how hospitals witnessed significant reductions in patient infection rates by implementing checklists. Why? Because doctors and nurses were forgetting to do things like clean out IV tubes. For many, forgetting to do simple tasks was the result of busy work environments. For others, it was hubris. Bizarrely, they felt their professional credibility was being questioned by having to use a checklist.

While survey design is not as serious a topic, every survey process should have a timeline, checklist, calendar, or some other kind of structure. Surveys have lots of moving parts, and it is easy to forget one.
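
For those who like their structure in code form, here is a minimal sketch in Python of a survey timeline expressed as a checklist that flags overdue tasks. The milestones and dates are hypothetical.

    from datetime import date

    # Hypothetical survey milestones: (task, due date, completed?)
    checklist = [
        ("Draft research questions",            date(2013, 9, 2),  True),
        ("Check for existing data and surveys", date(2013, 9, 9),  True),
        ("Pilot the survey",                    date(2013, 9, 23), False),
        ("Submit to the IRB",                   date(2013, 10, 7), False),
        ("Send the invitation email",           date(2013, 11, 4), False),
        ("Close the survey and begin analysis", date(2013, 12, 2), False),
    ]

    today = date(2013, 10, 1)  # stand-in for date.today()
    for task, due, done in checklist:
        status = "done" if done else ("OVERDUE" if due < today else "pending")
        print(f"{due}  {status:8}  {task}")

A paper calendar works just as well; the point is that the structure exists somewhere other than your memory.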
