(Originally posted: Friday, December 13th, 2013)
Surveys, like all evaluations, take place in a social and political context. Here are some guidelines for navigating these dynamics in the context of surveys:
1. Always include the people in charge of a program in the evaluation process.
Never ask survey questions about a unit, program, department, or organization over which you have no authority or responsibility. If you need information relating to another unit, include that unit in the process. If you are responsible for a broad survey that asks questions about multiple areas, let the relevant people know, preferably unit by unit.
2. Always include people with responsibility in an area in all stages of the evaluation, from design to the communication of results.
I learned this lesson years ago when I gave a presentation to a board. The subject was results from an institution-wide survey that included questions about specific units. Everyone knew about the administration of the survey and received a report on the findings. I thought this was enough, and I presented findings about financial aid to the board without letting the dean of enrollment management know beforehand. Several board members had questions that I could not answer and for which the dean was not prepared. At a minimum I should have let the dean know what I was doing, and ideally I should have included the dean in the discussion. Everything was fine after an apology on my part, and I learned my lesson.
3. Protect and learn from the process.
In my experience, evaluation and planning projects are rarely questioned because of their results. Rather, the most common concerns are about process: whether students were involved, whether faculty were consulted, or whether there was appropriate outreach. I am puzzled about why so many people skip this part. Hubris? Laziness? Whatever the reason, questions about process can kill the legitimacy of an evaluation project and the use of its results, even if the results are good. In my experience, there is a lot to learn from engaging groups and asking for their ideas. Every project I have worked on has been vastly improved by consulting with faculty, students, and administrative colleagues. It is well worth the extra time and effort.
4. Surveys need to be clear about their purpose.
A colleague and old friend of mine from institutional research once stated that "we won't fulfill data requests if the purpose is to prove somebody wrong." Surveys addressing program improvement should generally not be tied to personnel evaluation, although sometimes people are open about using them for that purpose. Unless explicitly designed for that reason, surveys should focus on problems, not people.