(Originally posted: Friday, January 24th, 2014)
One of the most common challenges people face in assessment and planning is use. Some, and maybe many, programs go through a lot of work to gather data, talk in meetings, and sit through presentations, only to have the results sit on the shelf.
How does this happen? How do we allow this to happen? Furthermore, why do we allow it to happen? It is easy to blame assessment and planning frameworks, but ultimately these are just tools. People have the power to shape these tools and put them to use for their own purposes.
If assessment and planning exercises are found to have little value, it is usually due to one of three reasons.
1. Assuming the process is the outcome. When students participate in a leadership program or take a class, most people assume they’ve learned something or somehow changed as a result. While a reasonable assumption, it is still an assumption. The number of students who participate in a leadership program, or the mere provision of the program, by itself doesn’t really say much about the impact it had on students. This is not to suggest that the number of students who participate in a program is not meaningful; it is a measure of efficiency, and maybe effectiveness, but not impact. (These categories come from a really good introduction to evaluation, The ABC’s of Evaluation).
In a situation like this, the program’s learning goals, outcomes, and activities aren’t examined, reflected on, or even really discussed. In Utilization-Focused Evaluation (1978), Patton describes two situations where this can occur:
- Programs operating under a charity model evaluate program success and worth by the amount of faith, hope, hard work, and emotions put into the program by staff. Obviously, evaluation will not be effective because it is seen as an exercise in questioning staff intentions and faith. It is just assumed that the program is effective, regardless of what an evaluation may or may not reveal.
- In programs operating under a pork-barrel approach, program effectiveness is measured by a policy maker’s or constituent’s will. If the program is effective, it is because a policy maker says it is or because it has the strong backing of a constituent group. Evaluation really doesn’t matter in this situation, and results will either be ignored or used for political advantage.
Programs that assume the process is the outcome are in a precarious position from an accountability standpoint. Without an assessment or evaluation process of their own, it’s very easy for others to step in and judge the merit or worth of the program for them. This is not to suggest that one should engage in shadow assessment. Evaluation that produces sobering results still demonstrates accountability and attention to improvement, which is better than saying nothing at all.
In a situation where the process is the outcome, an assessment framework or cycle would only show the outcome and the activity, but nothing else:
2. A failure to make the results meaningful. The second obstacle to using assessment in decision-making is a failure to make the results meaningful, or to evaluate them. There’s a big difference between what the data are and what the data mean. Data-driven decision making is a misnomer. Data don’t make decisions – people do.
In this situation, the assessment data is actually gathered, but not used. Gathering assessment data and not using it is worse than not doing any assessment at all. If a program isn’t doing an assessment, at least time is not wasted on assessment processes that will never be used.
Assessment is a form of action research. Thus, its value lies in its utility. If there is no plan to use the assessment data, then an assessment should not be conducted. The only exception may be when an external agency – like a government agency, granting agency, or accreditation group – specifically requires the assessment. Institution-wide processes related to planning and budgeting will also require assessment for compliance reasons.
For a program like this, the evaluation of the data and the use of it are missing from the assessment framework. It might look something like this:
3. A failure to act. The third challenge to using assessment in decision-making is a failure to act. In this situation, a program has gathered the data, assessed it, and evaluated it. But, for some reason, it failed to act. The assessment cycle resembles this kind of framework:
There are several reasons for a lack of action, even when evidence exists and it has been evaluated:
- Budget and resource constraints.
- A dysfunctional and toxic culture that makes it difficult to act.
- Leadership or staff turnover.
- Exhaustion or fatigue.
- Distrust of anything that has the words ‘assessment’ or ‘evaluation’ tied to it. (I once attended an internal conference where people presented wonderful results of efforts to enhance learning and retention in classrooms and programs, none of which ever showed up in program review or assessment reports. When asked, the people in the room never associated their work with assessment and evaluation, which they viewed mainly as an administrative task aimed at compliance and, even worse, control). To be fair, we can’t blame a lot of people for thinking this way, because….
- … some administrators identify and define assessment and evaluation as a formal planning or quality-improvement exercise. Assessment and evaluation can certainly be used to inform and feed into larger planning and quality-improvement processes. But the measurement and process principles that are characteristic of quality-improvement techniques (statistical control, reduction in variation, etc.) are difficult to apply to program evaluation.
- Vague ideas about when the evaluation data will be used, or an unrealistic timeline. For example: “Results will be evaluated in mid-July.”
- Assessment is viewed as an add-on, and not integrated or embedded into the curriculum.
- Assessment viewed as an evaluation of people, not processes.
- Sometimes programs will state that communication is the reason assessment results weren’t shared. Communication is never a cause; it’s a symptom. If people don’t like each other, of course they will have problems communicating with each other.
Fortunately, programs that have evaluated their assessment data, engaged in dialog, and generated ideas about what they think is important are in a good position. The ideas should be documented in a report or other format, so that when the time comes to act, the program is ready. Such a program has an assessment framework that looks like this:
Acting on Assessment Data
A limitation of assessment frameworks is that they merely state: “use the results.” That’s easier said than done. There are so many factors that influence decision-making in colleges and universities, almost all of which aren’t captured in regular assessment cycles (see the figure on the right, below). Although, to be fair, models necessarily simplify processes, and can’t possibly capture all variables.
In Utilization-focused Evaluation (1978), Patton describes a situation where a program was struggling to come up with evaluation questions. After a lot of agonizing, debating, and indecision, Patton walked up to a board on the wall, and wrote:
“I would like to know _______ about my program.”
A program struggling with this can change the language of the assessment cycle/framework by using questions. Merely changing the language of a question or directive can have a significant impact on the answers. For example, a recent study from the Harvard Business School found that changing one word in a question can have a dramatic effect on the answer and actions of the recipient. Consider this question:
- What would you do if you won the lottery?
Most people will say things like: quit my job, buy a house, buy a new car, travel, etc. Now, think about this question:
- What could you do if you won the lottery?
With the change of just one word, most people will give dramatically different answers, like: work on ending poverty, volunteer at an animal shelter, learn a musical instrument, etc.
Similarly, an assessment model that poses questions, like the one on the left below, can give different answers and, for some programs, be more helpful. A great start, then, in making assessment meaningful, useful, and practical is to ask questions like:
“What does the assessment data mean?”
“Where do we go now?” “How do we get there?”