Junk Assessment and Junk Miles: Quit Worrying About Assessment

Junk Miles

On a recent long bike ride through the Illinois countryside, I was concentrating on my training regimen when my mind wandered to assessment. Not something most normal people do, but it’s my job.

Thinking about challenges associated with the use of assessment, three themes came to mind:

  1. Assessment isn’t useful. People are busy with their regular jobs, and no one likes being forced to invest precious time and energy in something they won’t use. Even worse, people hate being told to do something they are already doing anyway.
  2. Assessment methodologies are messy and unreliable. This is particularly true given declining survey response rates. A lot of people just hate taking standardized tests and online surveys. As one college student from England put it, surveys are “boring, tedious, and usually pointless. I hate surveys when they talk about stuff I don’t care about (and) when they don’t change anything.” Low response rates compound the problem. Some people take a “do the best we can with the data we have and proceed with caution” approach. A lot of people, though, feel that anything below an 80% participation rate using double-blind experimental methods should be thrown out. And then buried and covered with salt.
  3. Assessment is too rational, linear, restrictive, and goal-oriented: a pedagogical straitjacket.

At around mile 10 of my ride, it occurred to me that I needed to pick up the pace in order to avoid junk miles. Junk miles are miles that serious cyclists and expert trainers advise people to avoid because they have no specific training purpose. The idea is that one can’t improve strength and performance by wasting time on casual rides through the neighborhood.

As an industry, we do a lot of assessment. Could it be that we are engaging in junk assessment, and that a lot of it is just a waste of time, particularly if it’s not useful, the methodology is questionable, and it’s too restrictive? To answer this question, I had to examine my own experience with cycling.

My Experience with Junk Miles

I started cycling over a year ago by just getting on my old bike and going, with no purpose in mind. I really liked it, though, and wanted to get better. Over time, I made moderate changes to my diet and reduced the “junk” miles.

To get better and monitor my performance, I figured I needed data to make better decisions. Those room-temperature, half-eaten chicken nuggets my kids leave on the counter, and that I later eat over the sink, really add up. So, I needed two things: good research and good tools. This approach makes sense to me. In fact, it’s the whole principle of how assessment is supposed to work, at least according to standard assessment models: you create a goal, measure progress toward that goal, and use the results, or evidence, to make better decisions.

This was more challenging than I thought. I came across an article on a reputable news website titled 15 Deadly Food Myths. Based on this article, there is no way I should be alive. And who knew that broccoli is full of toxins and has by-products that have been shown to cause cancer in lab rats?

Actually, that’s not true. Broccoli is great for you. But this Onion-esque article shows how difficult it is to sift through all of the information out there. After all of my research on diet, this headline from the real Onion described how I felt: Eggs Good for You This Week.

Well, I thought, I’ll just simplify and focus on calories in and calories out. I bought a GPS bike computer to track miles, heart rate, calories burned, and other important information. This approach is backed by research: people who track their caloric intake and expenditure are more likely to lose weight.

Or are they? The problem is that people are terrible at measuring what they eat. We lie to ourselves and others about what we eat. A headline referencing an article from the New England Journal of Medicine informed me that counting calories “never” works. Food labels can be wrong. Even my bike computer and other fitness trackers aren’t accurate.

At this point, I had two choices. I could devote more time, energy, and resources to getting better and more “significant” data, or I could work with the best data I could get and make decisions from there.

I have two little kids. I want to enjoy life, not just measure it. Measuring micro-nutrients and other biometric data with pee-sticks and brow-rags may provide good data, but it wouldn’t be a worthwhile use of my time or money. Besides, I feel great and am getting better, even without sending samples to a company that analyzes the intracellular micro-nutrients in blood.

A New Approach

My new approach to cycling, and not worrying about junk miles, worried me a little at first, but then I read an article by Selene Yeager: Why There’s No Such Thing as Junk Miles. A laser focus on aligning each ride with a specific training goal, informed by precise (i.e., valid and reliable) data, is fine for some people. But think about what one misses out on:

  • Serendipitous discoveries, like seeing an eagle or a fox or enjoying a burger and beer with friendly locals at a small-town bar and grill you didn’t even know existed before.
  • Taking a 15-mile bike ride with your 10-year-old daughter.
  • Learning something new about the group of friends you’re cycling with. Or making new friends.

I reached a similar conclusion while giving a co-presentation about an institution-wide survey of students. The survey revealed a difference of opinion about the feedback faculty give students. We were having a great conversation about feedback, and then the inevitable question arose: “what was the response rate?” It was low, certainly outside the range where one can make generalizations about the entire population (although statistical significance does not always mean one can generalize, or even that the data mean anything). The questioner politely noted that, given the low response rate, no conclusions could be drawn from the data and the entire survey had been a waste of time.

A humanities instructor, who admittedly had no training or expertise in calculating statistical significance, questioned this wisdom. We had been having a great conversation about the topic, and it made him think differently about the feedback he gives his students. He planned to investigate the topic in more detail in his own classroom and improve his teaching from there.

Data-Driven vs. Data-Informed Decision Making

So, was the assessment a waste of time, or junk assessment?

From one perspective, then, the survey was a form of junk assessment. Even so, at least one instructor planned on improving his classroom because of it. The issue is thinking that assessment and evaluation are just about methodology and statistical significance. They aren’t. The hallmarks of good assessment are sharing, dialog, and use. In fact, the conversation and dialog are just as important as, if not more important than, the methodological precision of the assessment.

The term ‘data-driven decision making’ has little place in the world of assessment and evaluation, in my opinion. People drive decisions, not data. This is especially true at the program level, where people make sense of and use data in terms of how it relates to them and their program’s context, not some objective truth that can be tested.

This isn’t to suggest that methodology and data quality aren’t important. In every study, one should strive for representative samples and pay close attention to detail. In the uncontrolled settings in which the vast majority of assessment and evaluation occurs, however, that is difficult in practice. One should never advocate for substantial organizational, curricular, or budgetary changes based on one question from a survey with a 10% response rate; that would be irresponsible. But there is no reason we can’t at least talk about the data and results, discuss ideas for looking at the problem in more detail, and follow up with more rigorous methods.
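To make that concrete, here is a minimal back-of-the-envelope sketch in Python (mine, with made-up numbers, not from any survey described above) of why a low response rate makes campus-wide generalizations shaky: even before worrying about sampling error, the non-respondents could, in the extreme, all agree or all disagree with the respondents, so the true figure could land almost anywhere.

    # Hypothetical numbers: 5,000 students, a 10% response rate, and 60% of
    # respondents saying faculty feedback is timely.
    def worst_case_bounds(population, respondents, agree_rate):
        """Bounds on the true population agreement rate if the non-respondents
        could, in the extreme, all agree or all disagree."""
        agreed = respondents * agree_rate
        non_respondents = population - respondents
        low = agreed / population                       # no non-respondent agrees
        high = (agreed + non_respondents) / population  # every non-respondent agrees
        return low, high

    low, high = worst_case_bounds(population=5_000, respondents=500, agree_rate=0.60)
    print("Observed among respondents: 60%")
    print(f"True campus-wide figure could be anywhere from {low:.0%} to {high:.0%}")

Run as-is, this prints a range from 6% to 96%, which is exactly why a single number from such a survey shouldn’t drive big decisions, and why the conversation and the follow-up research it prompts matter more.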

In Evaluation Debates, Carol Weiss says it best:

Evaluation can do more than just legitimate something people already knew. It can also help to clarify and crystalize it and express sort of vague, inchoate feelings that people have and don’t really understand. Once evaluation does that, it really can be helpful (p. 145).

Here are a few strategies to make assessment and evaluation practical and meaningful:

  • Capitalize on what is already occurring. Programs do a lot of assessment, and it might be helpful to inventory what already exists.
  • On the other hand, it’s okay to get rid of unproductive assessments. If an assessment is focused on something the program doesn’t care about, stop doing it.
  • Don’t think that everything has to be assessed and evaluated. Pick 3-6 things that you care about, and focus on those. Focusing on one or two goals, themes, learning outcomes, or whatever you care about per year can contribute to great conversations over time.
  • If participation or response rates are low, don’t make decisions right away. Focus on the conversation and dialog, and see if further research is warranted.
  • Focus on where results will be discussed, as opposed to when. Create places where people can talk about and reflect on assessment evidence.
  • Foster a healthy organizational culture. Assessment works best in places where people trust each other, governance and leadership are stable (people don’t give up on the institution or each other), and people can engage in honest conversations.
  • Embrace the idea of serendipity in assessment. All assessment plans will contain some kind of outcome or goal, but it’s okay to incorporate the exploration of issues and ideas in evaluation plans.
  • Be open to different ideas about what it means to use assessment data. The assessment literature can be pretty limited in how it defines using assessment evidence; most of it advocates ‘closing the loop.’ In reality, the links between evidence and decision-making are rarely that direct and immediate, and these definitions ignore the incremental, evolutionary way in which program decisions are actually made. (Michael Patton discusses this idea at length in Utilization-Focused Evaluation.)

One Type of Junk Assessment

If there is such a thing as junk assessment, it may be compliance-driven, externally mandated, summative assessment. In So What Do They Really Know?, Cristina Tovani writes that summative assessment is like an autopsy. Autopsies provide valuable data for doctors, nurses, health care policy makers, hospital administrators, and medical researchers, but they do very little for the patient. Similarly, summative and compliance-driven assessment data may do very little for instructors and people at the program level, but they should inform policy makers and upper-level administrators. Still, these processes provide opportunities for reflection and dialog, and another chance to learn.
