I made it to the end of my junior year of college without picking a major. It never really occurred to me to pick one. But I did know what I liked to do – drawing, writing, reading, baseball statistics, history, and a few thousand other things.
I had one last thing to do before I went home for the summer: register for fall classes. The registration office, however, wouldn’t let me. An office worker told me there was a hold on my registration. There was a rule that all seniors must declare a major. He gave me a list that looked a lot like this:
Honestly, declaring a major never occurred to me. Reading down the list, I picked the first one I liked: Art History & Archaeology. And that’s how I picked my major – mostly because it begins with the letter “A.”
Programs Don’t Really Need Learning Outcomes

Programs are kind of like people. Some programs focus with laser precision on what they want students to learn. They have valid and reliable instruments that tell them everything they need to know.
Other programs kind of muddle along, figuring things out as they go. They might have learning goals, but can’t remember who created them or why – there’s no name or date on the paper. Maybe it was the chair who retired three years ago? The original digital document is long gone, so the learning outcomes exist on a sheet with crooked margins that’s been photocopied a hundred times.
Some programs even intentionally muddle along – they have little structure or intention by design. When I read Lynda Barry’s Syllabus: Notes from an Accidental Professor, I couldn’t help but think how a tightly-structured assessment plan could only get in the way of how she teaches. I really liked the idea of teaching as a process of uncovering what skills and knowledge students already have and building on those. (1)
The thing about programs that muddle along or take a serendipitous approach to learning is that they’ve been doing it a long time, maybe for decades. And they’re still around, engaging and graduating students. They may understand, recognize, and even appreciate the value of learning outcomes, but they’ve been doing fine without them.
So, you really don’t need student learning outcomes. A lot of programs are functioning just fine without them (2). A lot of people are, too.
Learning Outcomes are Important
But just because a program doesn’t need learning outcomes doesn’t mean it shouldn’t have learning outcomes. I think it’s a good idea for three reasons:
First, it’s good pedagogy. Here’s an edited passage from Popham’s book, Transformative Assessment (pp. 50-51):
Jill has designed a one-month instructional unit to promote students’ mastery of a high-level cognitive skill. Jill will undertake the following activities:
- Fully clarify for students the skill they are to master by the time they achieve the unit’s target curricular aim.
- Motivate students to achieve the aim by showing how the skill will be potentially beneficial.
- Supply instruction.
- Model the use of the skill.
- Give students ample guided practice as well as time for independent practice.
This is a well-organized class. One can say with confidence that students will learn in it, whether or not any assessment exists. Note, though, that the skills are fully clarified for students, which comes close to articulating learning outcomes.
However, without some kind of formative or summative assessment of those clarified skills, how will she know what to modify or improve? In my experience, teaching and program improvement are continual processes of tweaking and change; it’s rare for no changes to be made, year after year. Some kind of formative assessment of those skills would go a long way toward providing meaningful feedback and improving the course.
The second reason is that outcomes tell your program’s story. In What the Best College Teachers Do, Bain writes that professors hold two responsibilities (p. 58):
- Help students learn.
- Tell society how much learning has taken place.
Having real learning outcomes is a good idea because it communicates your program’s story. Telling a program’s story can go a long way in educating decision-makers, responding to public or future-student inquiries, and demonstrating impact.
There’s a third reason, but if you’re engaged with the first two, you shouldn’t have to worry about it: accreditation and accountability. One caveat, though: it depends on the accreditation agency and on who you get as a peer reviewer.
I once heard an accreditation peer reviewer tell a group drawn from diverse disciplines that at least 70% of their program-level data should be benchmarked. This isn’t to suggest that all peer reviewers feel this way, but in my experience there isn’t much variation among reviewers’ perspectives on assessment.
Peer reviewers who are methodology-driven, as opposed to utility-driven, are often just told what they want to hear. That’s because peer reviewers write reports, and those reports are reviewed by people with a lot of responsibility and authority.
Does the risk of being genuine outweigh the benefits of homogenized, compliance-driven assessment? I don’t know. But peer reviewers should put more stock in genuineness than in methodological purity. They should intentionally look for mistakes, lessons learned, challenges, creativity, and genuine ideas and results, and be very suspicious of perfect assessment plans that claim 70% benchmark-able data and 90% response rates.
If your program has muddled along or taken a serendipitous approach to learning, you might want to consider starting with one learning goal and examining it for one year. You can build on it in year two, or move on to another one. After 5 or 6 years, you will have a lot of assessment information.
And, keep it simple. A good formula for engaging faculty and staff in assessment is: simplicity + user customization = engagement.
Don’t fall into the measurability trap (3). Focus first on what you find meaningful, not on whether the outcome can be quantified or measured. Would you end a program that promotes awareness of sexual assault just because its outcomes are hard to measure? Of course not. Whether an outcome can be measured should not be the sole criterion for addressing it.
Don’t fall into another trap: the feeling that everything has to be assessed. Time, energy, and resources are precious, and they should be directed toward what we find meaningful and what matters.
Finally, consider storytelling, as opposed to planning, as a way to get started.