One of the problems with assessment is a lack of clarity about what is being assessed. A lot of this has to do with language – terms like metrics, goals, learning outcomes, and performance indicators can all be confusing. Overuse of this language can exhaust people and turn them off to assessment. A really good paper by Susan Hatfield clarifies and simplifies a lot of this. The paper’s better, but here’s a summary:
1. Learning outcomes should focus on students and what they learn. In this scenario, students are the units of analysis, not the program. You can assess what individual students have learned, or what a cohort of students has learned. Sometimes, this can be a broad program goal. For example, a program may have as a goal that students will learn teamwork skills. If the program agrees on an assessment method, creates space to talk about the results, and everyone is willing to make changes based on the results, then it’s probably an acceptable program goal.
As an aside, it is the conversation and dialog about the assessment results that matters just as much as, if not more than, the process or the data produced. Addressing one learning outcome a year in a meaningful way is a much better use of your time than addressing three or four a year that you don’t care about. Data do not drive or make decisions – people do. Data should inform decision-making, but not drive it. Relying solely on data takes the human element out of decision-making and ignores contextual factors and subject-matter expertise.
Another advantage to keeping assessment simple is that it frees your unit or program to discover new things, many of which were probably unanticipated. This is serendipitous assessment.
2. Program goals refer to broader operational aims of the program. For example, a program goal may be to increase external funding through grants and fundraising. Another program goal might be to improve lab space, or to increase the hiring of faculty or staff in a specialized field. These types of goals are only indirectly related to learning and, in this context at least, should not be viewed as learning outcomes. A problem arises when a unit decides to treat all goals as learning outcomes. That is a difficult argument to make – how exactly does increased external funding directly impact learning? Even more difficult, how does one measure that? This is not to suggest that it’s impossible, or that indirect items like space, funding, or human resources don’t impact learning. Rather, it is just practically problematic to measure in the context of program assessment and evaluation.
I think of goals and learning outcomes like squares and rectangles. Every square is a rectangle, but not every rectangle is a square. Similarly, any learning outcome can be a program goal, but not every program goal can be a learning outcome.
So, what do you do when you have an annual report, accreditation report, or program review document to write? I think it’s a good idea to separate program goals from learning outcomes. This may seem like more work, but I don’t think so. Goals are things a unit pursues continuously anyway; they might be expressed in a strategic plan. If you focus on assessing just one learning outcome a year, after four or five years you will have a pretty robust and comprehensive report. Or, one could focus on program goals, and then articulate how learning outcomes support each goal.