Serendipitous Assessment

An Art History major in college, I never felt confident in science and math (although I later enjoyed learning statistics). In my senior year, I needed one more hour of science credit, and I discovered an obscure one-hour course in the biology department. To complete the hour, students had to read three books and write papers about them. This seemed like a good fit for me.

I met the biology professor and explained that I had no idea what to read. He pulled a book off the shelf at random: Serendipity: Accidental Discoveries in Science. I thought the idea of serendipity was wonderful, and I really enjoyed the book. The idea that science is not a rigid field dictated by set rules and guidelines was very new to me. Human subjectivity and accident can have a significant impact on science (Thomas Kuhn explored this idea in The Structure of Scientific Revolutions).

Is assessment, as a discipline, open to serendipity? It seems to be largely driven by learning outcomes and goals. This is not to suggest that goals or outcomes are unimportant. Goals and outcomes provide direction. They serve as symbols for what we care about. Goals communicate our aims. Too much direction, however, takes away from agility.

I think four causes led to assessment becoming a linear, goal-oriented exercise that leaves little room for agility, openness, and discovery:

  1. As a discipline, assessment’s foundational roots are in empirical research, specifically quantitative research. That tradition is driven by the scientific method and emphasizes rational, orderly approaches to inquiry.
  2. In the 1980s and 1990s, accreditation, not intellectual curiosity, became the reason for doing assessment. This created a high-stakes environment with little room for curiosity and accidental discovery.
  3. Accreditation increasingly borrowed tools and methods from quality-improvement and strategic-planning models, and assessment adopted many of these models in response to accreditation mandates. All of a sudden, TQM, CQI, benchmarking, and all kinds of management fads caught on. One of the strangest to take hold, in my opinion at least, was Six Sigma. A primary rationale behind the quality-improvement methods was minimizing variability. This seems like an odd fit for higher education, which is purposely designed and structured to be diverse on so many levels.
  4. Assessment became a strategic planning exercise and thus grew closely aligned with processes largely unrelated to learning outcomes, such as planning and budgeting. In a classic article, The Fall and Rise of Strategic Planning, Henry Mintzberg describes what can happen when leaders adopt this approach:

The problem is that planning represents a calculating style of management, not a committing style. Managers with a committing style engage people in a journey. They lead in such a way that everyone on the journey helps shape its course. As a result, enthusiasm inevitably builds along the way. Those with a calculating style fix on a destination and calculate what the group must do to get there, with no concern for the members’ preferences…. calculated strategies have no value in and of themselves… strategies take on value only as committed people infuse them with energy (Harvard Business Review, January-February 1994, p. 109).

Serendipitous assessment can work in the following ways:

  • Eliminating narrow definitions of what it means to use assessment data.
  • Respecting variability among institutions, programs, and departments.
  • Realizing that frameworks and best practices are fine, but allowing flexibility in how they are used and implemented.
  • Focusing on assessment processes and conversations, not exclusively on results.
  • Allowing for multiple methods.

Data-driven decision making doesn’t work. People, not data, make decisions. People informed by good data make better decisions. Life is a combination of what we intend and what happens along the way. Good assessment plans and processes should capture both.

August 2015 Update: In Grading Student Achievement in Higher Education, Mantz Yorke describes a form of validity associated with serendipity that seemed applicable to this post. Here is the excerpt:

What is missing from the list (of forms of validity, including predictive, concurrent, content, and construct) produced by Cronbach and Meehl – and generally from textbooks dealing with validity – is any conception of validity in terms of the capacity of the test to reveal something unexpected. In assessing students’ performances, there may be a need to go beyond the prescribed assessment framework in order to accommodate an aspect of achievement that was not built into the curriculum design. This does, of course, create problems for grading systems, especially when they are seen as measuring systems (pp. 21-22).
