Assessment Planning and Decision-making: The Problems with Assessment Frameworks

Nearly everyone has an assessment framework that represents how assessment does or should work. I haven’t viewed all of them, but I’ve seen a lot. And most of them look the same.

In the book Reason & Rigor: How Conceptual Frameworks Guide Research, the authors explain the benefits of using frameworks to guide research. Some of the positives of frameworks include:

  • Serving as a guide or map.
  • Capitalizing on the collective expertise of subject-matter experts.
  • Articulating the links between steps in a plan or research study.

Frameworks in assessment serve the same purposes and are helpful in planning curricular and co-curricular programs and activities.

Frameworks, however, can be limiting (see Mintzberg’s The Rise and Fall of Strategic Planning). There are several reasons why adhering to a strict, formalized assessment model, with little to no deviation or room for serendipity or exploration, can lead to problems.

  1. First, assessment frameworks, by themselves, ignore all of the variables that influence decision-making. Most, if not all, assessment frameworks assume that actions and decisions occur in isolation from other factors, and that the only variable influencing decision-making is the analysis and interpretation of data. Here is an example of what most of them look like:


What if all of the factors that influence decision-making were actually included in this model? It might look something like the model below:


In Misbehaving: The Making of Behavioral Economics (2015), Thaler calls these “supposedly irrelevant factors” (SIFs): variables that research models leave out. Classical economists ignored SIFs for many years, assuming (incorrectly, we now know) that all people respond to economic decisions in rational ways. We now know that humans are quite capable of making irrational and often bad decisions, despite our efforts to model human behavior.

Assessment frameworks do a good job of providing a road map. They don’t, however, capture all of the bathroom breaks, family fights, random detours, gas stops, and flat tires.

No one plans a long road trip without taking these factors into consideration. Well, not “no one.” Rational, linear planners probably do. They tend to pride themselves on arriving early or on time, only to sit down and watch TV for a few hours upon arrival. 

Similarly, institutional culture, politics, staff issues, and the normal issues that arise in daily life should be considered when using an assessment framework.

  2. Most assessment frameworks are shown as a cycle. This limits decision-making to a narrow definition, ignoring the non-linear and incremental manner in which decisions are actually made. We are constantly making decisions, and they often don’t follow the linear process described in most cycles.

  3. Many planning and assessment models assume that organizations are rational. In Strategic Planning for Public and Nonprofit Organizations, Bryson notes that non-profits are only politically rational, and can only be understood from this perspective.

When most organizations articulate how they are organized, it usually looks like this:


The model above assumes orderly and rational decision-making, where everyone follows a chain of command. Communications are assumed to also follow this chain.

Anyone who works in higher education, and maybe most organizations, knows this is not how things actually work. People communicate with individuals at different levels and in different departments all the time. Additionally, universities are open systems. State politicians, the press, donors, and even random people who seem to just wander on the periphery will exert influence over organizational plans and activities. With that in mind, a different perspective on organizational structure might look like this:


When working with assessment models and frameworks, it is important to acknowledge the influence of other factors in decision-making and organizational dynamics that may influence the use and interpretation of assessment evidence. In a chapter of Using Evidence of Student Learning to Improve Higher Education, the authors note:

…the relationship between evidence and action is not always neat, rational, or linear. Moreover, the fact that evidence meets the highest possible psychometric standards may have no bearing on its effectiveness in prompting action (Hutchings, Kinzie, & Kuh, p. 41).

This does not mean that frameworks and models should not be used. In fact, they are very helpful for planning and for showing the links between parts of assessment and evaluation plans. There are several ways to manage the tension between best practice and responsibility to the field of assessment, on the one hand, and the messy way in which public non-profits are organized and make decisions, on the other.

In Assessment Reconsidered, authors Keeling, Wall, Underhile, and Dungy recommend distinguishing between formal assessment and informal assessment practices:

Formal assessment practice includes conceptualizing, planning, implementing, and evaluating the impact, or outcomes, of a purposeful, intentional learning event on a set of learners. Informal assessment is the experience that an individual or individuals have when they experience an event in which learning occurs…whether or not that event was intentionally developed or designed (p. 10).

The key in the informal situation, as the authors describe it, is to develop methods that “ascribe meaning to that event.” Methods like observation, informal interviews, or quick polls and surveys are good for capturing these moments. Even staff debriefing and documenting observations can be helpful in these situations.

It is also important to ensure that multiple viewpoints are taken into consideration in assessment and evaluation. Many, if not most, decisions do not occur through rational, formal processes and structures. Decisions are often made incrementally over time. Assessment data travels through many different people and groups, all of whom attach their own interpretation and meaning to the information (more on this topic is in M. Patton, Utilization-focused Evaluation, 1978). (Developing shared meaning about assessment data is much easier at the program level. The variety of interpretations increases at the institutional level.) Sometimes, it’s not obvious how a decision was reached.

For assessment data to be useful in this context, it should be broadly communicated, discussed, and given time to develop. The emphasis should be on creating shared meaning over time. The phrase “the reality is…” should only be used after a long investment of time and energy in creating shared meaning. (I would suggest never using that phrase in the context of assessment and evaluation; otherwise, people may think the data does not reflect their reality.) To highlight use, instructors and leaders can clarify the connections between intentions and actions through curriculum mapping or logic models, or in reports and meetings.


About rlsmith205

Bloomington-Normal, IL
This entry was posted in Culture, Methods.
