An Insider’s Take on Assessment Reaction

Every few months, someone writes an article exposing assessment for what it supposedly is: a waste of time and a destructive force. The latest installment comes from the Chronicle of Higher Education: “An Insider’s Take on Assessment: It May Be Worse Than You Think.”

Any discipline should welcome critical thought and reflection. This article definitely provides material for interesting discussion. I have a reaction to three points made in the article:

  1. Assessment as a form of research.
  2. Assessment as a form of control.
  3. Assessment as a conspiracy.

Assessment as a Form of Research

The author quotes an article from Intersection stating “The whole assessment process would fall apart if we had to test for reliability and validity and carefully model interactions before making conclusions about cause and effect.”

Assessment isn’t the only discipline dealing with this. As the replication crisis shows, a lot of empirical research struggles with issues of reliability and validity. Assessment may be no worse than a lot of the social science research in peer-reviewed journals or conference presentations.

In Debates on Evaluation, Mike Patton provides an example that explains the tension between validity and utility. Most agriculture faculty prefer to conduct research in controlled settings because controlled settings enhance validity and reliability.

Most of the time, this is good. Scientific methods and prescribed measurement principles have contributed to advancements in health care, psychology, and how we understand the world. On the other hand, advancements have also been made by accident, by serendipity, or by deliberately ignoring the rules of empirical, “gold standard” research.

This isn’t anti-science or anti-intellectual. It’s pragmatic. Is anyone really going to tell an art or music professor with 20 years of experience in their discipline that their assessment of learning is invalid or unreliable because it lacks the “gold standards” of empirical research design? Good luck with that.

As the son-in-law and grandson of farmers, however, I know that any farmer will tell you that controlled settings are almost always impractical and often impossible in the field. Based on the logic presented in the Chronicle article, my father-in-law – and most farmers – would be wise to ignore empirical agriculture research conducted in controlled settings.

So, of course assessment is going to “fall apart” when tested for validity and reliability, just like a lot of peer-reviewed, empirical studies and probably most social science research. But does that mean my father-in-law should ignore agriculture research because it doesn’t translate into the field? Conversely, do agriculture researchers have nothing to learn from farmers because their observations lack the gold standards of empirical research, like controlled settings, randomized trials, etc.? Does forgoing those methods imply the observations have no value?

Of course not. That would be ridiculous.

Assessment as a Form of Control

I agree with the author of the Chronicle article on a major point. Here’s the quote:

He (the author of the Intersection article) also seems to be opening the door to a challenge to what is perhaps the single most implausible idea associated with assessment: that grades given by people with disciplinary knowledge and training don’t tell us about student learning, but measurements developed by assessors, who lack specific disciplinary knowledge, do.

I have never in my career met anyone who would tell a faculty member how to measure learning in their class or even program. I would never do that. It sends the message, “I have no idea what your job is, but whatever it is, you’re doing it wrong.”

The Chronicle article also makes a point about grades and ignoring the judgment of disciplinary experts. I happen to think grades are fine at the classroom level. Their utility at the program level is another matter.

Unfortunately, conference attendance and a review of institutional assessment websites make me believe there are a lot of assessment administrators who do tell faculty how to measure learning. And they are telling them that grades are unreliable measures of learning, even at the classroom level. If the goal is to foster engagement with assessment, telling someone they’re wrong is a bad strategy.

Assessment as a Conspiracy

The final point from the Chronicle article is that assessment is somehow related to the rise of on-line learning and the growth of adjunct faculty. The author does not specifically use the term “conspiracy,” but the theory is that assessment provides evidence of quality in these areas, thus justifying their existence.

This might be a matter of correlation, not causation. Assessment has its roots in the psychological research of the 1960s and 1970s and the accountability movement of the late 1970s and 1980s. Assessment predates on-line learning and the massive growth in adjunct faculty.

Is it possible that assessment was co-opted by administrators later? I don’t know. I’ve only worked in “traditional,” non-profit higher education institutions. All I can say is that I have never witnessed administrators place more emphasis on assessing on-line teaching and learning over “traditional” forms, like classroom or lab teaching and learning, as a scheme to justify lower costs.

If assessment is an effective strategy for articulating the benefits of dubious educational practices, then why not use assessment to articulate the benefits of “legitimate” educational practices? I would certainly support that approach, and I don’t know what’s stopping people from doing so.

A Way Forward

The Chronicle article didn’t offer much in terms of positive solutions. Here are some ideas for moving forward:

Give up the fight about grades. Grades are fine at the classroom level. Telling faculty they aren’t insults their disciplinary expertise. As a form of measurement and motivation, grades can be problematic. My assumption, though, is that almost all faculty are pretty good about tying learning outcomes to grades and communicating to students what a grade means, so I don’t know why assessment administrators make such a big deal about them. Grades at the program level are a different story. I have no idea if a student with a 3.0 is more knowledgeable than a person with a 2.86.

Consider ending grids and templates. The point of standardized grids and templates is to give administrators a view of learning and program effectiveness for the institution as a whole. In theory, the components of the template are compiled into one document and reviewed. This almost never happens. And even if it did, programs and courses are too varied and contextual for the compilation to make much sense.

What if, instead, we asked faculty to pick one thing they care about – critical thinking, art criticism, research ethics, whatever – and asked them to spend a year researching it? They would control how the research is formatted and presented, and it could be uploaded to an on-line repository for sharing. Faculty are doing a lot of creative and interesting work in the area of student learning, and this approach could capitalize on that. It also addresses the rather arbitrary issue of quantity: over 5-7 years, that’s a lot of research on student learning. Assessment administrators would call this assessment, but “research” would be a less threatening term.

There are two potential problems with this kind of process. First, convincing an accreditation peer reviewer or agency official to take a different approach – one that relies more on creativity and one outcome a year, and less on standardization and compliance – might be a hard sell. The second problem is change itself. Organizational processes that have been in place for many years provide certainty. Change is slow, effortful, and uncertain (like learning). It’s easier to fill out a mind-numbing, non-useful grid than to change a process. And it requires less thought.

Distinguish between student learning assessment and program evaluation. Student learning outcomes assessment looks at what students learn and do. Program evaluation looks at what the program does (space allocation, staffing, budgeting, etc.). Student learning assessment projects can be used in program evaluations and reviews. But a program evaluation should not focus exclusively on student learning or be judged solely on that criterion. Teaching and learning are two different activities. Faculty don’t have total control over learning, only over what they teach. No one should be held accountable for something over which they have only partial control.

Perhaps accountability and compliance can be framed as program evaluation or program review. All of the standards, frameworks, forms, and templates can be used in a program evaluation. Since program evaluation has more to do with evaluation criteria than disciplinary criteria – and who can argue with program evaluation? – make program review the primary accountability and compliance vehicle.

This frees up student learning outcomes assessment. Provide only a few standards or guidelines, four or five at most. Make learning outcomes assessment an addendum to the evaluation. This would respect faculty disciplinary expertise and maybe enhance engagement with the process.

Be prepared for what’s coming next. Unlike their counterparts in higher education, K-12 teachers are held accountable for what students learn and the value they add. K-12 education has been dealing with this side of assessment for decades, and it would be naive to think it’s not coming to higher education. Big data sets about faculty productivity and graduate salaries already exist. Value-added measures should be next. Someday, people will be able to quantify how much value a faculty member adds to a graduate’s salary and other labor market outcomes. The data’s already there, just waiting for someone to match it.

There’s still a narrow window to get ahead of this if we engage in genuine, simple assessment and dialog about what it means (not a two-way monologue, in which two people talk but no one really listens). Even though the Chronicle article provided almost no guidance in terms of positive next steps, it did offer a chance for reflection and dialog.

One Last Story

In my early days of assessment and institutional research, the focus was on compliance combined with quality improvement processes. It appears it still is in many places.

Getting faculty to “comply” and submit assessment reports was very difficult. And what was submitted to the assessment committee was subpar and boring. Some of the standardized grids were set in 8-point font and were difficult to read.

A few years in, I attended an in-house retention symposium. Faculty and staff presented on strategies for learning and student success at the classroom, program, and institutional levels. I was blown away by the quality of the research and the creativity.

It was all what I would call assessment research, and no one had ever submitted it to the assessment committee. When I asked a group of faculty why they didn’t submit this creative work to the assessment committee, the reply was, “well, they never asked, and this work isn’t assessment.”

The vast majority of faculty want to do good work. And they’ll share it, if they are asked in the right way. I also learned that most faculty will do almost anything you ask them to do, but almost nothing they are told to do. This Chronicle article made me think that perhaps it’s time, as a discipline, to engage in better dialog with faculty and administrators about assessment, make positive changes in how we organize assessment practices, and do a better job of telling these stories to the public, including accreditors and policy makers.


Incorporating Design Principles in Writing Student Learning Outcomes

Modern Design Principles

Smartphones, tablets, and e-readers have revolutionized how we consume and create information. It’s based on a simple formula:

simplicity + user customization = engagement

Take my e-reader, pictured below. It comes in only black or white. To get started, I just turned it on, adjusted a few settings, and was good to go. Over time, though, I added extensions, created folders, personalized settings, and customized the Kindle to meet my specific needs.


Simplicity in design coupled with customization of experience is why today’s smartphones and e-readers are so engaging. When I look at another person’s smartphone or e-reader, though, it’s kind of weird. Although the other person’s device looks exactly the same, the settings, layout, and overall experience are not. Anyone who has looked for something on their spouse’s or partner’s smartphone knows the feeling.

The design of today’s e-readers and smartphones is intentional. The idea is that simpler, more minimal design is easier to use and feels more genuine. Allowing users to customize and personalize the device increases their engagement with it.

Design Principles in Learning Outcomes

Applying the design principle of simplicity + user customization = engagement to assessment and evaluation could simplify things and lead to greater engagement.

Like a smartphone, tablet, or e-reader, the process for writing student learning outcomes can start with a few standard features, as shown below. Then, faculty and staff can customize from there. 

  1. Answer questions about your program, course, or activity.
  2. Select a verb and link it to an activity.
  3. Write the final outcome.

Feature 1. Start with questions. 

  • Affective domain: “What does your program want students to value or care about?”
  • Cognitive domain: “What does your program want students to know?”
  • Psychomotor domain: “What does your program want students to be able to do?”

If you don’t know anything about the domains or need a refresher, that’s okay. You can watch this video.

Example: Masters Degree in Assessment & Evaluation Program

  • Assessment question 1 (cognitive): Students should know how to engage stakeholders.
  • Assessment question 2 (cognitive, somewhat psychomotor): Students should know how to write goals and outcomes.
  • Assessment question 3 (affective): Students should be able to identify and question their own values and how those values guide assessment research.

Feature 2. Select the activity or assignment and link it to a verb.

Click on this sheet below to see what activities or assignments work well with a particular verb.

Example: Masters Degree in Assessment & Evaluation Program

  • Assessment question 1 (cognitive): Students should know how to engage stakeholders.
  • Assessment question 2 (cognitive, somewhat psychomotor): Students should know how to write goals and outcomes.
    • Activity/assignment: Students write learning outcomes using the ABCD method.
    • Potential verbs: Write, produce, demonstrate, generate
  • Assessment question 3 (affective): Students should be able to identify and question their own values and how those values guide assessment research.
    • Activity/assignment: Reflecting on one’s values and their relationship to research epistemology.
    • Potential verbs: Reflect, justify, adjust, modify, defend, adapt

Feature 3. Write the outcomes.

Example: Masters Degree in Assessment & Evaluation Program (using the ABCD method as an example)

  • Assessment question 1 (cognitive): Students should know how to engage stakeholders.
    • Activity/assignment: Students use a stakeholder identification and analysis grid to identify and analyze stakeholders in an assessment plan.
    • Potential verbs: Identify, classify, prioritize, compare, contrast
    • Final outcome: Using a stakeholder identification grid (condition), students (audience) will identify stakeholders and integrate their needs into the design and analysis of assessment(s) as well as the reporting of results (behavior).
  • Assessment question 2 (cognitive, somewhat psychomotor): Students should know how to write goals and outcomes.
    • Activity/assignment: Students write learning outcomes using the ABCD method.
    • Potential verbs: Write, produce, demonstrate, generate, formulate
    • Final outcome: Given an ABCD template and learning taxonomies (condition), students (audience) will identify appropriate verbs and produce three learning outcomes (behavior).
  • Assessment question 3 (affective): Students should be able to identify and question their own values and how those values guide assessment research.
    • Activity/assignment: Reflecting on one’s values and their relationship to research epistemology.
    • Potential verbs: Reflect, justify, adjust, modify, defend, adapt
    • Final outcome: Given a description of three major paradigms – positivism, constructivism, and pragmatism – (condition), students (audience) will identify a paradigm consistent with their personal worldview and articulate how they will adapt paradigmatic assumptions to different assessment and evaluation contexts and needs (behavior).
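For readers who like to see process as structure, here is a minimal sketch of the three features as code. Everything in it – the class name, the field names, the example data – is hypothetical and for illustration only; the point is that an outcome is assembled from condition, audience, and behavior parts, which is all the ABCD method (minus the D) asks for.

```python
# A minimal sketch of the three-feature flow. All names and example data
# are hypothetical; the ABCD "degree" part is omitted, matching the
# final outcomes above.
from dataclasses import dataclass

@dataclass
class OutcomeDraft:
    domain: str     # Feature 1: affective, cognitive, or psychomotor
    question: str   # Feature 1: what should students know/value/do?
    activity: str   # Feature 2: the activity or assignment
    verbs: list     # Feature 2: candidate verbs linked to the activity
    condition: str  # Feature 3: the C in ABCD
    behavior: str   # Feature 3: the B in ABCD

    def outcome(self, audience: str = "students") -> str:
        """Assemble condition (C) + audience (A) + behavior (B)."""
        return f"{self.condition}, {audience} will {self.behavior}."

draft = OutcomeDraft(
    domain="cognitive",
    question="Students should know how to engage stakeholders.",
    activity="Stakeholder identification and analysis grid",
    verbs=["identify", "classify", "prioritize", "compare", "contrast"],
    condition="Using a stakeholder identification grid",
    behavior="identify stakeholders and integrate their needs into the "
             "design, analysis, and reporting of assessments",
)
print(draft.outcome())
```

The separation mirrors the design principle above: the three standard features stay fixed, while every field is left open for a program to fill in its own way.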

Start Customizing

Now it’s time to start customizing and writing the outcomes in a way that fits your specific disciplinary and programmatic needs, values, research orientation, culture, and history. Some methods and tips follow below.

Summary & Tips

ABC…..D Method

I did not include the degree of learning in the final learning outcomes. I have only found the degree part of the ABCD model useful in summative assessment. Here’s an example:

Upon completion of the art history program, 80% of students will be able to identify the approximate year of a painting.

The first issue with this kind of outcome is that the cut-offs are seemingly arbitrary. Why is 80% better than 75%? What is so special and magical about 80%? The second issue is use of the results. If 90% of students in the art history program meet the goal, it provides an incentive for the program to ignore the outcome and move on. If only 75% show competency, that suggests a problem that may not exist.

The C.A.S.E. Method (Copy and Steal Everything)

The first outcome dealing with stakeholders is from the ASK student affairs assessment standards. There’s no sense writing a new outcome when a good one already exists. I wouldn’t recommend using an outcome without attribution, however.

Learning Outcomes Focus on What Students Actually Do, Not What They Possess

Adelman points out that verbs need to be operationalized. Verbs or statements like understand, become familiar with, recall, or be capable of should be avoided because they describe “internal cognitive dynamics” that are difficult to assess. Just because a verb describes an action does not mean it can be operationalized for assessment.

Learning Outcomes Put the Focus on Students, Not Teaching

Learning outcomes should focus on what students do, not classroom activities or what we teach. “Students will be introduced to the topics of abnormal mental behaviors…” describes what we do as instructors, not what students do.

Learning Outcomes Are Focused on the Present or Near Present, Not the Future or the Past

Adelman asserts that learning outcomes should focus on what students do now, not in the future or the past. As an example, we all want students to be able to discuss an important topic or idea after they graduate. The first problem is that graduates aren’t with us anymore and are difficult to assess; it’s hard to isolate our impact on a graduate’s ability to discuss an important issue 10 years after graduation. The second problem is that discuss could be interpreted as focusing on a teaching activity.

Learning Outcomes Describe the Learning that Results from the Activity, not the Activity Itself

An outcome like “students will participate in a hazardous materials training seminar” is a fine outcome if the goal is to measure participation only. As written, though, this outcome says nothing about what students will learn from the training seminar.

Focus on How to Operationalize Learning Outcomes, Not Arbitrary Distinctions

Some people are really picky about the differences between learning outcomes, objectives, goals, targets, indicators, outputs, etc. I have yet to see a standard approach to the definition of these words. Every textbook and author has a different definition.

Presenting a detailed and prescribed definition of each is confusing enough. Asking people to write varying levels of outcomes and outputs is even worse. The only distinctions that matter, in my experience, are 1) the difference between outcomes and outputs and 2) the levels of assessment – classroom, program, or institutional. Beyond that, in the context of writing learning outcomes, they are all statements of intentionality, and the labels don’t matter.



Learning Goals, Objectives, & Outcomes: They’re All the Same

Assessment has always struggled with language and definitions. One area of confusion is the distinctions between different statements of intentionality. These usually include goals, objectives, outcomes, targets, performance indicators, and so forth. When incorporated into one plan, the result can sometimes look something like this:


Most People Aren’t That Interested in Assessment

For assessment experts, the distinctions between goals, objectives, and outcomes are obvious. Goals are broad, objectives more specific, and so forth.

The problem is that most people aren’t assessment experts, have no desire to be assessment experts, and don’t care about the distinctions. Insisting on these distinctions only reinforces the idea that assessment is a complex exercise in bureaucratic compliance, not improvement.

Keeping It Simple

While most people may not be that interested in assessment, they do care about their students and are intellectually curious about how things are going. They also have an intuitive sense about what a goal is. These considerations are what should drive engagement with assessment, not precision in statements of intentionality or filling out a grid.

If we really want to engage people in assessment, we should consider eliminating the perception of arbitrary distinctions, when possible, and focus on intentionality. It doesn’t matter whether statements of intentionality are defined as goals, objectives, outcomes, targets, or aims.

Distinctions about the claims people make when writing statements of intentionality, however, should be considered.

Distinction 1: Outcomes and Outputs

Deborah Mills-Scofield states, “outcomes are the difference made by outputs. Outputs, such as revenue and profit, enable us to fund outcomes. But without outcomes, there is no need for outputs.”

Here are two statements that claim to be student learning outcome statements:

  • Students will be introduced to the topics of abnormal mental behaviors…
  • Students will participate in a hazardous materials training seminar.

The problem in the first statement should be obvious. It says almost nothing about what students will learn. It is focused on what the instructor will do, not the student.

There’s no reason the first statement can’t be a goal, objective, outcome, target, or aim. It seems pointless to ask people to make that distinction when they should focus on the claim being made, not the definition.

The first statement describes what instructors do, not what students do. Although the intent may be student learning, the statement, as written, evaluates whether an instructor introduced students to the topic, not whether students learned it.

The second outcome statement is an improvement in that it describes what the student will do, as opposed to the instructor. If all one wants to do is assess the number of students who attended the seminar, then the statement is fine. It would be wrong, however, to claim or assume that students will learn something from the seminar just by attending it.

Distinction 2: Levels of Assessment

Program and institutional goals will almost always be broader than classroom or unit goals. Rather than ask individuals to write multiple levels of goals, objectives, or outcomes, it would be better to ask them how their goals align with larger institutional or programmatic goals. This exercise is more intuitive, and it helps individuals see how their course or activity contributes to a coherent experience for students.
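As a rough sketch of what that alignment exercise produces – the course goals and program goals below are invented for illustration – the whole ask is a simple mapping from each course goal to the broader goals it supports, plus a flag for anything left unaligned:

```python
# Hypothetical course-to-program goal alignment. Goal names are invented.
alignment = {
    "Write a persuasive policy memo": ["Written communication"],
    "Critique a peer-reviewed study": ["Critical thinking", "Information literacy"],
    "Maintain a reflective journal": [],  # supports no program goal (yet)
}

for course_goal, program_goals in alignment.items():
    if program_goals:
        print(f"'{course_goal}' supports: {', '.join(program_goals)}")
    else:
        print(f"'{course_goal}' is not yet aligned with a program goal - worth a conversation.")
```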


Wayfinding & Curriculum Mapping in Higher Education

A curriculum map is a visual representation of how a program’s activities or courses lead to a coherent learning experience for students. (Principles of curriculum mapping can also be applied to the co-curriculum).

Wayfinding: Why Curriculum Mapping is Important

An example from the field of wayfinding illustrates the importance of curriculum mapping. Wayfinding refers to how people orient themselves in a space and use directional cues, like signs or walking paths, to navigate their environment. The design of a space has a significant impact on one’s experience and perceptions of their environment. Higher education has been leveraging that idea for decades.

The blue line in the photo below represents how I orient to my space at my university. My orientation to the campus and my experiences are centered around assessment and evaluation.

Students, however, have a much different perspective and university experience. Whereas my orientation is centered around my discipline, students experience the university as a whole, as shown by the red line. Curriculum mapping helps us see how students navigate their experience and helps us create a more cohesive, whole curriculum.


My Wayfinding (blue line) — Student Wayfinding (red line)

Types of Curriculum Maps

There are three types of curriculum maps: simple, embedded, and developmental. The following images highlight examples of all three. The last part of this post presents examples of how to use curriculum mapping.

Simple Curriculum Maps



Embedded Curriculum Maps

Embedded curriculum maps show where learning outcomes are addressed and assessed at specific points in the curriculum.


This curriculum map flips the assessments and courses.



Developmental Curriculum Maps

Developmental maps show student growth over time. A short list of developmental frameworks is shown below:

  • Introduced, Developed, Mastered
  • Introduced, Reinforced, Practiced, Demonstrated
  • Low Emphasis, Medium Emphasis, High Emphasis
  • Introduce, Emphasize, Measure
  • Instruction, Practice, Feedback



Using Curriculum Maps

Example 1. Art History Program Curriculum Map

What conclusions can you draw from this curriculum map?


Conclusion 1 – Museum administration and budgeting is not covered anywhere in the common Art History program courses. It should probably be removed as a program outcome – there’s no sense having it as an outcome if it’s not being taught to all students. Faculty can still teach budgeting and administration in their individual courses, but no claims can be made about program graduates possessing this skill. If program faculty feel that administration and budgeting is important, then it should be taught and reinforced in the common courses.

Conclusion 2 – Attendance at college art events is not addressed in any of the outcomes. This does not necessarily mean it should be removed as an activity, however. It just means program faculty should have a conversation about this activity’s role in the program.
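Both conclusions fall out mechanically once the map is written down. Here is a minimal sketch with an invented, hypothetical map – outcomes and activities by common courses – that flags exactly the two gaps above:

```python
# Hypothetical Art History map: which common courses address which
# program outcomes. Course and outcome names are invented.
curriculum_map = {
    "Identify major artistic periods":   {"ART 101": True,  "ART 210": True,  "ART 350": True},
    "Critique works of art":             {"ART 101": False, "ART 210": True,  "ART 350": True},
    "Museum administration & budgeting": {"ART 101": False, "ART 210": False, "ART 350": False},
}
# Program activities and the outcomes they are linked to (none, here).
activities = {"Attendance at college art events": []}

# Conclusion 1: outcomes that no common course addresses.
for outcome, courses in curriculum_map.items():
    if not any(courses.values()):
        print(f"Gap: '{outcome}' is not covered in any common course.")

# Conclusion 2: activities linked to no outcome.
for activity, linked_outcomes in activities.items():
    if not linked_outcomes:
        print(f"Orphan: '{activity}' is not addressed by any outcome.")
```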

Example 2. Biology Program Curriculum Map

What conclusions can you draw from this curriculum map?


Conclusion 1 – Students are expected to have mastered laboratory skills by the end of the program. Faculty may be frustrated by this and blame the students. However, the curriculum map in this hypothetical example reveals that students were never introduced to laboratory skills. It is unfair to assess students on something they were never taught. Laboratory skills should be introduced and reinforced earlier in the curriculum.

Conclusion 2 – Students were introduced to major cellular processes and are expected to master them by the end of the program. However, they were never given the opportunity to practice this skill.

Conclusion 3 – Students were introduced to careers in biology. However, career awareness is never addressed in the curriculum beyond the introductory course. Faculty should have a discussion about this outcome’s place in the curriculum.
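A developmental map can be read just as mechanically. The sketch below uses the Introduced/Developed/Mastered coding from the list of frameworks above; the courses and codes are invented, chosen to reproduce the three conclusions:

```python
# Hypothetical Biology developmental map using the
# Introduced (I) / Developed (D) / Mastered (M) framework.
dev_map = {
    "Laboratory skills":        {"BIO 101": "",  "BIO 220": "D", "BIO 480": "M"},  # never introduced
    "Major cellular processes": {"BIO 101": "I", "BIO 220": "",  "BIO 480": "M"},  # never practiced
    "Careers in biology":       {"BIO 101": "I", "BIO 220": "",  "BIO 480": ""},   # introduced only
}

for outcome, stages in dev_map.items():
    codes = set(stages.values()) - {""}
    if "M" in codes and "I" not in codes:
        print(f"'{outcome}': mastery expected but never introduced.")
    if "M" in codes and "D" not in codes:
        print(f"'{outcome}': mastery expected without practice or development.")
    if codes == {"I"}:
        print(f"'{outcome}': introduced but never revisited.")
```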

Example 3. Student Affairs Curriculum Map

What conclusions can you draw from this curriculum map?


Conclusion 1 – Civic responsibility is not covered in the student affairs curriculum. Staff have a decision to make. Should they remove civic responsibility as an outcome? Or, should it be reinforced and assessed in the curriculum?

Conclusion 2 – Global awareness is only addressed in the housing survey. Maybe that’s enough. But it won’t cover students who do not live in the residence halls.

Conclusion 3 – Two assessments are not linked to any of the outcomes: campus housing conducts an annual social media use survey, and student organizations conduct an annual textbook cost survey. If an assessment is not being used for improvement, it wastes staff and student time. The ultimate value of assessment lies in its use. Staff should consider removing these assessments.

Benefits of Curriculum Mapping & Recommendations

  • Curriculum maps are meant to be discussed and shared. They are an impartial and objective (to the extent that is possible) way of highlighting curricular gaps.
  • Curriculum maps highlight unproductive practices. If an assessment is not supporting the curriculum or being used, then it should be considered for removal.
  • Curriculum maps help programs set priorities and plan for the future.
  • Curriculum maps can communicate expectations to students.
  • Curriculum mapping is meant to be inclusive, incorporating multiple viewpoints.
  • Curriculum mapping is not meant to prove someone wrong.



Assessment and Stories We Tell Students

Do new, first-year college students need to study 2-3 hours per credit hour each week to be successful? The short answer is no. Research tells us that most first-year students spend about one hour or less per credit hour studying and preparing for class and do just fine – depending on your definition of “fine,” of course.

This is the central premise of Academically Adrift. The idea is that most students see college as a pathway towards economic security or a rite of passage into adulthood. Thus, college students invest their time in activities that have little to do with learning.


If the premise of Academically Adrift is accurate, then traditional assessments, like grades, standardized tests, or degrees, are not assessing learning, but probably other things, like managing the college experience or skills related to persistence.

Data from a variety of sources, including NSSE, the CLA, and grade-inflation research, show that students are still getting good grades and graduating with less effort, at least as measured by hours spent studying and preparing for class.

Telling most first-year students they need to study 2-3 hours per credit hour to be successful in college isn’t accurate and is probably harmful. There are two problems with this kind of messaging (a quick sketch of the arithmetic follows the list):

  1. It exaggerates how much time even academically competent and successful students actually spend studying. Communicating an unrealistic standard reinforces the legitimacy of peers and other sources of information over more legitimate ones (like advisors and faculty).
  2. It sets up time as the constant and learning as the variable. According to the flipped teaching model, time should be the variable and learning the constant. A better strategy would be to communicate what students will do and/or the outcomes of their college experience.
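For scale, here is the arithmetic behind the 2-3 hour rule. The 15-credit load is an assumed, typical full-time schedule, and the one-hour figure is the rough research finding discussed above:

```python
credits = 15                                    # assumed full-time course load
rule_low, rule_high = credits * 2, credits * 3  # the "2-3 hours per credit" rule
observed = credits * 1                          # roughly what research finds
print(f"Rule: {rule_low}-{rule_high} hours/week; observed: about {observed}")
# Rule: 30-45 hours/week; observed: about 15
```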

How does assessment inform what we should tell new first-year students? First-year students should receive two types of messages: one that legitimizes the expertise of faculty and student advising staff, and another that de-emphasizes a fixed-intelligence mindset. They should receive messages like these:

Message 1: “You get out of college what you put into it. If you want to study 15 hours a week, and you’re fine with a 2.5-3.0 GPA, then go for it. Keep in mind, though, that your effort will need to increase as you progress through college, and in particular your major.”

Message 2: “You may be disappointed, in spite of all your hard work. Keep in mind, though, that intelligence is not fixed. Frustration with learning something new, and learning from setbacks, are natural parts of the learning process. Using the services we provide and listening to your professors can help you grow and become a more competent and efficient learner.”

If I remember anything from my first-year orientation, it’s two messages: “You will need to study 2-3 hours per credit to be successful,” and “Look to your left and look to your right.” Neither was very helpful. A better message might have been: if you work hard and listen to your instructors and university staff, you will likely be fine. And if you’re not here next year, for whatever reason, you’ll be doing something else.


Bunking & Debunking Altucher’s 15 Essential Skills They Don’t Teach in College

According to one internet blogger, there are 15 essential skills for making money. They include the usual things like networking, motivation, creativity, etc.

While the skills are fine (who can disagree with creativity?), the claims about them are dubious and have almost no evidence to back them up. The two claims are:

1. You don’t need to go to college to get the 15 essential skills.

2. Colleges aren’t teaching these skills (or, at least, students aren’t learning them).

Claim 1: You don’t need to go to college to get the 15 essential skills. 

One can make a good argument that you don’t need to go to college to learn. Traveling, reading War & Peace, and conducting home experiments can all take place outside of a classroom.

The claim is dubious from an earnings perspective, however, because skills almost never translate into higher earnings unless a credential is attached to them.

There are plenty of successful people who don’t have degrees, and almost all of them come from wealthy backgrounds. Steve Jobs and Bill Gates didn’t get degrees, but they did have wealthy parents and access to college. Most people don’t have the time or money to be unemployed and tinker in parent-subsidized garages. A college degree is a much less risky bet.

In today’s U.S. economy, the evidence is pretty clear: family background and credentials matter more than skills. Whether skills should matter more is another conversation. If you want to make more money, in general this is what you need to do:

  1. Be born to rich parents.
  2. Get a college degree. Think college is too expensive and not worth it? Think again. The rate of return on a degree is still higher than the return on not going to college at all.
  3. Be mobile.

Skills matter, but credentials trump skills almost all the time. And while it stinks that wealthy kids get a huge head start, education still provides a pathway to a credential and higher earnings for most people.

So, the claim that a majority of people don’t need college to make more money is dubious, at best.

Claim 2: Colleges aren’t teaching these skills (or students aren’t learning them).

There is little evidence to support Altucher’s second claim, and a lot to counter it. A search of an academic research library for the skills the author claims colleges don’t teach, using the phrase “college learning outcomes _____ skills,” returned the following numbers of academic studies:

  • college learning outcomes presentation skills: 783,493 research articles.
  • college learning outcomes quantitative literacy skills: 223,947 research articles.
  • college learning outcomes philanthropy and civic engagement skills: 80,664 research articles.

Altucher has some good points about learning, but investment decisions should be based on evidence and realistic outcomes, not anecdote or opinion.

If your goal is to make more money, your best bet is to learn skills while in college, not out of it. It doesn’t even have to be a four-year degree. Economic returns to associate’s degrees, training certificates, and other short-term credential-granting programs are still quite high.

Of course, if you don’t want to go to college, that’s fine too. Plenty of people without college degrees have happy, satisfying lives. Just make sure your expectations match the outcome.


Illinois High School Graduates and Out-of-State Colleges

The map below shows where Illinois high school graduates enroll at public (blue) and private (red) four-year universities.


The first story is that a lot of Illinois high school graduates leave Illinois for college – the most, in fact, of any state except New Jersey.

This matters because when students leave Illinois for college, they are less likely to return and work in Illinois. The net economic impact of losing a student to another state is about $225,000 over the course of a lifetime (1) in income tax revenues alone. This does not include the negative impact on the general economy in terms of lost consumption and spending.

Companies lose in terms of their ability to attract an educated and skilled workforce. Taxpayers lose the investment they made in students over 13 years of K-12 education. Other states win because they are able to develop a highly educated workforce with little investment of their own.

The second compelling story is that a lot of students leave Illinois for public universities. In the last 20 years, enrollment of Illinois high school graduates at in-state four-year public universities has remained relatively flat, while enrollment at out-of-state four-year public universities has nearly doubled. This should send a clear message about what Illinois residents think about the state of public higher education in Illinois.

Strategies to keep Illinois residents at in-state universities include:

  • Mission differentiation in the four-year public sector, creating institutions with their own unique niches (small and liberal-arts focused; technical or engineering focused; large and research focused; etc.). Institutional diversity provides more options to Illinois residents. A one-size-fits-all approach precludes access to distinctiveness and value.
  • More certainty in the higher education budget for direct institutional subsidies and student financial aid.
  • Financial aid incentives for in-state residents. However, since Illinois residents are presumably already paying more to attend public institutions in other states, economic incentives may not be effective in retaining students who are not sensitive to price and who focus more on perceived educational quality.
  • A final strategy would be to recruit more out-of-state students or out-of-state college graduates to Illinois. If Illinois residents are unwilling to invest in strategies that would keep talented high school graduates at in-state colleges, this option could be less expensive. It would probably require, however, more public-private coordination and cooperation, expertise in economic development and education, and leadership.
(1) Adjusted for inflation from the $162,000 figure in 2000.
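For the curious, the footnote’s adjustment is simple arithmetic. The CPI factor below is an assumption for illustration (roughly the cumulative inflation from 2000 to the mid-2010s), not an official figure:

```python
base_2000 = 162_000   # the footnote's original 2000 figure
cpi_factor = 1.39     # assumed cumulative CPI growth since 2000
adjusted = base_2000 * cpi_factor
print(f"${adjusted:,.0f}")  # -> $225,180, i.e., roughly $225,000
```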