I stumbled upon the value of assessment during a bout of acute paranoia, something I’ve come to call a healthy sense of “paranoid optimism”: the hope that everything will be okay, paired with hedging your bets and developing contingency plans in case it isn’t. The scenario was a familiar one: an impending potential loss of funding. At the time, I was overseeing a fitness program within campus recreation that was heavily subsidized by student fees. I was looking for a way to justify our existence, a common refrain I’ve heard from other student affairs professionals. My default mechanism for communicating impact was to count heads: look how many people are showing up to our classes, or participating in personal training! While there is no harm in using those figures, and we’ll talk in more depth about how to do so effectively, the common comeback I rightfully received was, “So what? So what that X number of people participated? How do you know they actually got anything out of it?” Where was the connection between engagement and student success? At first, this response can feel like a frustrating barrier. But in this day and age, when funding is sparse and co-curricular activities can be too easily characterized as superfluous, I quickly realized the need to dig deeper.
The next evolution from participation counts came in the form of learning outcomes: pre-planned, aspirational, and conditional outcomes attached to an activity, program, or initiative. These learning outcomes became guiding principles for program planning. They took the form of self-report survey questions given to students, and they served as the first metrics we had that truly communicated learning and impact. When planning an activity or program, we’d ask: what do we want students to gain from this? Then we’d operationalize those desires into learning outcomes and bake them into the program, from the pre-planning phase through the evaluation.
The learning outcomes started to become the driver for what we planned and gave us greater focus when it came to content and curriculum creation. While building a six-week weight training program, for example, learning outcomes helped us reverse engineer the end goal with the user in mind. If we thought it was important for participants to identify at least three benefits of developing an exercise routine, we would focus on those benefits during the program. Then we would ask participants to identify those benefits in a post-program survey. Analyzing those responses gave us the feedback we needed to improve future offerings.
Participation counts and learning outcomes are a great place to start. But before moving deeper into other assessment types and tools, let us define the real benefit and purpose of assessment. As I tell my staff, assessment moves in two directions: externally, it can be used to justify our existence to relevant stakeholders; but first and foremost, it should be used internally, to determine the effectiveness and impact of our programs and operations. In the fitness program example above, assessment was used to plan the program, create its content, assess its impact, and identify areas to improve. Assessment should never be used as a punitive or disciplinary measure; I think that one misconception alone accounts for many professionals’ trepidation about adopting assessment practices. The perception that assessment creates loads of additional work is something we’ll address in part two. Assessment should raise constructive questions and give clarity to how we can always be improving and developing our offerings. Join us for part two, where we go more in depth on how to use assessment data to better tell our story and improve co-curricular programs.