Assessment results, obtained in Step 2, are meaningless in and of themselves. They must first be interpreted by faculty members in light of the intended learning outcomes and in the context of other evidence about the academic program, and that understanding must then become part of a broader faculty conversation across the program.
The first element in interpreting assessment results is to compare the actual outcomes to the outcomes that were intended (as identified in Step 1). Do the results indicate that the program's learning outcomes were achieved and that students are learning what was intended? Do they indicate room for improvement on any of the intended outcomes that were the subject of the assessment? If the results are anomalous, do they point to a problem with the assessment instrument itself or with the student learning outcomes? These are questions faculty members must consider in order to begin to understand the assessment results.
When intended outcomes are measured in more than one way and the assessments concur, there is greater confidence in the results; that is, assessment validity is higher when multiple indicators give the same message. This is called triangulation of sources. For example, evidence from student work combined with feedback from exit interviews is more compelling than either alone. Evidence is bolstered when it comes from more than one type of source (e.g., portfolios from a capstone course plus surveys of employers plus standardized test scores), not just from more than one source of a single type (e.g., class work from multiple points in the course). Whether assessments indicate success in achieving student learning outcomes or point to potential problem areas, gathering additional evidence adds to the strength of the assessment.
There is another reason to consider other evidence as well. If the original assessment identifies a general area of concern, follow-up studies may be needed to find the crux of the problem. For example, an alumni survey might indicate that graduates generally did not agree that their needs for academic advising had been well met. If the survey went no further on this topic, follow-up investigations, perhaps in the form of focus groups, would be needed to determine whether the issue was the availability of advisors during advising periods, a desire for stronger mentoring relationships with faculty advisors throughout the year, or something else.
Once the key faculty members conducting the assessment are confident of their findings, it is important to bring the information to all program faculty members for discussion and interpretation. The department's normal communication channels can be brought to bear on the dissemination process: faculty meetings, committee discussions, e-mail, a Blackboard site, and so on.
Assessment results almost always indicate some areas for improvement in the program. That is a useful outcome, since it prompts people to think about how to improve. But be prepared for people to question the basis of the data. It is a natural response to focus on the limitations of the assessment methodology or sampling when results are negative, and sometimes rightly so; hence the value of triangulated sources.
How one frames the results is critical. Assessment results should never be used as evidence of a particular person's shortcomings. In the spirit of program assessment, the question to ask is always, "How can we do better?" Viewed this way, assessment is an opportunity to improve ourselves and do a better job for the students we serve.