Tuesday, February 1, 2011

Deborah Sipe's Doctoral Work

Assessment Primer

by Ruth Stiehl & Les Lewchuk

Corvallis, OR: The Learning Organization (2008)

Stiehl’s goal, with this work, is to present a new way of assessing student learning. Her approach is a response to the many calls for reform in improving student retention, accountability, meaningful learning for students, and appropriate assessment of student learning. Her approach calls for a systems, or holistic, view of student learning, in which all the factors involved become intertwined: curriculum development, assessment, and student learning experiences. She thus sees student assessment from the systems perspective, in that it is one of the many interconnected parts of student learning.

Stiehl calls for viewing student learning from the perspective of how students experience the learning process in college, and she uses the metaphor of rafting a river as a visual organizer. Students enter or “put in” at a certain point, travel along in their learning, encounter rapids (such as tests and projects), and “take out” or leave school at a certain point, preferably when they have completed the program. The student is a paddler on the raft, along with fellow paddlers, and the instructor is the guide. Stiehl sees the river as the knowledge base for students.

To illustrate her suggested approach, Stiehl outlines a river rafting journey, both through storytelling and visual representation. Thus, the process of assessing students becomes part of the whole story and the graphic representation. Stiehl begins by noting what is involved in the “put in” phase for students: figuring out where they’re going, what equipment they need to have, etc. This description is a useful reminder of the importance of student support services, such as counseling and guidance.

Stiehl essentially asks that faculty see their responsibilities from a new perspective:

  1. Guiding the paddlers down the river;
  2. Constructing the river;
  3. Adjusting the river based on a flow of evidence. (p. 20)

This metaphor emphasizes the importance of formative assessment; the continual flow of assessment information about student learning aids the instructor in “adjusting the river.” The metaphor also reminds the reader of an essential element of systems thinking: that continual information is needed by the system to adjust and reorganize itself. Thus, assessment information helps the instructor adjust the curriculum to meet student learning needs in order to reach the desired outcome.

Stiehl notes that intended student outcomes are the “take out” phase of a student’s learning journey. She strongly advocates that intended outcomes, what instructors expect students to be able to do as a result of the specific learning journey, should drive curriculum development. In order for student outcomes to provide clear direction for curriculum development, they need to be “robust”: clear, yet complex and flexible. Stiehl provides a number of examples to illustrate her concept of robust outcomes. She notes the difference between program and college-wide outcomes and those intended for courses.

In carrying out student assessment, Stiehl advocates for creating student work tasks that are appropriate and occur during the flow of student learning. She suggests the importance of authentic tasks to make the learning more meaningful, and notes that the periods between assessments are “wake-up rapids”, when students gather information, study issues, and so on, to prepare for tests or assignments. Stiehl sees the patterning of assessments as part of the overall “program map” for helping students negotiate their journey down the river of learning. The “rapids” students encounter should be distributed throughout the learning process to allow instructors to use the assessment information, and should culminate in something like a capstone project at the end of the course or program. Such a project, in which the student takes the gained knowledge and applies it, allows the instructor to note what has been learned, whether the student has applied it, and, ultimately, whether the student outcome has been achieved.

Stiehl does not deal directly with student assessment until the third part of her book, after she has set assessment within the whole system of student learning. She explores three areas here: why student learning should be assessed, what should be assessed, and how student learning should be assessed. Stiehl asserts that student learning is assessed to assist, to adjust, and to advance, with assessing to assist being the most important of the three because it is so crucial in helping to guide the learning experience. She notes that when assessing to advance, instructors should not focus on what the student has achieved, but on whether the student is ready to go forward to the next learning experience. The concept of assessing to adjust speaks both to the need to review assessment information in order to adjust how curriculum is presented to students, and to the need to review assessment processes themselves to ensure they are providing the information needed. This point could be more clearly explained. Stiehl contends that what should be assessed is not only the students, but also the challenges of the content and the process itself. This viewpoint again reflects the systems thinking approach in that there are interconnections amongst the students, the process, and the content.

In the following section, Stiehl focuses on a key question: how to decide the criteria for quality in student work. She notes the importance of professional judgment and student input, but also the influence of social values on this question. As a result, there can be no perfect, objective criteria; rather, Stiehl argues for a process that can be used in determining the criteria for quality. Later, however, she asserts that “good criteria is criteria that assists students, provides data for making decisions, about advancing the student, and making decisions about adjusting learning experiences” (p. 72). Stiehl then discusses three types of assessment tools: the checklist, the scoring guide, and the rubric, providing many visual illustrations. As to the timing of assessment, she notes that data can be gathered at any point in the student’s learning journey and that the accumulated evidence, like the confluence of smaller streams into a river, provides us with a rich amount of information. Stiehl notes the value of indirect evidence of learning, such as student satisfaction surveys, as well as direct evidence. She also argues that the entire college system itself can and should be a part of this evidence-gathering process. In her final chapters, Stiehl discusses methods for course adjustment, then expands the idea of adjustment to a discussion of what is involved in a program review.

Overall, Stiehl’s work is a very comprehensive view of the assessment process as it occurs in the context of student learning and within the college structure itself. As such, she views assessment both in terms of a linear system and of the wider system of the college. Stiehl very carefully explains her metaphors, through stories, visuals, and examples. Also using a systems approach, she attends to the details of the assessment process, addressing key questions of why, what, and how. The tone of her prose is positive, inspiring, and hopeful, and her narrative is clear and informal, yet continually instructional. Her work provides clear guidance to those embarking on the journey of assessment, or to those who are already doing so but who lack clear guidance on its role in student learning.


  1. One thing that we need to make sure we understand in all this is that assessing students and assessing programs are different challenges requiring different skills. Unfortunately, I think there is an overall view in the establishment, at least at PCC, that because an instructor is capable of performing the former, they are competent to execute the latter. I question this assumption. But it seems that having been asked to assess programs, this blog has drifted towards the topic of assessing students. A valuable topic indeed, but not the one currently before the SACs in my mind.

  2. I like Phil's point that we should be clear about the different purposes of assessing. Instructors have long assessed student work for the purpose of assigning grades (summative assessment). We are now being asked to assess for the purpose of program improvement.

    But we are still assessing student learning for both of these different purposes -- to tell if our programs are effective, we are assessing student learning of outcomes (either PCC's core outcomes OR degree/cert outcomes).

    For a long time I wasn't clear on why the name of the assessment council was "LEARNING Assessment Council." Now I think it is because all assessment, for all these different purposes, is assessment of student learning!!