This blog is from the faculty Learning Assessment Council at Portland Community College, in Portland, Oregon. Faculty and others at PCC are responding to changes in higher education -- especially the "accountability movement" -- and seeking to improve teaching and learning based on evidence-backed best practices.
Tuesday, February 22, 2011
Michele Marden and the Blame Game
Error-Denying Culture VS Error-Embracing Culture
Which are we in and which will prevail?
This idea has surfaced at three separate meetings I have attended recently.
An “error-denying” culture is one where mistakes are hidden and blame is shifted to others. Individuals quickly find a safe zone and don’t step out of it. Innovation and new ideas, which are both inherently risky and have a much higher potential of failure, are avoided. Growth and improvement stagnate because individuals are on the defensive and unwilling to take risks. On the up side, everyone knows what is expected of them.
An “error-embracing” culture is one where mistakes are expected to occur and individuals are encouraged to experiment and then learn from the results. Growth and improvement happen, and individuals support each other instead of shifting the blame. On the down side, expectations may change quickly and it may be hard to keep pace.
The first meeting where these ideas surfaced was held by a manager at PCC who previously came from the “outside.” I am still relatively new to the college, so I asked his opinion: is PCC an error-denying or an error-embracing institution? Without hesitation, he said we are very much an error-denying institution. I found this surprising.
Since this meeting I have been reflecting on previous colleges and high schools where I have taught. Outside of a very short-lived experience, I have always been in an academic setting either as a student or as an instructor. I would have never described academia as error-denying. We are in the business of learning, right? When learning, errors happen and we correct them. But the more I reflect, the more I see evidence of an error-denying culture at PCC and at all the other places I’ve taught.
The blame game: Reflecting on whether or not my experiences as a teacher were error-denying, I first thought of the blame game. I taught at a university where we had a lot of students from the local community college. We complained ruthlessly that it was the community college’s fault for not bringing students up to speed. When I taught at a community college, we naturally looked down on the high schools for social promotion and various other horrors. When I was teaching at a high school? Oh, those middle school teachers had no clue. The few middle school teachers that were respected by the high school teachers felt that the elementary school teachers were the problem. Pretty soon it becomes obvious that the real blame lies with the mother who did not listen to Mozart while she was pregnant. Don’t blame me for the student’s inability to solve a linear equation, it is clearly the mother’s fault for listening to pop music during her pregnancy!
Blame game, version II: Every college and school I have taught at has internal issues of consistency between instructors (regardless of full-time or part-time status). This is such a hot topic that I hesitate to mention it. I am definitely going outside of my error-denying safety zone on this one.
This is a very real problem and one that creates many bad feelings between instructors. Worse than the bad feelings, though, are the ones who really suffer: the students who pass a class only to fail the next sequential course because they were not adequately prepared. I have been on both sides of this. I have been the instructor who required too much, and I know of one experience when I was the instructor who did not require enough. I say “one experience” because there might have been others, but no one told me. The only reason I know about the one time was an off-handed student comment that made me wonder if something was off. That comment started a very secretive, detective-like investigation into what the department wanted. I wonder now why being stealthy felt like the appropriate course of action rather than just asking someone.
Maybe it is a personality defect that I alone have. But if not, why is it so hard to have these conversations about expectations of student work? Perhaps an error-avoidance culture is to blame. This is a sensitive matter that can easily fall into the categories of “good” teacher and “bad” teacher. One of the best experiences I have had at PCC was when a colleague set up team-level meetings for a course. I came to really enjoy and look forward to the meetings, but at first I was nervous about what would come of them – whether I would come to be viewed as a “bad” teacher for being too “easy” or too “hard.”
Due to low turnout, the meetings are no longer held, but there is hope of reviving them. Without these meetings, it is harder to have a sense of connection, and I now feel adrift. Without the self-correcting that occurs when faculty members talk with each other regularly, there is the potential that we will grow apart in our expectations of students over time. The big question: if it reaches the point where students can no longer transition easily between instructors, will we be able to discuss it without it feeling like an attack?
In just about every conversation I have heard that involved consistency between instructors, someone said “academic freedom” and the conversation shriveled up and died. I completely support academic freedom, but we are doing students a huge disservice if we cannot find a way around this issue of consistency.
I would like the college to provide some tracking of students that would help inform instructors of their students’ success in sequential courses. The fear is that if we do this in an error-denying culture, faculty will be on the defensive and those without tenure will worry that their job security is in jeopardy. If we had an error-embracing culture, this could be one of many indicators that might help a large department refocus if inconsistencies were found.
What, you aren’t perfect?: I have seen error-avoidance attitudes in myself and other colleagues throughout my teaching career. I have had colleagues ask me a question about a math problem they didn’t fully grasp or about how to explain a slippery concept to students. They often asked with embarrassment, and with gratitude that they knew I would not share their “failure” with others.
I very rarely ask for help, even when I know I need it. I’m all about reinventing the wheel – I’ve done it a thousand times. But lately I have truly started to appreciate how much I can learn (i.e., steal) from others if I get over myself and talk to them about what they do in class.
Recognizing this in myself, I am trying to break old patterns, but it is not easy. The universe must have wanted to prove this point, because I faced this very issue recently. A student from a prior term came into my cubicle with a question from a course that I have not taught in a while. I was very distracted with other work but thought I would be able to help him. As I looked at the problem, the active shooter alarm went off. We walked to the back of the office, where I stared at the problem. My thoughts cycled between “I have so much work to do” and “If an active shooter comes back here, we are all nicely clumped together for a massacre.” Every now and then I’d wonder why I wasn’t able to answer a question that I knew I used to be able to answer without hesitation – was I losing my mind? After the active shooter alarm was canceled and we all went back to our desks, I realized that I needed to just ask for help from someone who was fresher with the material so I could get back to the other work I needed to be doing. This felt like admitting defeat instead of a welcome and rare opportunity to see how a well-respected colleague explained a concept that was tough for students.
Elephant in the room: Consistency issues within a department are difficult to discuss, but they are like cuddly puppies compared to the divide between full-time and part-time faculty. At least consistency gets addressed, albeit most often in a rant, and it is a problem that doesn’t divide neatly between full-time and part-time.
Unspoken questions: How much should part-time faculty be involved with curriculum and other aspects of the department and college that are typically handled by full-time faculty? The part-time faculty are overall more willing to embrace the “new” assessment movement that PCC is just starting to define. What does this mean? What is it like to be part-time at PCC? Should full-time faculty care? Should administrators care?
I am too new on the scene to try to answer these questions, but I will share my story: this year I am full-time probationary after teaching part-time at PCC for about three years. Occasionally someone will ask me how I’m handling the “increased workload.” HA! What increased workload? The first time someone asked this, I think I laughed and said I was working less. I’ve been working 10-14 hour weekdays consistently for over a decade now, but I’ve never worked harder than I did as part-time faculty at PCC, nor have I been under more stress. Part of the issue was that my workload was not just at PCC; I was balancing classes and committee work at two colleges while trying to discover what each department wanted. I realized that I wasn’t really working less now, because I had taken on more committee work, but something significant had changed.
The question stayed with me, and eventually I figured out the difference between being part-time and being full-time: I am now “free” to take risks and it is OK to make a mistake (just one, though – I am still probationary). As a part-timer I was very much on the defensive. I was hesitant to share my views in case they were “too out there.” I carefully weighed everything I wanted to say and often stayed silent. I was very much in the error-avoidance frame of mind. I will not speak for part-time faculty at large, but I will say that in talking with other part-time faculty both in and out of the math department, I know these error-avoidance attitudes are not unique. Before there is any misunderstanding, I should mention that PCC’s math department has many wonderful and supportive full-time faculty who do reach out to part-time faculty. If it wasn’t for them, I might have given up and left a career I love.
Administrators like to share at part-time in-service events that the part-time faculty carry most of the instructional load and that the college wouldn’t be able to function without them. One time when this was said, I looked around the room to see if others showed any sign of being as angry as I was. I know the administrators were trying to share their appreciation, but this “compliment” still infuriates me. Part-time faculty do carry more of the instructional burden, and all too often they are not respected or supported. Is it a concern that the part-time faculty carry most of the instructional load for the college but might feel the pressure of an error-denying culture more than the full-time faculty do? If so, what can we do as an institution to change this? Can we create a culture that allows for the freedom to make a mistake without running the risk, real or imagined, of being viewed as incompetent?
Really, students too?: Error-avoidance and error-welcoming also show up in the realm of this “new” assessment culture many of us are just discovering. Formative assessments are assessments that are not graded; they provide only feedback. The Anderson Conference discussed this quite a bit. I have an upcoming blog post with my reflections from the conference where I discuss this more, but the research shows that students are more willing to take risks in their work on formative assessments. They explore ideas more freely than they would if the assessment were summative (i.e., graded). It is all about feeling safe to make mistakes, whether in a meeting with colleagues or as a student in class learning something new.
Prior failures: The Learning Assessment Council (LAC) intentionally chose to leave assessment in the hands of the faculty. They could have recommended that the college hire an outside evaluator who would have measured our success with our students and then reported back our areas of weakness.
Instead of this top-down approach that would have been met with faculty resistance, the LAC felt that faculty at PCC, both full and part time, knew what was best for our students and so charged the SACs to find their way with the “new” assessment mandates.
The LAC realized that asking faculty to “find their way” would not be easy and that they would face a different kind of faculty resistance than the top-down approach would have. Being new on the scene, I do not fully understand the issues that surrounded the prior “assessment” attempts of the last 15 years. I have heard many comments about the fuzziness that happened when SACs were first asked to do program reviews. If I understand what was said, there was little direction but a lot of blame when it wasn’t done well. It seems that the whole process was hoop-jumping with a hoop that was not well-defined and that moved about without warning.
So now faculty are again being asked to review their program or discipline under this “new” assessment idea while there is no concrete direction for how it should be done. Even worse, they are being told that “it is best to show areas of weakness.” I understand why the faculty who experienced the last assessment wave are distrustful and questioning.
Up until now, we have been assessing to show success, which fits an error-denying culture. Suddenly, and with little guidance, the rules have changed. Our accreditation body no longer wants to see measures that show success. They, and the college, want to see assessments that show weak areas and a plan for improvement. This fits nicely with an error-embracing culture, but there is one big catch: prior experiences with the college have created mistrust. How do we know that this work is meaningful and how do we know that we will not be criticized if the first attempt is not perfect?
I’ve been in enough meetings to feel pretty secure that the college is on board and up front with its hopes for this faculty-driven assessment process. If I had any sense that the work I am doing for assessment was hoop-jumping, I would very quickly remove myself as a co-chair of the math department’s learning assessment subcommittee. The freedom to define the college’s core outcomes for mathematics is both liberating and maddening. I can’t count the number of times that I have wished the LAC had gone with the top-down approach, and I can count pretty high. The frustration does eventually pass, and when I meet with the incredible minds that make up our subcommittee or chat with the other co-chair, I feel renewed. I also very much appreciate the critical and questioning minds of the wonderful math faculty who know PCC’s history. They are my reminder to look before I leap. I love the questions they ask and the concerns they bring to the table. I completely believe that this “new” assessment process must be seen through everyone’s eyes and that the concerns must be addressed.
Headway?: Even if we are in an error-denying culture, I have seen some subtle shifts toward an error-embracing one:
1. Our accreditation body wants to see evidence that we have “self-corrected” with regard to assessment (vs. assessing to show success). The college is giving increased support for assessment training to both full-time and part-time faculty for this “new” outlook. [Plug: The last assessment class will be this Spring at Sylvania campus on Friday mornings – contact Shirlee Geiger if interested.]
2. The Learning Assessment Council (LAC) is encouraging an “over the shoulder” look at what different SACs are doing with regard to assessing the college’s core outcomes so we can learn from each other. [Plug: Upcoming in May is the third annual Assessment Circus, and then in June we will hold the second LAC CCOR review of assessment results.]
3. Beautiful things happen when faculty talk. The LAC is fostering communication between CTE and LDC by grouping together faculty members for peer review of assessment plans.
4. There is hope to build “critical friends” groups where instructors can bring classroom issues and failed assignments to colleagues for review and brainstorming in a safe and confidential space.
5. The Teaching Learning Centers (TLC) are providing resources, book groups, and a space to discuss these ideas. They also brought assessment to the forefront with the Anderson Conference.
6. This blog and the many ideas that have surfaced through it are evidence of a shift. Communication and the sharing of ideas are the backbone of moving toward an error-embracing culture. We have to be bold enough to step outside of our comfort zones and speak on issues that may not be well-received by all.
If we are indeed an “error-denying” culture, I believe we can become an “error-embracing” one. It all starts with nice comments on this blog. :)
Seriously, I am very curious to hear what others think about PCC’s culture. If I am wrong about the college being error-denying, in what ways are we error-embracing? If I am right, what are the consequences? There might be some out there who don’t want to make the shift to error-embracing. If so, what are the pitfalls of an error-embracing culture?
Tuesday, February 15, 2011
Gregg Meyer and his life before Teaching....
The Tokyo Two-Step
Circling in on continuous improvement
I enjoy change. Yet curiously, whenever I find myself in a totally new setting, my brain tries to map the present to the past much like Google Maps plunks down green and red pins and draws a zigzagging purple line between them. The transition from industry to education has forced many such correlation ponderings this past year. New acronyms, rules and terminologies spring up almost daily; will they ever slow down? Last Fall, I began hearing the “A” word echoing through our cold concrete corridors until one day it burst its way right through our SAC meeting door. It arrived with less welcome than dear old Uncle Bob received when he showed up just in time for our Thanksgiving feast.
Silently, I contemplated the reactions of my new peers. Were they merely resistant to change? Were they staunch protectors of their freedom to instruct as they pleased? Or, perhaps their apprehensions were symptomatic of “Initiative Fatigue”. You know, that syndrome brought on by the endless fad cycles imposed upon employees by management and their teams of highly paid consultants. Hmmm.
After 30 years in corporate manufacturing I’d become numb to the continuum of packaged paradigms, each hailed as the solution to all our woes: Just In Time (JIT) inventory, Agile, Lean, and Six Sigma (obviously Sigmas one through five failed miserably). The epiphanies never waned: all proof that Dilbert was not a cartoon!
Thus, for those of you whose knee-jerk reaction to “Assessments” has left you with bruised chins, I get it. What in the world will they think of next? How can they make their jobs easier and yours… well… not? I have a surprise: assessments can actually make your job easier and even more rewarding! Skeptical? Read on…
I call it the “Tokyo Two-Step: Hoshin & Kaizen.” These intertwined disciplines are the only two lingo artifacts I’ve observed to have stood the test of time. Pre-dating my manufacturing career by decades, these Japanese expressions ironically have their roots tied to a crusty American gent named Dr. W. Edwards Deming. In the early 1950s, Deming flew to Japan to help rebuild its war-decimated industry by employing his statistical process control techniques. His techniques required no costly measurement devices, and companies like Toyota Motor Company latched onto both his methodology and his unwavering focus on quality. Although Deming went on to pen his famous 14 Points for the transformation of management, his prime directive was simple: “Create constancy of purpose towards continuous improvement.” Hoshin, like many foreign terms, is difficult to distill into a single concept. The interpretation I’ve most easily internalized suggests that Hoshin represents a cyclical process that spirals ever inward towards your Reason for Being.
Pop Quiz: What was Toyota’s first car called? Answer: the “Toyopet” (your likely failure is forgiven, but no worries, future assessment opportunities await :-). Notice how that name never got resurrected? I’ll just go on record that this 27 hp wondercar wouldn’t have been a J.D. Power candidate. Yet post-Deming, Toyota’s encoding of quality into its DNA and its universal buy-in to countless Hoshin cycles have earned it the highest dependability ratings and more quality awards than any auto manufacturer in history. As commercial-like as this sounds, Deming is acknowledged to have had a greater positive impact on Japanese manufacturing than any other individual of non-Japanese heritage (an honor that subsequently earned him the “Order of the Sacred Treasure” on behalf of Emperor Hirohito; of closer-to-home interest, Deming was also awarded an honorary doctorate from Oregon State for his advancements in manufacturing quality).
If Hoshin maps effective planning, Kaizen is the compass. Kaizen represents the daily actions that ever so gradually guide your journey. Together, Hoshin and Kaizen form a reverse-engineering team: dismantling your mission, parsing it into one-to-three-year objectives (read: Assessment Plans), and measuring it in terms of simple, sequential tasks. Each atom of action is to be practiced daily until it gets relegated to your brain’s background processing. Isn’t it ironic that the actions requiring no thought are the ones that most consistently get done? This is Kaizen in action!
Pop Quiz #2: “Did you lock your car today?” Most will answer “yes”, but few can actually recall the specific act (your assessments are showing signs of improvement).
My Kaizen thought experiment:
If I were to take roll every single class period until the process became fully habitual, would I get to know my students’ names more quickly? Would this in turn help build deeper individual connections? Don’t better connections foster trust? If a student were to trust me, perhaps they would share a little more about themselves and what makes them tick. With such insights, I could ensure the points I make in class come from relevant perspectives and improve the odds of students catching the balls I toss. Wow, learning might now really be taking root! When I administer my next exam, I’ll bet they give better answers, and that will save perhaps five hours of grading time. Result: now I can focus on the more interesting aspects of teaching. Step, step, repeat. Win.
Final Quiz:
1. Might this daily work geared toward accumulating Kaizen wins pay off when program assessments come due?
2. “With a constancy of purpose towards continuous improvement,” could Hoshin planning help PCC’s Six Core Outcomes thrive?
3. In the picture below, can you visualize the underlying element leading towards a long-term goal? Can you spot the Hoshin cycles or little Kaizen wins? How might you make real-time adjustments to keep the system in tune?
Tuesday, February 8, 2011
Leaps and other acts of joy....
At that point, he interrupted me. He said:
I found this exchange oddly liberating. If it was widely understood that I believed non-standard things, and saw connections that others thought non-existent, but people still talked to me (at least on most days)... then maybe I didn't need to worry so much!
I mention this because I am about to make one of those odd-ball connections (or leaps) that I am inclined to preface with some sort of warning....
There is a new book out about integrative education, written by one of my heroes, Parker Palmer (The Courage to Teach), along with Arthur Zajonc. It is called The Heart of Higher Education: A Call to Renewal. Palmer and Zajonc are calling for a new approach to best educational practices -- one that is crafted to help people think deeply about their values, discover more of their own potential, search for their own deepest aspirations, and reflect on their place in reality and the nature of the human struggle for meaning. Palmer and Zajonc think educators should educate, not just offer what has the best cost/benefit ratio in the next budgeting cycle.
Many people would take the "bean counter" mentality of the assessment movement to be part of what Palmer and Zajonc are railing against. Assessment is all about trying to measure effects in the world -- those pesky "outcomes" that are "out there" -- and so it skews toward things that are measurable. Deep aspirations, meaning, values.... those things are hard (maybe impossible) to measure. So lots of people worry that in our rush to measure, we will forget what we are trying to do in the first place.
But here's my leap:
To me, it sounds like core outcomes. PCC has some of them. You know about them, right?
And that is what we are asked to assess.
The assessment movement asks that we remember why we are here as educators, why what we do matters.... and then it reminds us that it matters so much that we can't afford to fail. Our world needs self-reflective problem solvers, able to collaborate across divisions of culture and profession -- willing to turn toward our collective social and environmental problems (instead of turning away). Helping create such consciousness and skill is what Palmer and Zajonc say is at the heart of education. The assessment movement then chimes in to say: and please, figure out how to do this well by taking a look at what you are currently accomplishing and not accomplishing in student learning.
It is hard to figure out how to measure what matters. But if you know what matters -- that counts as a good start. Palmer and Zajonc remind us of what matters.
We are all in this together.
Let us collaborate, and in that way help our students learn to integrate. And let us devise ever more accurate ways to tell what we are accomplishing, and what we have left to do. Our students, and our world, will be the better for it....
Assessment drives collaboration around what matters. Palmer and Zajonc ask us to remember what matters, and then collaborate around it. I see assessment taking us to where Palmer and Zajonc say we should be.
Hmmmm. Maybe it's not such a big leap after all....
Tuesday, February 1, 2011
Deborah Sipe's Doctoral Work
Assessment Primer
by Ruth Stiehl & Les Lewchuk
Corvallis, OR: The Learning Organization (2008)
Stiehl’s goal in this work is to present a new way of assessing student learning. Her approach is a reaction and an answer to the many calls for reform aimed at improving student retention, accountability, and meaningful learning, and at assessing student learning appropriately. She calls for a systems, or holistic, view of student learning, in which all the factors involved are intertwined: curriculum development, assessment, and student learning experiences. She thus sees student assessment from the systems perspective: it is one of the many interconnected parts of student learning.
Stiehl calls for viewing student learning from the perspective of how students experience the learning process in college, and she uses the metaphor of rafting a river as a visual organizer. Students enter or “put in” at a certain point, travel along in their learning, encounter rapids (such as tests and projects), and “take out” or leave school at a certain point, preferably when they have completed the program. The student is a paddler on the raft, along with fellow paddlers, and the instructor is the guide. Stiehl sees the river as the knowledge base for students.
To illustrate her suggested approach, Stiehl outlines a river rafting journey, both through storytelling and through visual representation. The process of assessing students thus becomes part of the whole story and the graphic representation. Stiehl begins by noting what is involved in the “put in” phase for a student: figuring out where they’re going, what equipment they need to have, and so on. This description is a useful reminder of the importance of student support services, such as counseling and guidance.
Stiehl essentially asks that faculty see their responsibilities from a new perspective:
- Guiding the paddlers down the river;
- Constructing the river;
- Adjusting the river based on a flow of evidence. (p. 20)
This metaphor emphasizes the importance of formative assessment; the continual flow of assessment information about student learning aids the instructor in “adjusting the river.” The metaphor also reminds the reader of an essential element of systems thinking: that continual information is needed by the system to adjust and reorganize itself. Thus, assessment information helps the instructor adjust the curriculum to meet student learning needs in order to reach the desired outcome.
Stiehl notes that intended student outcomes are the “take out” phase of a student’s learning journey. She strongly advocates that intended outcomes, what instructors expect students to be able to do as a result of the specific learning journey, should drive curriculum development. In order for student outcomes to provide clear direction for curriculum development, they need to be “robust”: clear yet complex and flexible. Stiehl provides a number of examples to illustrate her concept of robust outcomes. She notes the difference between program and college-wide outcomes and those intended for courses.
In carrying out student assessment, Stiehl advocates creating student work tasks that are appropriate and that occur during the flow of student learning. She stresses the importance of authentic tasks to make the learning more meaningful, and notes that the periods between assessments are “wake-up rapids,” when students gather information, study issues, and so on, to prepare for tests or assignments. Stiehl sees the patterning of assessments as part of the overall “program map” for helping students negotiate their journey down the river of learning. The “rapids” students encounter should be distributed throughout the learning process, to allow instructors to use the assessment information, and should culminate in something like a capstone project at the end of the course or program. Such a project, in which the student takes the gained knowledge and applies it, allows the instructor to note what has been learned, whether the student has applied it, and, ultimately, whether the student outcome has been achieved.
Stiehl does not deal directly with student assessment until the third part of her book, after she has set assessment within the whole system of student learning. She explores three areas here: why student learning should be assessed, what should be assessed, and how it should be assessed. Stiehl asserts that student learning is assessed to assist, to adjust, and to advance, with assessing to assist being the most important of the three because it is so crucial in helping to guide the learning experience. She notes that when assessing to advance, instructors should focus not on what the student has achieved, but on whether the student is ready to go forward to the next learning experience. The concept of assessing to adjust speaks both to the need to review assessment information in order to adjust how curriculum is presented to students, and to the need to review assessment processes to ensure they are providing the information needed. This point could be more clearly explained. Stiehl contends that what should be assessed is not only the students, but also the challenges of the content and the process itself. This viewpoint again reflects the systems thinking approach in that there are interconnections amongst the students, the process, and the content.
In the following section, Stiehl focuses on a key question: how to decide the criteria for quality in student work. She notes the importance of professional judgment and student input, but also the influence of social values on this question. As a result, there can be no perfect, objective criteria; rather, Stiehl argues for a process that can be used to determine the criteria for quality. Later, however, she asserts that “good criteria is criteria that assists students, provides data for making decisions, about advancing the student, and making decisions about adjusting learning experiences” (p. 72). Stiehl then discusses three types of assessment tools: the checklist, the scoring guide, and the rubric, with many visual illustrations. As to the timing of assessment, she notes that data can be gathered at any point in the student’s learning journey and that the accumulated evidence, like the confluence of smaller streams into a river, provides a rich amount of information. Stiehl notes the value of indirect evidence of learning, such as student satisfaction surveys, as well as direct evidence. She also argues that the entire college system itself can and should be a part of this evidence-gathering process. In her final chapters, Stiehl discusses methods for course adjustment, then expands the idea of adjustment into a discussion of what is involved in a program review.
Overall, Stiehl’s work is a very comprehensive view of the assessment process as it occurs in the context of student learning and within the college structure itself. As such, she views assessment both in terms of a linear system and in terms of the wider system of the college. Stiehl very carefully explains her metaphors through stories, visuals, and examples. Also using a systems approach, she attends to the details of the assessment process, addressing the key questions of why, what, and how. The tone of her prose is positive, inspiring, and hopeful, and her narrative is clear and informal, yet continually instructional. Her work provides clear guidance to those embarking on the journey of assessment, as well as to those who are already doing so but lack clear guidance on its role in student learning.