Incoming Chair, Faculty Learning Assessment Council
For most of my life, I was a reader of fiction. I remember that time with nostalgia, because my reading habits changed a while back, and then changed again. About a decade ago I found myself reaching for non-fiction instead of novels -- biographies, historical accounts written for non-historians, popular narratives of scientific discovery, and then (slowly) economics and business ethics. That wasn't as much fun as a gripping novel, actually, but well-written non-fiction had gotten its own grip on me. I read that stuff broadly and frequently.
In the last two years, though, I find myself growing nostalgic for my broad non-fiction period, because these days I read about assessment, and the accountability movement, and then assessment some more. I often read just before going to bed, and this assessment reading shows up in my dreams. I won't call them "nightmares" exactly, but I am pretty sure I had better-quality dreams in my fiction-reading days....
I read something earlier this term, and I have been waiting for a break in the blog line-up to bring it to you. (And thanks to all the guest bloggers, by the way. This blog has had lots of traffic, and I hear informally that many of the ideas presented here have become part of F2F conversations across our many campuses. I love good conversations where multiple points of view are well represented. Then we have a beautiful array of thoughts to think about and talk about, and think about some more -- and the conversations grow ever more productive. Thanks to all who have put their minds and energies into thinking and talking about assessment here at PCC.)
The something I waited to share appeared in a little newsletter, Assessment Update, which has become one of my favorites in this new reading phase of mine. People involved in higher ed in some capacity or other -- faculty, academic professionals, administrators -- write short pieces on some aspect of assessment at their institutions. Often these are first-person narratives, usually told as tales of challenge and success. (Nothing is so gripping to me these days as an adventure story with an academic hero or heroine!) And since I can relate to the challenges and risks and obstacles being faced, the sense of dramatic tension builds with each new paragraph....
So the one I want to write about comes from Assessment Update, Volume 22, Number 5. It was written by Trudy W. Banta and describes work at Marquette University. Like PCC, Marquette decided to go with a faculty-owned assessment process. And the heart of their approach, like ours, is faculty peer review of assessment plans. They hold a half-day end-of-year peer review session, just like the one Sylvia Gray pioneered last spring. And as in our experience, their faculty reported loud and clear that they liked the chance to talk across discipline lines and collaborate around the common institutional mission.
Then Trudy went one step further.... She created a rubric for judging how far Marquette has gone in creating a "culture of evidence." And I want to share that rubric with you.
At the Anderson Conference this year, you will have a chance to learn from some local assessment experts -- actually, from the assessment group that first got me to change my reading habits.... From them I learned that the simple act of providing people with rubrics at the start of a class drives better summative scores at the end of the class. Rick Stiggins says that students are much more likely to hit a target when they know what and where it is, and it doesn't keep moving....
So this rubric defines our "target." (That sounds like target practice, which is not the greatest metaphor for me, but I still like the basic point.) I believe, as Stiggins says, that we'll be more likely to succeed if we know where we are aiming to go....
| Assessment Component | Beginning Assessment System | Meets Expectations for Assessment System | Assessment System Reflects Best Practices |
| --- | --- | --- | --- |
| Learning Outcomes | Program learning outcomes have been identified and are generally measurable. | Program learning outcomes are measurable. Learning outcomes are posted on the program website. | Posted, measurable program learning outcomes are routinely shared with students and faculty. |
| Assessment Measures | General measures are identified (e.g., student written assignment). | Specific measures are clearly identified (e.g., student global case study in the capstone course). Measures relate to the program learning outcomes. Measures can provide useful information about student learning. | Multiple measures are used to assess a student learning outcome, with emphasis on specific direct measures. Rubrics or guides are used for the measures (and they are routinely normed). Measures are created to assess the impact on student performance of prior actions to improve student learning. |
| Assessment Results | Data are collected and aggregated for at least one learning outcome. | A majority of learning outcomes are assessed annually. Data collected and aggregated are linked to specific learning outcome(s). Data are aggregated in a meaningful way that the average reader can understand. | If not all learning outcomes are assessed annually, a rotation schedule is established to assess all learning outcomes within a reasonable time frame. Data are aggregated and analyzed in a systematic manner. Data are collected and analyzed to evaluate prior actions to improve student learning. |
| Faculty Analysis and Conclusions | All program faculty receive annual assessment results. Faculty input about the results is sought. | All program faculty receive annual assessment results and designate program or department faculty to meet to discuss assessment results in depth. Specific conclusions about student learning are made based on the available assessment results. | All of the previous level, and faculty synthesize the results from various assessment measures to form specific conclusions about each performance indicator for that learning outcome. |
| Actions to Improve Learning and Assessment | At least one action to improve learning or improve assessment is identified. The proposed action(s) relate to faculty conclusions about areas for improvement. | The description of the action to improve learning or assessment is specific and relates directly to faculty conclusions about areas for improvement. The description includes a timetable for implementation and identifies who is responsible for the action. Actions are realistic, with a good probability of improving learning or assessment. | All of the previous level, and the planned action includes assessment methods and a timetable for assessing and evaluating the effectiveness of the action. |
Where is your SAC in this process? Where is PCC?
I have a good idea of the answers to both questions, because I have been reading through both the SAC assessment plans AND the peer reviews from our session in November. Different SACs are in different places, but we are all of us on this chart somewhere.
This stuff is good reading, and it makes me proud and happy to be part of what PCC is doing to serve our students ever better, and through them help shape a better future for our community and for our world.
The position of chair of the Learning Assessment Council rotates, and I know that in at most another 18 months, I will be handing the leadership off to someone else. (Interested?) Maybe at that point my reading habits will change again. If you've read any good fiction lately, maybe you could let me know.... In the meantime, there is a tall stack of assessment stuff waiting for me. (I'd be happy to share!) From my reading, I know that the ground rules of higher ed are changing across the globe. As with all big changes, some of it is for the worse.... but some is for the better. A faculty-led search for evidence to identify the best teaching and learning practices has a good chance of landing in the for-the-better category. Thanks to all who are working to make it so....