Tuesday, December 7, 2010

A rubric for assessing our Assessment Progress

Shirlee Geiger
incoming Chair, Faculty Learning Assessment Council

For most of my life, I was a reader of fiction. I remember that time with nostalgia, because my reading habits changed a while back, and then changed again. About a decade ago I found myself reaching for non-fiction instead of novels -- biographies, historical accounts for non-historians, popularized narratives of science discoveries, and then (slowly) economics and business ethics stuff. That wasn't as much fun as a gripping novel, actually, but well-written non-fiction had gotten its own grip on me. I read that stuff broadly and frequently.

In the last two years, though, I find myself growing nostalgic for my broad non-fiction period. Because these days I read about assessment, and the accountability movement, and then assessment some more. I often read just before going to bed, and this assessment stuff shows up in my dreams. I won't call them "nightmares" exactly. But I am pretty sure I had better quality dreams in my days of fiction reading....

I read something earlier this term, and I waited for a break in the blog line-up to bring it to you. (And thanks to all the guest bloggers, by the way. This blog has had lots of traffic, and I hear informally that many of the ideas presented here are parts of F2F conversations across our many campuses. I love good conversations, where multiple points of view are well represented. Then we have a beautiful array of thoughts to think about and talk about, and think about some more -- and the conversations are ever more productive. Thanks to all who have put their minds and energies to thinking and talking about assessment here at PCC.)

This something I waited to share on a blog appeared in a little newsletter, Assessment Update. That newsletter has become one of my favorites in this new reading phase I am in. People involved in Higher Ed in some capacity or other -- faculty, academic professionals, administrators -- write up short little pieces on some aspect of assessment at their institution. Often they are first-person narratives, usually told as tales of challenge and success. (Nothing is so gripping to me these days as an adventure story with an academic hero or heroine!) And, since I can relate to the challenges and risks and obstacles being faced, the sense of dramatic tension builds with each new paragraph....

So the one I want to write about comes from Assessment Update, Volume 22, Number 5. It was written by Trudy W. Banta from Marquette University. Like PCC, Marquette decided to go with a faculty-owned assessment process. And the heart of their approach, like ours, is a faculty peer review of assessment plans. They have a half-day end-of-year peer review session, just like the one Sylvia Gray pioneered last spring. And like our experience, their faculty reported loud and clear that they liked the chance to talk across discipline lines and collaborate around the common institutional mission.

Then Trudy went one step further.... She created a rubric for judging how far Marquette has gone in creating a "culture of evidence." And I want to share that rubric with you.

At the Anderson Conference this year, you will have a chance to learn from some local assessment experts -- actually, from the assessment group that first got me to change my reading habits.... From them I learned that the simple fact of providing people with rubrics at the start of a class drives better summative scores at the end of the class. Rick Stiggins says that students are much more likely to hit a target when they know what and where it is, and it doesn't keep moving....

So this rubric defines our "target." (Sounds like gun practice, which is not that great a metaphor for me, but I still like the basic points.) I believe, as Stiggins says, that we'll be more likely to succeed if we know where we are aiming to go....

The rubric describes each assessment component at three levels: a Beginning assessment system, an assessment system that Meets Expectations, and an assessment system that Reflects Best Practices.

Learning Outcomes

  • Beginning: Program learning outcomes have been identified and are generally measurable.
  • Meets Expectations: Program learning outcomes are measurable, and the outcomes are posted on the program website.
  • Best Practices: Posted, measurable program learning outcomes are routinely shared with students and faculty.

Assessment Measures

  • Beginning: General measures are identified (e.g., a student written assignment).
  • Meets Expectations: Specific measures are clearly identified (e.g., a student global case study in the capstone course). Measures relate to the program learning outcomes and can provide useful information about student learning.
  • Best Practices: Multiple measures are used to assess a student learning outcome, with emphasis on specific direct measures. Rubrics or guides are used for the measures [and they are routinely normed]. Measures are created to assess the impact on student performance of prior actions to improve student learning.

Assessment Results

  • Beginning: Data are collected and aggregated for at least one learning outcome.
  • Meets Expectations: A majority of learning outcomes are assessed annually. Data collected and aggregated are linked to specific learning outcome(s), and data are aggregated in a meaningful way that the average reader can understand.
  • Best Practices: If not all learning outcomes are assessed annually, a rotation schedule is established to assess all learning outcomes within a reasonable time frame. Data are aggregated and analyzed in a systematic manner, and data are collected and analyzed to evaluate prior actions to improve student learning.

Faculty Analysis and Conclusions

  • Beginning: All program faculty receive annual assessment results, and faculty input about the results is sought.
  • Meets Expectations: All program faculty receive annual assessment results and designate program or department faculty to meet to discuss assessment results in depth. Specific conclusions about student learning are made based on the available assessment results.
  • Best Practices: All of the previous level, plus faculty synthesize the results from various assessment measures to form specific conclusions about each performance indicator for that learning outcome.

Actions to Improve Learning and Assessment

  • Beginning: At least one action to improve learning or improve assessment is identified, and the proposed action(s) relate to faculty conclusions about areas for improvement.
  • Meets Expectations: The description of the action to improve learning or assessment is specific and relates directly to faculty conclusions about areas for improvement. The description includes a timetable for implementation and identifies who is responsible for the action. Actions are realistic, with a good probability of improving learning or assessment.
  • Best Practices: All of the previous level, plus the planned action includes the assessment methods and timetable for assessing and evaluating the effectiveness of the action.
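(An aside for the spreadsheet-and-script-minded among us: here is one minimal, purely illustrative way a SAC could jot down its self-rating against the rubric, sketched in Python. Nothing below comes from Banta's article except the component and level names; the helper function and the sample rating are my own invention.)

```python
# Illustrative only: the rubric's rows and levels as plain Python data,
# plus a tiny helper for recording where a SAC thinks it stands.
# The sample self-rating at the bottom is invented, not a real SAC's.

RUBRIC_LEVELS = ("Beginning", "Meets Expectations", "Best Practices")

RUBRIC_COMPONENTS = (
    "Learning Outcomes",
    "Assessment Measures",
    "Assessment Results",
    "Faculty Analysis and Conclusions",
    "Actions to Improve Learning and Assessment",
)

def summarize(self_rating):
    """Print one line per rubric component, flagging 'Beginning' levels."""
    for component in RUBRIC_COMPONENTS:
        level = self_rating.get(component, "not yet rated")
        assert level == "not yet rated" or level in RUBRIC_LEVELS
        flag = "  <-- a place to focus next" if level == "Beginning" else ""
        print(f"{component}: {level}{flag}")

# A purely hypothetical SAC might record itself like this:
summarize({
    "Learning Outcomes": "Meets Expectations",
    "Assessment Measures": "Beginning",
    "Assessment Results": "Beginning",
    "Faculty Analysis and Conclusions": "Meets Expectations",
    "Actions to Improve Learning and Assessment": "Beginning",
})
```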

Where is your SAC in this process? Where is PCC?

I have a good idea of the answers to both questions, because I have been reading through both the SAC assessment plans AND the peer reviews from our session in November. Different SACs are in different places, but we are all of us on this chart somewhere.

This stuff is good reading, and it makes me proud and happy to be part of what PCC is doing to serve our students ever better, and through them help shape a better future for our community and for our world.

The position of chair of the Learning Assessment Council rotates, and I know that in at most another 18 months, I will be handing the leadership off to someone else. (Interested?) Maybe at that point, my reading habits will change again. If you've read any good fiction lately, maybe you could let me know.... In the meantime, there is a high stack of assessment stuff waiting for me. (I'd be happy to share!) From my reading, I know that the ground rules of higher ed are changing across the globe. Like all big changes, some of it is for the worse.... but some is for the better. A faculty-led search for evidence in order to identify best teaching/learning practices has a good chance of being in the for-the-better category. Thanks to all who are working to make it so....

Sunday, November 28, 2010

David Rives, President of the American Federation of Teachers-Oregon

Ensuring Student Success in Oregon: Faculty Perspective

This year, a Legislative Workgroup for Higher Education was formed to look at several proposals coming forth to reorganize the Oregon University System (OUS). The Workgroup asked two major consulting groups, the Western Interstate Commission for Higher Education and the National Center for Higher Education Management Systems, to assist with a proposal to restructure the higher education system in Oregon. That proposal has been set forth in Legislative Concept 2861, which will likely be drafted into a bill this next session. The legislature is also going to see a proposal for a restructuring of OUS from the State Board of Higher Education, as well as a proposal for the University of Oregon to become more independent from the state system. All of these proposals go further than just reorganizing the governance boards that run the institutions of higher education in our state. They also seek to connect funding for education to new metrics for accountability and performance.


Currently, state funding is disbursed to each community college and university using formulas that are based largely on enrollment totals. Enrollment numbers can offer an indication of the accessibility of public education, but they don’t encapsulate other goals of higher education. The federal government, the state governor and legislature, and numerous think tanks and foundations are all pressuring higher education institutions to use new metrics in determining how we are meeting our goals. (The Northwest Commission on Colleges and Universities’ desire that PCC demonstrate learning outcomes for students is just one example of this.) Oregon already has a policy goal that, by 2025, 20% of the population will have at least a high school education, 40% will have at least some college (e.g., an associate degree or a certificate), and 40% will have at least a bachelor’s degree. 40/40/20 serves fine as an aspirational goal, but many educators, particularly at community colleges, would find it worrisome if colleges were to receive funding only according to how many degrees they awarded. Of course, these kinds of numbers-driven measures often have more in common with industry than education.


Even though the written reports of the consultants and the Chancellor of the OUS mention the need to utilize a variety of metrics in assessing higher education programs and institutions, a lot of the testimony and discussion at the legislative level has revolved around statistics that are easy to collect, like how many degrees an institution or program awards, or how many traditional, full-time, first-year students enroll for a second year. Taking into account the number of degrees granted and the successful completion of programs of study is important. Most educators want to see as many students as possible complete their degrees—it is a measure of success and a measure of attaining goals. A case can be made that completing the classes required for a degree can contribute toward being a more educated and civically minded community member. But let’s not mistake metrics based on diplomas and degrees for the main measure of student success.


That brings up another point in the whole debate over accountability and performance. Largely absent from the discussion surrounding these accountability metrics, and some of the business terminology like “performance” and “productivity,” is the purpose of higher education. Higher education develops the capacity for abstract thought and critical reasoning. If we are going to assess what leads to quality education, we’re going to have to look at what’s involved in developing critical reasoning skills and abstract thinking.


Assessing such skills cannot be done through any easy, one-size-fits-all model applicable to every institution and program of study in the state. The American Federation of Teachers website “What Should Count” (http://www.whatshouldcount.org/) presents some of the latest ideas for assessing student success and institutional accountability. One example of an assessment framework that looks at what a higher education should offer is the Essential Learning Outcomes (ELOs) from the Association of American Colleges and Universities (http://www.aacu.org/leap/students/elo.cfm). These are broad learning outcomes that promote the kind of educational experience all students should have in some form. Here are the four main areas these outcomes cover:


  1. Knowledge of human cultures and the physical and natural world;

  2. Intellectual and practical skills, including inquiry and analysis, critical and creative thinking, written and oral communication, quantitative literacy, information literacy, and teamwork and problem solving;

  3. Integrative learning, including synthesis and advanced accomplishment across general and specialized studies;

  4. An understanding of key issues concerning personal and social responsibility, including civic knowledge and engagement, intercultural knowledge and competence, and ethical reasoning.



Although these standards are relatively straightforward, it is not a simple matter to implement them. They have to be built into a program’s curricula. Teaching and assessment practices need to be designed to achieve results for the institution. This means a cooperative and coordinated effort among administrators, faculty and staff, both at the disciplinary level and the cross-disciplinary level. The American Federation of Teachers-Oregon wants to ensure that any statewide measures are aimed at truly assessing student success and that they involve faculty and staff in the process. At PCC, the involvement of faculty and staff in creating a set of tools to assess our learning outcomes at the program level is an example of what every college and university will have to do if we educational professionals want to set the standards and not allow them to be dictated to us by outside groups. By defining the standards of quality education ourselves, we will also be able to look into how we can improve it in meaningful ways.


David Rives is the President of the American Federation of Teachers-Oregon

Tuesday, November 16, 2010

Martha Bailey: TLC coordinator's point of view

Assessment as Professional Development

According to some philosophers, all of us are ethical egoists, and we only choose to act when an action is to our benefit. While I do not hold to that as the best explanation for ethical behavior, I think the theory provides a useful position from which to discuss learning assessment with PCC faculty. We do many things as faculty because we are told they are part of the job: writing syllabi, conducting classes, grading—oh, yes—we have other motivations, too, but sometimes it just comes down to: I have to do this. Learning assessment, particularly at the program level, comes with that kind of mandate, as well as some extrinsic motivations: do this, and do it well, or we could lose accreditation, and that has more than a little impact on a community college like PCC.


But, what if we take the egoist’s view and ask, “What’s in it for me (besides keeping my job, and helping students advance)?” Where do we find the personal part of assessment of learning? I want to suggest that, at least in part, the answer is this: assessment gives us a tool for professional development. But before I pursue that idea, I want to acknowledge a position raised in a comment on last week’s blog. Jayabrush wrote, “These discussions have in larger part been ones about sovereignty, and worthwhile ones, I might add. But I wonder if we need to have a more frank recognition that underneath all these discussions is a base fear, teachers' fear that if they are observed and found to "not teach well" then they might be fired instead of given an opportunity to improve.”


A similar fear was raised in an earlier posting, too. Both comments note that there is a real risk if the only use of assessment of the individual faculty member’s work is punitive. And it does happen, particularly for part-time faculty. Once a person is hired, outside of egregious actions, he or she will continue to be given classes, because department chairs need to fill teaching slots. In the last (I’m not sure how many) years, the level of evaluation of these instructors’ teaching performance has been minimal—until an instructor applies for assignment rights, that key to access to staff development funds and other opportunities. Once this is done, if the instructor is denied such rights (and that does happen), the person can no longer teach at PCC, at least in that subject area. I’ve seen this happen to instructors who had taught for years, but never applied for the rights. So, of course, these new moves to assessment can appear to be another punitive move.


But there is another way to view assessment, and one that might even address the fears. An aside here: while the mandated assessment for accreditation is at the program level, and is not intended to single out a particular instructor, it is possible that, over time, one instructor’s classes may be deemed to be less successful than those of other instructors. If that happens, I would advocate moving the person into a plan similar to the one I am about to describe. But if assessment is considered an avenue for becoming as effective an instructor as I can be, then I no longer need to fear it, though it probably will be uncomfortable at times. And if I can control when and how the assessment happens (not waiting until it is demanded), and even recruit help from my choice of allies, then assessment becomes a tool for both student learning and for faculty learning: assessment becomes a two-way street (the last phrase I have borrowed from a TLC brain-storming session on assessment, and I don’t recall who came up with the phrase).


What I mean is that we use assessment of various forms in the classroom to determine whether students are learning. And formative assessment, in particular, is designed to help both students and faculty see where students are succeeding, and where they need more work. But it can do the same for faculty: if students aren’t “getting it”, they often will offer suggestions of ways to help them. Not all of that feedback will be equally useful: some is worthless, while other pieces are absolute gems of insight.


The most useful level for assessing for professional development, then, is the classroom level, and not just toward the end of a class (the traditional student evaluation). It would be nice to be able to use longer-term, post-graduation assessment as well, but at the moment that isn’t practical. Rather, for the best possible interactive assessment and improvement throughout a course, there needs to be assessment planned by the instructor, and assessment offered by the students spontaneously. The class must be structured and carried out in an open and welcoming manner for the latter to be offered. For either of these to be of any benefit toward professional development, the instructor does have to be willing to take the feedback as encouragement to improve and not simply as criticism. Someone who can work with student feedback, differentiating between the feedback of value and that offered with other intent, and who becomes a better instructor, will be able to approach evaluations by administrators with much greater confidence.


What continuous formative assessment in the classroom means will obviously vary with the course being assessed, since there are many types of courses offered at PCC. But the idea, and one that doesn’t need to add to the instructor’s time burden in the way Phil Seder described in his posting, is to regularly do small assessments of what is happening in class, and make course corrections along the way. Now, sometimes the needed course correction will be one that cannot be applied until the next time the course is taught—if my major assignment needs work, I’m not going to fix it for students this time around. But if I get feedback as we are going through, I can note that and include it in my course development, rather than having it come later as another task (this is something like the course development feedback loop Peter Seaman discussed).


The other piece here is that students will not only speak of course content and materials: sometimes they address aspects of course delivery, that is, issues related to the instructor directly. This offers the biggest opportunity for professional development—to work on me and my skills. And if this is coming continually, then when I see an opportunity coming up to learn in a given area where I am weak, I can jump on it. And, yes, it may mean the students get fewer comments on a piece of work, but if it leads to me being a more effective instructor overall, that benefit greatly outweighs the small harm.


Now, some might say that I am writing a piece such as this blog for egoistic reasons: after all, I do coordinate the TLC (Teaching Learning Center) at Cascade, and we do offer some of those sessions you might come to for improvement as an instructor. And I won’t deny that is somewhat true. But part of what I have learned as a TLC coordinator is that students benefit (learn more) when faculty continue to develop their skills; students may learn from less-skilled instructors, but they truly appreciate getting to “study with” a highly effective teacher. If we can let the students assess us even as we assess them, then learning truly does become “everyone’s business.”

Monday, November 8, 2010

A "Scientific" Basis for Assessment? By Peter Seaman

Phil Seder made a couple of points in his blog entry that I would like
to use as jumping-off points. But first a few words about me and where
I stand regarding assessment:


I started off in higher ed as an English teacher. I taught freshman
composition and literature courses, and I frankly struggled with
questions of worth - How do I know whether my students are getting
anything worthwhile from my classes? Is teaching really worthwhile, or
is it a waste of time? Could the students get the same thing if they
just read the textbook? How many drinks at happy hour will it take me
to forget these weighty questions? (I asked that last question only on
Friday afternoons). Eventually I became really curious about the
process of education and I started taking education courses on the
side. I even went so far as to get a certificate in teaching English as
a second language (ESL teachers use some fascinating methods), and then
I dove in head-first: I went to grad school full-time to get a master's
degree in something called Instructional Systems Technology.


"Instructional Systems Technology" sounds very intimidating, but if you
sound out the words, it's not so bad. The word "systems" implies that
instructional designers (or IDs) take a "systems" view of instruction,
meaning that we look at issues like repeatability, reliability,
validity, and so on. The word "technology" implies that we look for
ways to infuse technology in instruction (technology can be a method or
set of techniques, as well as a computer program).


Everyone who studies IST, or "instructional design" (its more common
name), learns a model called the ADDIE model.


ADDIE:
A = Analyze (the learning task, the learner, and the learning context)
D = Design (the instructional intervention - can be a book, an activity,
a computer application, or a combination of these, and much more)
D = Develop (the instructional intervention, also known as "production"
- a painful word for most IDs)
I = Implement (the intervention, or try it out)
E = Evaluate (the results of the intervention, or find out how it worked
- what long-lasting results it had upon behavior, cognitive functioning,
etc.).

The process is iterative: we use the results of evaluation to improve
instruction continually (draw an arrow from "E" back up to "A" and we
can dump the results into our next "analysis" phase, and start the
process again).
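
For readers who think in code, here is a toy sketch of that loop.
Every function name and return value below is invented for
illustration - this is not any real instructional-design library,
just the arrow from "E" back to "A" made literal:

```python
# A toy rendering of the ADDIE cycle as a loop. All names and return
# values are made up for the sketch.

def analyze(prior_findings):
    """Study the learning task, the learners, and the context."""
    return {"task": "write a lab report", "informed_by": prior_findings}

def design(analysis):
    """Plan the instructional intervention (activity, materials, etc.)."""
    return {"intervention": "guided peer review", "based_on": analysis}

def develop(plan):
    """Produce the materials -- the 'production' step."""
    return {"materials": ["rubric handout", "sample report"], "plan": plan}

def implement(materials):
    """Try the intervention out with real learners."""
    return {"delivered": materials}

def evaluate(implementation):
    """Find out how it worked (canned findings, for the sketch)."""
    return {"worked": ["rubric handout"], "revise": ["sample report"]}

findings = {}  # nothing to go on in the first pass
for cycle in range(3):
    # the arrow from E back up to A: evaluation feeds the next analysis
    findings = evaluate(implement(develop(design(analyze(findings)))))
    print(f"after cycle {cycle + 1}: {findings}")
```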


You can see that ISD (instructional systems design - it goes by many
names) is really comfortable with evaluation. When I heard that the
accreditation authorities want PCC to use the results of assessment to
improve future instruction, I thought, "Of course - that's what you use
results for."


Okay, back to Phil Seder.

Phil made an excellent point in his blog entry about instructors doing
all of the steps - analyzing, designing, developing, implementing,
evaluating, which is analogous to washing the turkey, dressing the
turkey, cooking the turkey, serving the turkey, washing the dishes
afterward, and asking everyone how it tasted so you can improve your
recipe for next time. Phil has actually put his finger on one of the
great frustrations of my profession as an ID, namely the unwillingness
of instructors to *allow* anyone else to assist with any part of their
personal ADDIE process! Many instructors become so comfortable with
their own way of teaching that they view any advice or consultation as
an intrusion (I've even known instructors who refuse to let another
instructor observe any portion of their teaching process).


I'll admit: Letting someone else critique your professional work can be
very scary. And evaluation itself is unfortunately fraught (in many
organizations) with performance implications (administrators can use
evaluation results against teachers). And certainly there are control
issues (if I get to control every aspect of the instructional process,
then it will happen the way I want; to give away control to someone else
is to invite the possibility of loss of control). My larger point is
that instruction improves only through regular evaluation. To give away
control of some aspect of the process is to open up oneself to growth -
again, as long as there are protections within the organization.


Phil talked about how hard assessment is to do, and I agree. Sometimes
I wonder if it isn't better to prepare students for some kind of
external assessment - like the aviation instructors who prepare students
for FAA exams, or the real-estate instructors who prepare students for
state licensing exams. At least in these cases I can know the
assessment is external to me and free from my own bias. But it's still
scary because I as an instructor lose control of the situation the
moment the student walks out of my classroom (whether onground or
online). And of course when instructors are measured by how well their
students perform on standardized tests, instructors will "teach to the
test," which unfortunately limits learning to the confines of the test.


I guess I would close this blog entry by pointing to the necessity to
show results in almost any professional endeavor, and to wonder why
higher ed has been able to NOT show results for so long (sorry about
splitting that infinitive!). A few years ago, I overheard an instructor
at PCC say, "A student should take my course because she or he will be
better for having taken it." Okay, I accept that statement. But could
you say the same thing about a walk around the building? ("The student
will be better for having taken a walk around the building"). I'm sure
that every instructor knows, in the deepest chambers of the heart, that
students are better for having taken a course. But the fact is, it's
not enough in any profession to assert some value with no evidence of
value. And my profession of ISD absolutely depends on this evidence.
Think about it: If anyone can design instruction that works as well as
any other instruction, where is the value of the designer? I look at it
like riding in an airplane: When I first rode an airplane, I was really
frightened because I did not think there was any basis for an airplane
to stay aloft - it seemed like magic. But when I talked to pilots and
read books about aviation (kids' books are great for this kind of
explanation), I realized that aviation depends upon the application of
scientific principles: move a specially shaped object (fuselage with
wings) over the ground (and through the air) fast enough, and the object
will rise in the air - it will take off! It has to - the physical
properties of the earth demand it. So why can't we apply the same
principles in instruction: apply certain forces and depend on a certain
result?


Of course you say, "But students are not objects, uniformly shaped,
moving through the air at a certain speed." And of course, you are
correct! Students are humans and therefore arrive in the classroom with
an endless variety of behavioral, cognitive, and psychomotor attributes
that are incredibly hard to divine. But we have to start somewhere, and
we do that by applying certain interventions and measuring their
result. As long as we have organizational safeguards so that evaluation
data is not misused, we should not fear evaluation - it can only make
instruction more effective.

Tuesday, November 2, 2010

Phil Seder's Assessment Angst

I come from the world of business marketing. In that world, we plan, we produce and we measure success in order to refine programs and improve outcomes. I believe this to be a valid approach. I teach my students that it is an essential approach in the business world.


I thus find myself perplexed by my internal resistance to the idea of assessment that is being pushed down from the highest levels of government and has now come to roost at the community college level. There is no doubt in my mind that we can improve delivery and there is no doubt in my mind that assessment can lead to refinements in course content or delivery that results in better outcomes. So why my angst?


I think it is because the business world differs in a fundamental way from the world of classroom education (well, in a bunch of ways actually, but I'm going to deal with just one here). In business, marketers work with researchers to determine consumer or business needs. Those marketers pass requirements on to designers and engineers who develop the products or services. The engineers pass the designs to manufacturing and sales to create and deliver the products. And finally, the marketers (often with the aid of accountants or analysts) assess the results before another round of the same is initiated.


Now let's look at who performs these tasks in the education world. Basic research and determination of customer needs? At the classroom level at least, the teacher. Product design? The teacher. Product development? The teacher. Product delivery? The teacher. Product assessment? The teacher.


This is not to say that administrators do nothing. There are monumental issues of overall program development (e.g., should we offer a nursing program?), new program implementation, facilities construction and maintenance, marketing, budgeting, negotiation, discipline and, ah yes, even employee supervision. I thank my stars that trained professionals stand ready to perform these essential duties.


But the bottom line is that the classroom teacher is responsible for every aspect of the development and delivery of the product in the classroom. In other words, they perform tasks that are the responsibility of numerous differently trained professionals in the business world. I know this because I've managed products in both worlds and, frankly, it is one of the things that excites me about the daily challenge of teaching.


But therein lies a conundrum. To teach better, we are being told, we need to assess more. We need to be more like the measurement-driven business world. The teacher, though, is not like the business professional. They are responsible for the entire life cycle of product development and delivery in the classroom. To assess more means that they either have to 1) work more or 2) spend less time on planning, development and delivery.


Now we all know that there are those who can work more. But for many teachers I see around me, their waking hours during the academic year are filled with the activities of course development and delivery. I doubt if many would look kindly at giving up the few moments of personal time they have during the week. As to the old saw "work smarter, not harder," well, it's a trite and meaningless age-old piece of business wisdom, custom canned for delivery to the survivors of corporate layoffs as they contemplate a future of 24/7 employment. At the very least, when I have been in those situations, I had the comfort of higher compensation to balance the loss of personal and family time.


Today's reality of an assessment-driven education system, though, is that the classroom teacher will have to cut back on planning and delivery activities to respond to the assessment demands. And the more dedicated the teacher, the more they will need to cut back, since they are the ones who already spend their entire waking time engaged in better delivery. The result: we know what we're getting, but what we know we're getting is worse than what we were producing before, when we didn't know what we were getting. Read that twice, if need be; then, especially if you have been teaching for a decade or more, contemplate whether this does not seem to precisely speak to the ultimate results of modern educational policy at the high school level.


I see it in my own work. Even in the past five years, I have seen a creeping slide towards more meetings, more time discussing the nuances of outcomes language (e.g., our graduates don't communicate with coworkers, they interact with coworkers), and more time discussing assessment. Where I have cut back is, first, in my personal life, where I now spend limited time on my once-thriving second life as a sculptor, and second, in my course preparation and delivery. It's not like I drop professionalism; it's just that I make one course improvement per term where previously I made three or four or five. Or the extensive paper markups that I used to do, and for which I was often thanked by more serious students, have become rare and are often replaced by a mere letter grade. It's not what I want. But the fact is the time spent responding to federal officials and accreditation committees must come from somewhere.


Ultimately, the question that I can imagine being thrown back at me is this: "Then would you just say to give up on assessment, even knowing that better assessment can create a better product?" In a nutshell, yes. Taking this back to my original business analogy, if a shortage of resources forced me to make a choice between building a better product or measuring consumer acceptance of that product, I would err on the side of building the best product I could. In a world of unlimited resources, I would do it all. In our real world of limited resources, trade-offs will be made.


Of course, I do assess. In many ways and on many dimensions. I assess writing skills in business classes (which doesn't make one very popular, I might add), I assess math skills, I force students to present, I make them work in teams. Do I know scientifically whether my assessments are creating the best students, or whether I'm doing a better job teaching than I was five years ago? No. Intuitively I believe so. But I know I am putting the most I can into my product and I am comfortable that in a world of scarce resources, I am allocating my time in an ethical and professional manner.

Monday, October 25, 2010

Steve Smith's Assessment Journey


I started teaching in a volunteer ESL program at a local church next to the University of Washington, which I was attending as an undergraduate. I spent the first week doing a lot of exercises I had found in one of the many ESL instruction books that have been written. I was working with a group of Hmong. It wasn't until the second week that I realized that many of the students were illiterate in their own language. Much of what I had been doing had been worthless. I realized there was more to this teaching and learning business than I had realized.


After graduating, I moved to Ecuador, South America. I taught English and eventually became the director of a language and cultural center. I was responsible for 30 Spanish and English instructors, many of whom had no teaching experience. I began to research teaching and learning strategies. I started to wonder if there were a systematic way to approach teaching and learning.

After 5 years we returned to the US. I started teaching computers. I also started a master’s program in adult education. Instructional design, Knowles’ principles of andragogy, Gagné’s nine events of instruction, and constructivist theories of teaching and learning opened my eyes to a whole new vision of teaching and learning. I was also introduced to assessment through Kirkpatrick’s four levels of evaluation. While now outdated, it transformed how I viewed evaluation. “Can the student do the learning task in class?” was only the beginning. Can they perform it outside the class without classroom support? And, finally, the most important and hardest to assess: did the learning solve the original problem? I realized that what I did inside the classroom needed to be assessed at least in part on what the student could do outside the classroom. This was a monumental shift in how I viewed teaching and learning. I started to believe that if it could be measured, it could be learned, and that instead of a bell curve of grading, my expectation was that everyone could succeed if I applied the appropriate instructional design principles.
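
(A purely illustrative aside for the programmers among us: those levels stack naturally. Here is a small Python sketch of that stacking. The level names are Kirkpatrick's standard published labels; the helper function and the sample evidence record are invented for this sketch.)

```python
# Illustrative sketch: Kirkpatrick's four levels as an ordered scale.
# The sample evidence record below is invented.

from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    REACTION = 1  # did learners respond well to the instruction?
    LEARNING = 2  # can the student do the learning task in class?
    BEHAVIOR = 3  # can they perform it outside class, without support?
    RESULTS = 4   # did the learning solve the original problem?

def deepest_level_shown(evidence):
    """Return the highest level with positive evidence, or None."""
    reached = [level for level, ok in evidence.items() if ok]
    return max(reached) if reached else None

# Hypothetical evidence for one learner: fine in class, unknown beyond.
best = deepest_level_shown({
    KirkpatrickLevel.REACTION: True,
    KirkpatrickLevel.LEARNING: True,
    KirkpatrickLevel.BEHAVIOR: False,  # the hardest to assess, per the post
})
print(best.name if best else "no evidence yet")  # -> LEARNING
```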

I taught off and on in various formats, including distance learning, for the next 10 years. Recently, I finished the coursework for a PhD in Community College Leadership. In this program I was introduced to the concepts of chaos theory, Freire’s transformational and social critical theories of learning, qualitative vs. quantitative research, and living systems and Wheatley’s application of them to organizations. I realized that I had become too reductionist in my teaching and learning. I needed a more holistic approach. Some things are hard to measure, and when you try to measure them, they change.

These experiences have shaped my view of assessment. I believe in the concept of assessing learning based on what the students can do “out there”. I believe that we need to measure not only what goes on in our classroom but also the larger core outcomes. This process is messy, and we may not always be able to cleanly assess some of the critical learning that happens in our classrooms: students becoming more confident, more engaged in their own learning, more open to new ideas, more excited about continuing their education, and more willing to take emotional and intellectual risks. These are the manna from heaven that we seek as teachers but may not always be able to assess. I believe we owe it to our students to keep struggling to find the right balance: assessing while recognizing that learning is not necessarily the sum total of its parts. Assessment is not an either/or proposition but rather a both/and. We need to assess in order to continue to grow and improve as an institution, while allowing room and time for those things which are difficult to assess to flourish.

Steve Smith is the director of Curriculum Support Services


Tuesday, October 19, 2010

What does PCC get from being accredited, anyway?

From Laura Massey, Director of Institutional Effectiveness

I don’t know about you, but I am not motivated by scare tactics. A sentence beginning “Without accreditation” followed by frightening statistics does nothing more than bring forth my inner twelve-year-old, who mentally shuts down while rebelling in full voice. Instead, here are a few facts.

Because PCC is an accredited college . . .


  • Almost 40% of our students are able to receive Federal Financial Aid dollars.

  • The credits earned by the 5,100 students who transfer to another college or university each year are accepted at the transfer institution.

  • Last year’s 3,400+ graduates have credentials that are valued by other institutions, employers and licensing agencies.

  • Almost $14 million in recently awarded grants will provide improved student services, new equipment, expanded curriculum and so much more.


But wait. This is supposed to be an assessment blog. Why the emphasis on accreditation?


Because to maintain our accreditation, PCC is required to “hasten its progress in demonstrating, through regular and systematic assessment, that students who complete their programs have achieved the intended learning outcomes of degrees and certificates. Further, the college must begin to demonstrate, in a regular and systematic fashion, how the assessment of student learning leads to the improvement of teaching and learning” (from the visiting team response on behalf of the Northwest Commission on Colleges and Universities, Spring 2010).


Scare tactic? Hardly - although it is certainly an important statement requiring action. Let’s first put this in context at PCC.


We care deeply that our students are learning. Period. It is our professional responsibility to understand what is working well and make improvements when needed. This is nothing new or different from what hundreds and hundreds of PCC faculty do in classrooms each day.

However, Northwest is also asking that we demonstrate (in their language, that means document) that we regularly and systematically assess whether graduates have achieved the College-defined outcomes, and, furthermore, that we show how what is learned through assessment is used to make changes where needed to improve teaching and learning.


The faculty-driven Learning Assessment Council developed an assessment model which was initially implemented in 2009-10. As support staff to that work, I could see how institutional learning is both energized and realized through faculty creativity, insight and on-going commitment to excellence - the same qualities that support student learning.


As we “hasten” our progress and fully implement the model this year, I believe we are on the path to fulfilling Northwest’s recommendation.

Tuesday, October 12, 2010

From Cynthia Killingsworth, Accounting Instructor

I enrolled in Sylvia and Shirlee’s assessments class last spring with a narrow objective of improving testing in my accounting courses. The class ended up prompting a major realignment of my teaching style, and I learned to appreciate the unavoidable bottom line of the assessment movement. We talk a lot about measuring student performance, but it is really a matter of facing our own teaching effectiveness head-on. The harsh line “It’s the teaching, stupid!” comes to mind from a 2006 article about retooling math education. Here are some of my thoughts from this experience:

Do I understand what should be assessed? I discovered that I was assessing concepts easiest to measure, such as preparing a financial statement in proper format, but may have failed to fully consider those concepts harder to measure, but equally relevant for student success, such as interpreting financial information. This led to a rapid detour away from testing as my focus and towards reconsidering what I was teaching.

Do I understand what is most relevant for course content? In this detour I had to invest some time finding actual research in my discipline regarding the concepts or skills important for accounting student success. Ironically, the PCC assessment focus last year was critical thinking and problem solving, and when I researched my discipline, I rediscovered that the Accounting Education Change Commission had completed an extensive study in the 1990s determining that critical thinking skills were being ignored in college accounting education. Accounting educators’ subsequent response was to include one mildly unstructured critical thinking problem at the end of each accounting textbook chapter, still preceded by dozens of highly structured procedural questions. Guided by this inference of relevance, it is not surprising that most accounting instructors still lean towards procedural content and that not much has changed.

I learned that it pays to reexamine our discipline to determine if our focus is supported by research. I found that this might not be true in accounting education. Another interesting irony of the Accounting Education Change Commission study is that many of the commission’s recommendations aligned perfectly with the PCC core outcomes. I now have even more confidence in the wearability of PCC’s overall vision!

Do I understand the level of my students’ academic development? I reviewed the research subsequent to the Accounting Education Change Commission’s findings and learned more about student cognitive development. After this report was released, some college professors started introducing complex financial analysis projects at all levels of accounting courses. The problem with this quick fix was that most students cannot handle complex financial analysis until their senior year or graduate level studies. A tiered approach of first introducing students to concepts involving uncertainty, followed by increasing analytical tasks, was found to be more effective.

Do students need to know about all this? Students are the most significant stakeholders in the assessment movement, so it makes sense that they should be aware of what is happening. I was teaching a non-accounting general education class (Introduction to Nonprofits) at the same time I was taking the assessments class and spent some time discussing these concepts with my students. I was pleased and relieved to find that students were very interested in PCC’s core outcomes and wanted to understand meaningful ways to measure the quality and progress of their education. This desire confirmed a key assessment concept discussed in Sylvia and Shirlee’s class. Letter grades may stay around, but more of our effort needs to be focused on an assessment system that gives feedback and advice to students throughout the term. It should be just as much of a learning tool as their textbooks.

I have to admit that the above experiences unsettled my relatively peaceful teaching bubble, but my perspective has been broadened, along with my courage to face the inevitable changes on the horizon. I remember when the Accounting Education Change Commission released its report about the poor quality of accounting education. I had just graduated from college and was facing the reality of paying my student loans. The thought occurred to me that I deserved a refund! As I face the future ramifications of “value-added” education, I hope I never forget my perspective as a primary stakeholder in education from 20 years ago.

Tuesday, October 5, 2010

Assessment: a blog post

by Andy Simon
September 2010



I haven’t been involved in the discussion of assessment for more than a year. I apologize if the issues I raise have already been heard and discussed. I confess that I am skeptical about the whole enterprise, and yet I know that many intelligent, aware, hard-working people–some of them my friends–have put in a great deal of time and effort examining proposals and strategies. I am not completely comfortable saying, in effect, to these folks, “your efforts are for naught, the project is inherently flawed.” For one thing, I may be wrong. Just on the other side of the next committee meeting a beautiful, effective, and efficient approach may be discovered or created. Yet, I was asked to write about my objections–qualms might be a better term. And so I shall.

The point of the project, I remember being told, is that taxpayers, politicians, and business leaders want to know that they are getting a good return for the money they invest in higher education. What they want to know–and what we must be able to show them–is that we are indeed enhancing students’ lives–adding value to them–by educating them.

My first observation about the project so described (and perhaps inaccurately so) is that we should be very careful about how we think about the relationship between those who sign the paychecks of teachers and those who receive the checks. John Dewey, a century ago, in defending the concept of academic freedom, offered a similar warning. He said, in effect, that although the trustees of a university are in an economic sense the employers of professors, the job of teaching is primarily to serve the public.


Nowadays, I see the idea of “the public” as a bit more problematic than Dewey did, and besides it might seem plausible (though I would say mistaken) to equate “the public” with the taxpayers. I would say that though we are employed by the taxpayers (in public education) we are responsible primarily to the generation we are educating and beyond that to future generations as well. For if we do not educate the current generation, who will there be to educate the generation that follows, and the generation after that?

This observation points to one of the problems with the very model that the concept of adding value to students’ lives depends on. It is, after all, a rather industrial model–students’ lives are the raw material we work our magic on. They pass through our educational factories and, like raw steel turned into automobiles, their value is enhanced. But processing students’ lives can at best be only part of the function of our institutions of higher education. Another function–and a vital one–is to preserve some important things: ways of thinking, intellectual skills, bodies of knowledge, bodies of literature and the keys that unlock their meaning, bodies of art and the keys that unlock their meaning, historically significant works that inform our own culture and literature, and much, much more. How much value is added to our civilization because we can understand that the piles of stones that are scattered across Europe are the remains of a once vast empire, and because we can study the ideas that enabled that empire to rise and that caused it eventually to fall? Having that kind of knowledge adds at best a pittance of value to any one student’s life, but does that exhaust its value to our civilization?


It is important to note that higher education is one of the few extant institutions that predate the industrial revolution. It would be a grave mistake to force it into an industrial model merely because that is the only model we can think of. And yet, that is precisely what assessing education in terms of processing and credentialing students does. The success of the industrial model in many spheres of our lives has made us forget that there were–and hence are–other ways of thinking about education. One of them involves seeing education as a process of preparing and inducting students into a community–a community of scholars.

Clearly, there are problems with this model, too. One of them is that a closed community can perpetuate an unfair exclusivity. I was tempted to say that a pre-industrial approach to education saw the point as inducting students into a brotherhood, which would have been historically accurate as it reflects the way the “community of scholars” model has in the past excluded people who had a legitimate claim to be admitted. But the solution to such problems is not to throw out the model altogether but to try to ensure that there is open access to the community of the educated.

I can say quite explicitly what I think we must protect our institutions of higher education from: market forces. The threat to education that has existed at least as long as the advent of capitalism has been enormously amplified by the computerization of our culture. Education has become a commodity–something to be bought and sold. That might not be so bad, but in our society, the only commodities that can be successfully bought and sold are ones that can be mass produced. In order to be mass produced, education must be standardized and homogenized. When I go to the store for, say, a pair of socks, the characteristics that make the socks I come home with unique and individual entities in the world are unimportant. What is important is that the socks I come home with are interchangeable with any of several dozen in the store I shopped in, or any of tens of thousands in the stores the sock manufacturer supplies. That’s the way it is with mass-produced commodities.

We are already well on our way to commodifying higher education. When we discuss the question of whether college credits earned on-line are equivalent to credits earned in face-to-face classrooms, we are addressing the interchangeability of our product. Of course, the answer to our question is obvious: if earning the credits in the two venues adds equivalent value to our students’ lives, then they are clearly equivalent.

But if that is the approach to education that we want, why stop there? Why have professional teachers at all, with all their quirks and personalities? Why not have Harvard or Stanford develop the most effective on-line curricula (since massifying the classroom is impractical, though not impossible) as measured by value added to students’ lives, and then hire educational technicians to administer them? I’m being a bit facetious, but I really do fear that something like what I’ve described is the future for higher education: most students will enroll in institutions (note I didn’t say “attend”) where they will be offered the best on-line curricula the school system can afford. Only the elite, that is, the wealthy, will be able to afford colleges that offer face-to-face instruction from actual professors.

That poses a serious question: is there any real value in the uniqueness of face-to-face instruction by professional instructors? Maybe the answer is no. Maybe the individuality of instructors, their quirkiness, their personalities are irrelevant, or worse, impediments to the educational process. Obviously, I don’t think so. And I don’t think so because what I remember most about my undergraduate education is not the content of the classes but the inspiration I took away from many professors–and some of the quirkiest were the most inspiring. It may be just me, but I think inspiration is best transmitted face-to-face. I don’t think it comes across all that well through a computer screen. (I’m pretty old fashioned, but I believe that the immediate experience of presence is qualitatively different from any technology-mediated experience, no matter how life-like.)

Now that I’ve struck a personal note, let me pursue it a bit longer and then bring this long post to a close. I have no doubt that my education added a great deal of value to my life. Probably the most valuable contribution was never explicitly stated but implicit in everything I studied. My professors revealed to me the vast world of ideas–not specifically philosophical ideas, but the world of thought and knowledge. At some point I caught onto the notion that, if I played my cards right, I could spend the rest of my life studying anything at all that interested me, whether it be the sex lives of the Greeks (the revelation came to me while I was barely post-adolescent) or the musical traditions of the former Portuguese colonies, or anything else.

What a valuable lesson–to be shown the vast continent of human knowledge and to be led to some of its points of entry. We have so many opportunities to enhance and enrich our students’ lives. We give students literature to read and insist on discussing it, and by so doing reveal to them the inner lives of other people, and by implication their own inner lives, too. I feel some regret that throughout my teaching career I didn’t emphasize nearly enough to my students the importance of finding and creating beauty in their lives. I don’t think the intangible enhancements we have to offer are any less real because they are intangible, nor are they any less real because they are unmeasurable. I’m probably just tilting at windmills, but I think we have a responsibility to the future to do our best to protect education from the market forces that would industrialize it, and by so doing would destroy its humaneness and its ineffable value.

Tuesday, September 28, 2010

We are all in this together

From Sylvia Gray, outgoing chair of the Faculty Learning Assessment Council


I’ve always wanted to get things in order and have them stay in order – but that just doesn’t seem to happen, no matter how hard I try. I don’t know how it works for the rest of you – but here we are – two years into this Learning Assessment project, and, shall we say, “enhanced” directives from the Northwest Commission on Colleges and Universities (NWCCU) (our accrediting agency) have come down to us. My first feeling (I admit – I’m not proud of it) was one of dismay – can’t they see what a big job it is to move an institution like PCC in a direction of this sort? And can’t they see what we’ve already accomplished? We were feeling pretty good about the progress we were making and the plan we had agreed on for future learning assessment.

Distancing myself, I do realize they are not asking for something different from what the Learning Assessment Council has been working toward – simply that we move things along more intensely and more quickly. What the heck? Let’s be efficient. Maybe we can come up with rubrics for various core outcomes that we can use on the same set of student papers, for instance.

One thing this kick from the NWCCU forces us to do is to talk with each other about what we’re doing and to share ideas among ourselves – maybe more than we have in the past. When I think of some of my favorite things about being at PCC – apart from my absolute love of the classroom dynamic – it really is conversing with my colleagues about what and how we teach. I’ve been exposed to so many ideas as a result of similar conversations, and my own teaching is a blend of many ideas from my colleagues, all mixed together in my own particular way. It’s actually not just a luxury – it’s important that we continue this kind of cross-fertilization of ideas – and it’s a side benefit of the demands for accountability.

We are all in this together.

Sylvia Gray

Instructor of History

Tuesday, September 21, 2010

Posted by Shirlee Geiger

Did you hear about the letter PCC got this past August from our accrediting agency, the Northwest Commission on Colleges and Universities? The letter was shared at many in-service activities at the outset of fall term. In part it reads:

[T]he evaluators recommend that the College hasten its progress in demonstrating, through regular and systematic assessments, that students who complete their programs have achieved the intended learning outcomes…

This letter came with a carefully worded, polite and collegial, “or else.” It has gotten a lot of attention since it was received. (If you want to see the full copy, email me at sgeiger@pcc.edu, or talk to your SAC chair or division dean.)

Some people have seen something like this coming for a while -- though not this strong or this fast -- because they’ve been tuned in to trends in the politics and economics of higher education. As for me, I had just been happily teaching my classes, joyfully oblivious to these trends and pressures…. That is, until I joined the Learning Assessment Council in 2008.

Now I watch for news of the tsunami of change headed our way. If you listened in on any of the debates around reforming access to health insurance, you already are familiar with the components of the controversy over higher education.

· Just like health care, costs associated with higher education have been skyrocketing.

· Also like health care, the negative consequences for individuals shut out of access to higher education grow ever more drastic over the course of a lifetime.

· Like health care, the current higher ed “system” is composed of a mix of publicly funded and private, for-profit institutions. Lots and lots of money changes hands….

· Some schools seem to get very good educational outcomes at lower costs than others. This makes it look ever more likely that there are some “best practices” that could, if adopted widely, make the whole system more effective and efficient. But it is difficult to get access to statistics that allow a meaningful comparison of schools with one another, just as it is hard to get records that help us pinpoint the best and the worst doctors, hospitals, or treatment types.

· Having access to medical insurance, or being shut out of it, creates obvious and glaring scenarios of social injustice. These tragic comparisons have contributed to the widespread belief that the current situation is just morally wrong – horribly and undeniably unfair. Yet it is very hard to find our way through to consensus on how to make things better. And this is true in education, too.

The Obama administration has made it clear that access to quality education, including college, for all citizens is a top priority. But the administration has also shown -- as with the controversy over health insurance -- that it is willing to shake up the status quo. The demand for evidence-based medicine is paired with a demand for evidence-based education.

And that means you.

Here at PCC, we have been lucky. We have administrators, in the form of Chris Chairsell and Preston Pulliams, who have been willing to trust faculty to take the lead in crafting a response to this new demand for accountability and evidence-based educational practice. The Faculty Learning Assessment Council was formed by Sylvia Gray (Instructor in History at Sylvania) in Fall 2008. Last year was the initial implementation of our recommended plan. This year is the full implementation. You can see what we came up with, as well as some excellent assessment projects created by your colleagues, at http://pcc.edu/assessment

The Council has set in place a process in which the members of each SAC collaboratively devise an assessment practice that will be useful and meaningful to them in figuring out how to improve their program. The interest is in assessing the program, and ultimately the institution, not individual classes, learners, or educators. The focus on program- or discipline-level assessment requires (among other things) a new level of communication and collaboration among teachers, both full- and part-time. Many people who have made a start at creating and implementing program-level assessment of core outcomes gave feedback to the Council that the collaboration was a wonderful and unexpected bonus of their work….

With this blog, we hope to continue the process of communicating across campus locations, discipline boundaries, and the isolating walls of our individual classrooms. Each week, a guest blogger will take center stage, bringing expertise or perplexities, perhaps highlighting best practices, maybe pointing out what we stand to gain, or lose, as we work together to meet these new demands for evidence… You are welcome to comment on the posts. If you would like a turn as guest blogger, just let me know, and we’ll schedule a time for you.

And who am I? I am the new, incoming chair of the Learning Assessment Council, starting my 28th year as a part-time instructor in philosophy. I first came to PCC as a student. I was 17 years old, already a cynical and bitter high school dropout. I thought education was simply the polite way to refer to the main propaganda machine for the “system.” I wanted no part of it… except to study a little bit of literature, get exposed to a dab of history, and (quick!) learn another language so I could go out and travel the world – getting far, far away from anything resembling a classroom. So I did a year at PCC. Then I did some traveling – I lived on different continents and saw what life can look like outside the Pacific Northwest. I left pieces of my youthful cynicism in cheap hotels, and in exposure to the hardships and sorrows that are normal outside the US. I gradually lost that desire to get as far away from Portland as possible. But I have never lost the desire I acquired in my travels to be part of the solution instead of part of the problem….

I returned to Portland, and have stayed rooted here. I continued at PSU, eventually went to graduate school... I have done lots of different kinds of work, in addition to teaching here at PCC. But I have continued to be drawn back to the very classrooms I was so hot to leave behind when I was young.

Teaching here for me has always been one small way I have tried to “pay it forward.” Some of those teachers I had at PCC, in my 17th year on this earth, changed the trajectory of my life. I was a sullen and angry young woman. But they saw a potential thinker, with some good ideas -- but much to learn. And they began the process of transforming me.

I am deeply, deeply grateful to them.

I am interested in providing good evidence of the good work we do here to interested parties in the world at large. And I am interested in looking out for ways we can do our good work even better.

So… What is your reaction to this letter from our accreditors? What is your experience with assessment of learning? What are your hopes and fears as we lean into the changing winds?

Please join the conversation and let your colleagues know what you are thinking. Share your concerns, ideas, expertise, and peeks around the corner at what education will be like in the coming decades….

After all, we are all in this together…. There may well be a sullen 17-year-old waiting right now in your classroom. Waiting for you to see the possibilities in her she can’t yet see herself. Waiting for you to see her into her best possible self…

Let us share how we do this magic, so that we can touch ever more lives.

Certainly, we know this: There is no shortage of need…

Tuesday, August 3, 2010

This is a test post.

This blog will be used by the faculty of Portland Community College to share thoughts and perspectives on the "accountability movement" in higher education, and (in that context) our college's work to meet the new demands by our accrediting agency for assessment of student learning.