Tuesday, January 11, 2011

Assessment and Blind Dates

Jeff Pettit’s Fear

The fear of assessment feels like the fear of a blind date. It seems like some person I don’t know will sit down, take a brief glimpse of who they think I am, and then criticize me for something I do or something I say or something I don't do or something I don't say, or will just outright fail to see how great I am, despite my flaws.

Currently, for the most part, assessment is like the morning look in the mirror: working alone, fixing what can be fixed, briefly regretting the imperfections, … self-critiquing before heading out the door. I agree with Phil Seder's post from Nov 2 that this kind of self-critique is largely how most individual teachers improve, and that it is the only improvement most teachers have time for. I don't agree with him that assessment should be abandoned to the status quo of teachers-as-islands, and Peter Seaman's post from Nov 8 clearly outlines a better way.

Assessment is not brief; it is recursive. Assessment is not a glimpse, but a continually improving probe into any area to which it is applied. It is not done by someone I don't know; here at PCC it is done by you and by me.

Assessment is also several separate things. It is Instructors assessing students to maximize their learning; I’m not going to talk about that much. It is also assessing Instructors to maximize their instruction; this is the look in the mirror. I’m going to talk about that a little. Most importantly, at least at this point in time for PCC overall, it is assessing courses, programs and everything departmental in the learning process. This is what I want to talk about, because this is what we fear.

We at PCC are in a unique position and I am relieved and glad for it. We have been told to assess PCC (courses, programs, departments, etc.) effectively, so we must. This is a new atmosphere from decades past, and some are saying, “No I won’t, because I’m afraid when you look at courses, you might take a peek at me and find fault in me and fire me.” This resistance to assessment is tilting at windmills, and those windmills are turning in the winds of change. But when viewed without fear, assessment is not a mandate to change, but an opportunity to improve. And by improve, I mean improve everything from curriculum, to textbook choice, to course outcomes, to instructor and student support. While other educational institutions have improvement driven from the hierarchy above, or from private assessment companies, or, in the case of No Child Left Behind, from the government, we at PCC get to trust the job of assessment to ourselves.

If we are to assess, this is the least frightening scenario. But it is, for most people, frightening nonetheless. Of all the good people at PCC with whom I’ve discussed assessment, none of them seems afraid of assessing themselves, or assessing others, or assessing courses, or departments, or administrators. But nearly all hold some fear of others assessing them! This fear spills over into a fear of assessment overall. When we imagine others looking at us (or even near us) in order to make improvements, it is difficult not to be afraid. Accepting criticism from others (whether it reflects on our own shortcomings or the shortcomings of our department) requires trust, vulnerability and openness. So the real question is this: do you trust your colleagues? Should you trust them? If you trust them completely, there should be no fear. If you don’t trust them, the problem is either your difficulty in trusting people, or their difficulty in being trustworthy. Maybe both. If you have trouble trusting people, stop it. But if our colleagues are not trustworthy, what then? I don’t know. Perhaps faith.

Or perhaps we can remove them from the process. The strangers and the untrustworthy are not asking what we are doing in our classes. We are simply tasked with assessing and improving our overall method of improvement. We are not being asked to prove we are good at our jobs.

This leads me to the following vision for assessment: relevant empirical data is regularly and anonymously gathered from all Instructors teaching a given set of courses. Student performance is pooled and posted internally. Neither student names nor instructor names are recorded, nor is the data organized or connected by class. However (and this is where I am talking about individual Instructors improving themselves), Instructors can privately compare their students’ data to the overall data if they record it before submitting (perhaps the assessment is even used by most Instructors as part of the students’ class assessment). Instructors can use this information to improve individually. More importantly, at the department level, improvements and support are offered, based on the overall data, to improve courses overall.

But the key is data, accurate data. Data, data, data. And accurate data can only come when we are free of fear: when everyone participates, when no one tries to gloss over imperfections as if they are on a blind date, when we all work together to improve PCC. This scenario can work because currently all that is being asked for externally is a workable (and continually improving) method of gathering data, not the data itself. No one other than I would need to see how well my students are doing; no one outside the department would need to see how well the department is doing. Privacy, in fact, will increase the accuracy of the data, increase participation, and reduce (eliminate?) fear! As long as the method of generating accurate and meaningful data is continually improved, the machine is alive and evolving and limitless.
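To make the mechanics of that vision concrete, here is a minimal sketch of the pooling idea in code. It is only my illustration of the scheme described above, not anything PCC has built; the section scores and the 70-point cut score are invented for the example.

```python
# A toy sketch of the anonymous pooling idea: instructors submit only scores
# (no student or instructor names), the department posts a pooled summary,
# and each instructor keeps a private copy of their own section to compare.
# All numbers here are made up for illustration.
from statistics import mean, median

def submit_scores(pool, scores):
    """Add one section's assessment scores to the anonymous department pool.
    Nothing identifying the section, the instructor, or the students is kept."""
    pool.extend(scores)

def summarize(scores, cut_score=70):
    """The kind of summary a department could post internally."""
    return {
        "n": len(scores),
        "mean": round(mean(scores), 1),
        "median": median(scores),
        "pct_meeting_outcome": round(100 * sum(s >= cut_score for s in scores) / len(scores), 1),
    }

pool = []                                        # department-wide anonymous pool

my_section = [82, 74, 91, 65, 88]                # my private copy, recorded before submitting
submit_scores(pool, my_section)
submit_scores(pool, [70, 58, 95, 77, 84, 62])    # other sections, equally anonymous

print("Course overall:", summarize(pool))        # posted internally
print("My section:    ", summarize(my_section))  # seen only by me
```

The whole point of the design is in what is not stored: the pool never knows whose students produced which scores, while I can still measure my own section against the course overall.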

True to my discipline, I will close with a mathematical proof.

Given:

  1. Course improvements are implemented by Instructors.
  2. Each instructor can improve.
  3. Each Instructor is either able to improve or is not able to improve and either desires to improve or does not desire to improve.

Now, let T be a member of the set of all teachers who are able to improve and desire to improve (that's probably everybody). Because T needs to improve, T is imperfect. If T relies solely on T's self for improvement, T's improvement is limited due to that imperfection.

Therefore, T should not rely solely on T for improvement; otherwise improvement will be finitely limited.

Likewise, unless T's mentor is a demigod (which would contradict given 2), if T relies solely on advice from other teachers (T1, T2, … , Tn) for improvement, the limit of improvement might be larger, but it is still finite, due to individual imperfection (as shown above).

Since T cannot improve without limit based solely on other T’s, any course, degree or department created by a set of T’s is limited and will eventually reach its limited potential.

However, recursive assessment of courses is unbounded and therefore provides the possibility of infinite data, which must, by definition, be infinitely useful.
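For readers who want the bounded-versus-unbounded step spelled out in symbols, here is one informal way to write it. The notation is mine (I for improvement, B for each teacher's personal ceiling, D for a round of assessment data), not anything from the givens, and the final claim is exactly the optimistic leap the proof asks you to take.

```latex
% Informal notation (requires amsmath): I = improvement reachable,
% B_T = the finite ceiling set by T's own imperfection,
% D_m = the data gathered in assessment round m.
\[
  I_{\text{self}}(T) \;\le\; B_T \;<\; \infty,
  \qquad
  I_{\text{peers}}(T) \;\le\; \sum_{k=1}^{n} B_{T_k} \;<\; \infty,
\]
whereas recursive assessment,
\[
  I_{m+1} \;=\; I_m + \Delta(D_m), \qquad \Delta(D_m) > 0 \text{ for each new round of data},
\]
has no finite ceiling built into it: that the improvements keep adding up
is precisely what the continually improving system is meant to deliver.
```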

Therefore, in order to achieve limitless improvement: all teachers in the set of teachers that wish to improve themselves, their courses, their departments and PCC overall should set up an accurate and continually improving system, applied in such a way that no one is afraid of it. The continually improving system that everyone creates should gather data about how well students are learning, and base improvement on that data. In doing so we will not only continually and limitlessly improve, but also help others to overcome fear! Now everything is better and everyone is happier!

Woo!



Jeff Pettit is a math instructor here at PCC. He was an active (and fun!) participant in the Assessment class in Fall 2010. (The class is offered through the Learning Assessment Council -- watch for announcements of future classes.) He agreed to write a blog post and reveal to us some of his thinking process about assessment....

Tuesday, January 4, 2011

Don't miss the Anderson Conference

The Anderson Conference is coming!! This year TLC coordinators have invited some local assessment experts, who will come share both assessment theory and practice. Their main focus will be assessment at the course level. The conference title says a lot, I think:

Shifting from a Grading Culture to a Learning Culture: Assessment Theory & Practice


The sessions rotate through some of our campuses. You are welcome to come to some sessions, even if you don't make them all. For each campus session attended, PT Faculty will receive a $25 stipend.



January 27th at Rock Creek, 1:00pm-4:15pm. Keynote: Student Assessment That Boosts Learning
January 28th at Cascade, 9:00am-12:00pm. Keynote: Assessment for Learning Using Rubrics
January 28th at Sylvania, 1:00pm-4:00pm. Keynote: Descriptive Feedback

The TLC coordinators have forwarded to me the bios of the two main presenters (breakout sessions will be offered after each keynote), so I am sharing them with you here:

Judy Arter has worked at all levels of assessment from college to elementary school and from large-scale high-stakes testing to classroom/course assessment for learning. Her passion is formative assessment (assessment that supports student learning). She has developed performance assessments and rubrics for many contexts and subject areas.

Judy has facilitated over 1200 trainings in the area of student assessment since 1978. She worked at the Assessment Training Institute in Portland, OR from 1999 to 2010 and was director of Northwest Regional Educational Laboratory's (now Education Northwest) assessment unit from 1990 to 1999.

Judy is a co-author of Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance (2001), Assessment FOR Learning: An Action Guide for School Leaders (2005), Classroom Assessment for Student Learning: Doing It Right –Using It Well (2004), and Creating and Recognizing Quality Rubrics (2006).
She has a PhD in special education (University of Illinois, Champaign-Urbana, 1976) and a BS in mathematics (University of California, San Diego, 1971).

Loren Ford has a Master's Degree in Psychology and is a Licensed Professional Counselor. Over the last 33 years he has taught numerous psychology and history courses at Clackamas Community College. After years of using assessment strictly to give grades, he is experimenting with strategies that use student assessment differently: (1) to motivate students to do practice work, and (2) as an instructional methodology that helps students learn more quickly and deeply. For example, he is developing rubrics to help students understand the communication and reasoning proficiencies necessary for success in school and daily living.

The conference breakout sessions will address such topics as grading strategies, rubrics, providing effective feedback, and more.

Tuesday, December 7, 2010

A rubric for assessing our Assessment Progress

Shirlee Geiger
incoming Chair, faculty Learning Assessment Council

For most of my life, I was a reader of fiction. I remember that time with nostalgia, because my reading habits changed a while back, and then changed again. About a decade ago I found myself reaching for non-fiction instead of novels -- biographies, historical accounts for non-historians, popularized narratives of science discoveries, and then (slowly) economics and business ethics stuff. That wasn't as much fun as a gripping novel, actually, but well-written non-fiction had gotten its own grip on me. I read that stuff broadly and frequently.

In the last two years, though, I find myself growing nostalgic for my broad non-fiction period. Because these days I read about assessment, and the accountability movement, and then assessment some more. I often read just before going to bed, and this assessment stuff shows up in my dreams. I won't call them "nightmares" exactly. But I am pretty sure I had better quality dreams in my days of fiction reading....

I read something earlier this term, and I waited for a break in the blog line-up to bring it to you. (And thanks to all the guest bloggers, by the way. This blog has had lots of traffic, and I hear informally that many of the ideas presented here are part of F2F conversations across our many campuses. I love good conversations, where multiple points of view are well represented. Then we have a beautiful array of thoughts to think about and talk about, and think about some more -- and the conversations are ever more productive. Thanks to all who have put their minds and energies to thinking and talking about assessment here at PCC.)

This something I waited to share on a blog appeared in a little newsletter, Assessment Update. That newsletter has become one of my favorites in this new reading phase I am in. People involved in Higher Ed in some capacity or other -- faculty, academic professionals, administrators -- write up short little pieces on some aspect of assessment at their institution. Often they are first-person narratives, usually told as tales of challenge and success. (Nothing is so gripping to me these days as an adventure story with an academic hero or heroine!) And, since I can relate to the challenges and risks and obstacles being faced, the sense of dramatic tension builds with each new paragraph....

So the one I want to write about comes from Assessment Update,Volume 22, Number 5. It was written by Trudy W. Banta from Marquette University. Like PCC, Marquette decided to go with a faculty-owned assessment process. And the heart of their approach, like ours, is a faculty peer review of assessment plans. They have a half-day end-of-year peer review session, just like Sylvia Gray pioneered last spring. And like our experience, their faculty reported loud and clear that they liked the chance to talk across discipline lines, and collaborate together around the common institutional mission.

Then Trudy went one step further.... She created a rubric for judging how far Marquette has gone at creating a "culture of evidence." And I want to share that rubric with you.

At the Anderson Conference this year, you will have a chance to learn from some local assessment experts -- actually, from the assessment group that first got me to change my reading habits.... From them I learned that the simple fact of providing people with rubrics at the start of a class drives better summative scores at the end of a class. Rick Stiggins says that students are much more likely to hit a target when they know what and where it is, and it doesn't keep moving....

So this rubric defines our "target." (Sounds like gun practice, which is not that great of a metaphor for me, but I still like the basic points.) I believe, like Stiggins says, we'll be more likely to succeed if we know where we are aiming to go....

The rubric describes each assessment component at three levels: Beginning Assessment System, Meets Expectations for Assessment System, and Assessment System Reflects Best Practices.

Learning Outcomes
  Beginning: Program learning outcomes have been identified and are generally measurable.
  Meets Expectations: Measurable program learning outcomes. Learning outcomes are posted on the program website.
  Best Practices: Posted measurable program learning outcomes are routinely shared with students and faculty.

Assessment Measures
  Beginning: General measures are identified (e.g., student written assignment).
  Meets Expectations: Specific measures are clearly identified (student global case study in the capstone course). Measures relate to the program learning outcomes. Measures can provide useful information about student learning.
  Best Practices: Multiple measures are used to assess a student learning outcome, with emphasis on specific direct measures. Rubrics or guides are used for the measures [and they are routinely normed]. Measures are created to assess the impact on student performance of prior actions to improve student learning.

Assessment Results
  Beginning: Data collected and aggregated for at least one learning outcome.
  Meets Expectations: A majority of learning outcomes assessed annually. Data collected and aggregated are linked to specific learning outcome(s). Data are aggregated in a meaningful way that the average reader can understand.
  Best Practices: If not all learning outcomes are assessed annually, a rotation schedule is established to assess all learning outcomes within a reasonable framework. Data are aggregated and analyzed in a systematic manner. Data are collected and analyzed to evaluate prior actions to improve student learning.

Faculty Analysis and Conclusions
  Beginning: All program faculty receive annual assessment results. Faculty input about the results is sought.
  Meets Expectations: All program faculty receive annual assessment results and designate program or department faculty to meet to discuss assessment results in depth. Specific conclusions about student learning are made based on the available assessment results.
  Best Practices: All of the previous level, and faculty synthesize the results from various assessment measures to form specific conclusions about each performance indicator for that learning outcome.

Actions to Improve Learning and Assessment
  Beginning: At least one action to improve learning or improve assessment is identified. The proposed action(s) relates to faculty conclusions about areas for improvement.
  Meets Expectations: Description of the action to improve learning or assessment is specific and relates directly to faculty conclusions about areas for improvement. Description of the action includes a timetable for implementation and identifies who is responsible for the action. Actions are realistic, with a good probability of improving learning or assessment.
  Best Practices: All of the previous level, and assessment methods and a timetable for assessing and evaluating the effectiveness of the action are included in the planned action.

Where is your SAC in this process? Where is PCC?

I have a good idea of the answers to both questions, because I have been reading through both the SAC assessment plans AND the peer reviews from our session in November. Different SACs are in different places, but we are all of us on this chart somewhere.

This stuff is good reading, and it makes me proud and happy to be part of what PCC is doing to serve our students ever better, and through them help shape a better future for our community and for our world.

The position of chair of the Learning Assessment Council rotates, and I know that in at most another 18 months, I will be handing the leadership off to someone else. (Interested?) Maybe at that point, my reading habits will change again. If you've read any good fiction lately, maybe you could let me know.... In the meantime, there is a high stack of assessment stuff waiting for me. (I'd be happy to share!) From my reading, I know that the ground rules of higher ed are changing across the globe. Like all big changes, some of it is for the worse.... but some is for the better. A faculty-led search for evidence in order to identify best teaching/learning practices has a good chance of being in the for-the-better category. Thanks to all who are working to make it so....

Sunday, November 28, 2010

David Rives, President of the American Federation of Teachers-Oregon

Ensuring Student Success in Oregon: Faculty Perspective

This year, a Legislative Workgroup for Higher Education was formed to look at several proposals coming forth to reorganize the Oregon University System (OUS). The Workgroup asked two major consulting groups, the Western Interstate Commission for Higher Education and the National Center for Higher Education Management Systems, to assist with a proposal to restructure the higher education system in Oregon. That proposal has been set forth in Legislative Concept 2861, which will likely be drafted into a bill this next session. The legislature is also going to see a proposal for a restructuring of OUS from the State Board of Higher Education, as well as a proposal for the University of Oregon to become more independent from the state system. All of these proposals go further than just reorganizing the governance boards that run the institutions of higher education in our state. They also seek to connect funding for education to new metrics for accountability and performance.


Currently, state funding is disbursed to each community college and university using formulas that are based largely on enrollment totals. Enrollment numbers can offer an indication of the accessibility of public education, but they don’t encapsulate other goals of higher education. The federal government, the governor and state legislature, and numerous think tanks and foundations are all pressuring higher education institutions to use new metrics in determining how we are meeting our goals. (The Northwest Commission on Colleges and Universities’ desire that PCC demonstrate learning outcomes for students is just one example of this.) Oregon already has a policy that, by 2025, 40% of the population will have at least a bachelor's degree, 40% will have at least some college (e.g., an associate degree or a certificate), and 20% will have at least a high school education. 40/40/20 serves fine as an aspirational goal, but many educators, particularly at community colleges, would find it worrisome if colleges were to receive funding only according to how many degrees they awarded. Of course, these kinds of numbers-driven measures often have more in common with industry than education.


Even though the written reports of the consultants and the Chancellor of the OUS mention the need to utilize a variety of metrics in assessing higher education programs and institutions, a lot of the testimony and discussion at the legislative level has revolved around statistics that are easy to collect, like how many degrees an institution or program awards, or how many traditional, full-time, first-year students enroll for a second year. Taking into account the number of degrees granted and the successful completion of programs of study is important. Most educators want to see as many students as possible complete their degrees—it is a measure of success and a measure of attaining goals. A case can be made that completing the classes required for a degree can contribute toward being a more educated and civically-minded community member. But let’s not mistake metrics based on diplomas and degrees for the main measure of student success.


That brings up another point about the whole debate over accountability and performance. Largely absent from the discussion surrounding these accountability metrics, and from business terminology like “performance” and “productivity,” is the purpose of higher education. Higher education develops the capacity for abstract thought and critical reasoning. If we are going to assess what leads to quality education, we’re going to have to look at what’s involved in developing critical reasoning skills and abstract thinking.


Assessing such skills cannot be done through any easy, one-size-fits-all model, applicable to every institution and program of study in the state. The American Federation of Teachers website “What Should Count,” http://www.whatshouldcount.org/, presents some of the latest ideas for assessing student success and institutional accountability. One example of assessments that look at what a higher education should offer would be the Essential Learning Outcomes (ELOs) from the Association of American Colleges and Universities (http://www.aacu.org/leap/students/elo.cfm). These are broad learning outcomes that promote the kind of educational experience all students should have in some form. Here are the four main areas these outcomes cover:


  1. Knowledge of human cultures and the physical and natural world;

  2. Intellectual and practical skills, including inquiry and analysis, critical and creative thinking, written and oral communication, quantitative literacy, information literacy, and teamwork and problem solving;

  3. Integrative learning, including synthesis and advanced accomplishment across general and specialized studies; and

  4. An understanding of key issues concerning personal and social responsibility, including civic knowledge and engagement, intercultural knowledge and competence, and ethical reasoning.



Although these standards are relatively straightforward, it is not a simple matter to implement them. They have to be instituted into a program’s curricula. Teaching and assessment practices need to be designed to achieve results for the institution. This means a cooperative and coordinated effort among administrators, faculty and staff, both at the disciplinary level and the cross-disciplinary level. The American Federation of Teachers-Oregon wants to ensure that any statewide measures are aimed at truly assessing student success and that they involve faculty and staff in the process. At PCC, the involvement of faculty and staff in creating a set of tools to assess our learning outcomes at the program level is an example of what every college and university will have to do if we educational professionals want to set the standards and not allow them to be dictated to us by outside groups. By defining the standards of quality education ourselves, we will also be able to look into how we can improve it in meaningful ways.


David Rives is the President of the American Federation of Teachers-Oregon

Tuesday, November 16, 2010

Martha Bailey: TLC coordinator's point of view

Assessment as Professional Development

According to some philosophers, all of us are ethical egoists, and we only choose to act when an action is to our benefit. While I do not hold to that as the best explanation for ethical behavior, I think the theory provides a useful position from which to discuss learning assessment with PCC faculty. We do many things as faculty because we are told they are part of the job: writing syllabi, conducting classes, grading—oh, yes—we have other motivations, too, but sometimes it just comes down to: I have to do this. Learning assessment, particularly at the program level, comes with that kind of mandate, as well as some extrinsic motivations: do this, and do it well, or we could lose accreditation, and that has more than a little impact on a community college like PCC.


But, what if we take the egoist’s view and ask, “What’s in it for me (besides keeping my job, and helping students advance)?” Where do we find the personal part of assessment of learning? I want to suggest that, at least in part, the answer is that assessment gives us a tool for professional development. But before I pursue that idea, I want to acknowledge a position raised in a comment on last week’s blog. Jayabrush wrote, “These discussions have in larger part been ones about sovereignty, and worthwhile ones, I might add. But I wonder if we need to have a more frank recognition that underneath all these discussions is a base fear, teachers' fear that if they are observed and found to 'not teach well' then they might be fired instead of given an opportunity to improve.”


A similar fear was raised in an earlier posting, too. Both comments note that there is a real risk if the only use of assessment of the individual faculty member’s work is punitive. And it does happen, particularly for part-time faculty. Once a person is hired, outside of egregious actions, he or she will continue to be given classes, because department chairs need to fill teaching slots. In the last (I’m not sure how many) years, the level of evaluation of these instructors’ teaching performance has been minimal—until an instructor applies for assignment rights, that key to staff development funds and other opportunities. Once this is done, if the instructor is denied such rights (and that does happen), the person can no longer teach at PCC, at least in that subject area. I’ve seen this happen to instructors who had taught for years, but never applied for the rights. So, of course, these new moves toward assessment can appear to be another punitive move.


But there is another way to view assessment, and one that might even address the fears. An aside here: while the mandated assessment for accreditation is at the program level, and is not intended to single out a particular instructor, it is possible that, over time, one instructor’s classes may be deemed to be less successful than those of other instructors. If that happens, I would advocate moving the person into a plan similar to the one I am about to describe. But if assessment is considered an avenue for becoming as effective an instructor as I can be, then I no longer need to fear it, though it probably will be uncomfortable at times. And if I can control when and how the assessment happens (not waiting until it is demanded), and even recruit help from my choice of allies, then assessment becomes a tool for both student learning and for faculty learning: assessment becomes a two-way street (that last phrase I have borrowed from a TLC brainstorming session on assessment, and I don’t recall who came up with it).


What I mean is that we use assessment of various forms in the classroom to determine whether students are learning. And formative assessment, in particular, is designed to help both students and faculty see where students are succeeding, and where they need more work. But it can do the same for faculty: if students aren’t “getting it,” they often will offer suggestions of ways to help them. Not all of that feedback will be equally useful: some is worthless, while other pieces are absolute gems of insight.


The most useful level for assessing for professional development, then, is the classroom level, and not just toward the end of a class (the traditional student evaluation). It would be nice to be able to use longer-term, post-graduation assessment as well, but at the moment that isn’t practical. Rather, for the best possible interactive assessment and improvement throughout a course, there needs to be assessment planned by the instructor, and assessment offered by the students spontaneously. The class must be structured and carried out in an open and welcoming manner for the latter to be offered. For either of these to be of any benefit toward professional development, the instructor does have to be willing to take the feedback as encouragement to improve and not simply as criticism. Someone who can work with student feedback, differentiating between the feedback of value and that offered with other intent, and who becomes a better instructor, will be able to approach evaluations by administrators with much greater confidence.


What continuous formative assessment in the classroom means will obviously vary with the course being assessed, since there are many types of courses offered at PCC. But the idea, and one that doesn’t need to add to the instructor’s time burden in the way Phil Seder described in his posting, is to regularly do small assessments of what is happening in class, and make course corrections along the way. Now, sometimes the needed course correction will be one that cannot be applied until the next time the course is taught—if my major assignment needs work, I’m not going to fix it for students this time around. But if I get feedback as we are going through, I can note that and include it in my course development, rather than having it come later as another task (this is something like the course development feedback loop Peter Seaman discussed).


The other piece here is that students will not only speak of course content and materials: sometimes they address aspects of course delivery, that is, issues related to the instructor directly. This offers the biggest opportunity for professional development—to work on me and my skills. And if this feedback is coming continually, then when I see an opportunity coming up to learn in a given area where I am weak, I can jump on it. And, yes, it may mean the students get fewer comments on a piece of work, but if it leads to me being a more effective instructor overall, that benefit greatly outweighs the small harm.


Now, some might say that I am writing a piece such as this blog post for egoistic reasons: after all, I do coordinate the TLC (Teaching Learning Center) at Cascade, and we do offer some of those sessions you might come to for improvement as an instructor. And I won’t deny that is somewhat true. But part of what I have learned as a TLC coordinator is that students benefit (learn more) when faculty continue to develop their skills; students may learn from less-skilled instructors, but they truly appreciate getting to “study with” a highly effective teacher. If we can let the students assess us even as we assess them, then learning truly does become “everyone’s business.”

Monday, November 8, 2010

A "Scientific" Basis for Assessment? By Peter Seaman

Phil Seder made a couple of points in his blog entry that I would like
to use as jumping-off points. But first a few words about me and where
I stand regarding assessment:


I started off in higher ed as an English teacher. I taught freshman
composition and literature courses, and I frankly struggled with
questions of worth - How do I know whether my students are getting
anything worthwhile from my classes? Is teaching really worthwhile, or
is it a waste of time? Could the students get the same thing if they
just read the textbook? How many drinks at happy hour will it take me
to forget these weighty questions? (I asked that last question only on
Friday afternoons). Eventually I became really curious about the
process of education and I started taking education courses on the
side. I even went so far as to get a certificate in teaching English as
a second language (ESL teachers use some fascinating methods), and then
I dove in head-first: I went to grad school full-time to get a master's
degree in something called Instructional Systems Technology.


"Instructional Systems Technology" sounds very intimidating, but if you
sound out the words, it's not so bad. The word "systems" implies that
instructional designers (or IDs) take a "systems" view of instruction,
meaning that we look at issues like repeatability, reliability,
validity, and so on. The word "technology" implies that we look for
ways to infuse technology in instruction (technology can be a method or
set of techniques, as well as a computer program).


Everyone who studies IST, or "instructional design" (its more common
name) learns a model called the ADDIE model.


ADDIE:
A = Analyze (the learning task, the learner, and the learning context)
D = Design (the instructional intervention - can be a book, an activity,
a computer application, or a combination of these, and much more)
D = Develop (the instructional intervention, also known as "production"
- a painful word for most IDs)
I = Implement (the intervention, or try it out)
E = Evaluate (the results of the intervention, or find out how it worked
- what long-lasting results it had upon behavior, cognitive functioning,
etc).

The process is iterative: we use the results of evaluation to improve
instruction continually (draw an arrow from "E" back up to "A" and we
can dump the results into our next "analysis" phase, and start the
process again).
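
To show the shape of that arrow in code, here is a toy sketch of the
loop (the function names and placeholder findings are just invented
for illustration; this is my own sketch, not an instructional-design
tool).

```python
# A toy ADDIE loop: each cycle's evaluation results become the input to the
# next cycle's analysis -- the "arrow from E back up to A."
def analyze(prior_findings):
    return {"needs": ["..."], "informed_by": prior_findings}

def design(analysis):
    return {"blueprint": analysis}

def develop(design_spec):
    return {"materials": design_spec}

def implement(materials):
    return {"delivered": materials}

def evaluate(delivery):
    return {"what_worked": "...", "what_to_fix": "..."}

findings = None                  # nothing to feed back on the first pass
for cycle in range(3):           # in practice, the loop never really ends
    analysis = analyze(findings)
    materials = develop(design(analysis))
    findings = evaluate(implement(materials))
    print(f"Cycle {cycle + 1}: next analysis starts from {findings['what_to_fix']!r}")
```

The only point of the sketch is that evaluate's output is fed straight
back into analyze on the next pass, which is what makes the process
iterative rather than one-and-done.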


You can see that ISD (instructional systems design - it goes by many
names) is really comfortable with evaluation. When I heard that the
accreditation authorities want PCC to use the results of assessment to
improve future instruction, I thought, "Of course - that's what you use
results for."


Okay, back to Phil Seder.

Phil made an excellent point in his blog entry about instructors doing
all of the steps - analyzing, designing, developing, implementing,
evaluating, which is analogous to washing the turkey, dressing the
turkey, cooking the turkey, serving the turkey, washing the dishes
afterward, and asking everyone how it tasted so you can improve your
recipe for next time. Phil has actually put his finger on one of the
great frustrations of my profession as an ID, namely the unwillingness
of instructors to *allow* anyone else to assist with any part of their
personal ADDIE process! Many instructors become so comfortable with
their own way of teaching that they view any advice or consultation as
an intrusion (I've even known instructors who refuse to let another
instructor observe any portion of their teaching process).


I'll admit: Letting someone else critique your professional work can be
very scary. And evaluation itself is unfortunately fraught (in many
organizations) with performance implications (administrators can use
evaluation results against teachers). And certainly there are control
issues (if I get to control every aspect of the instructional process,
then it will happen the way I want; to give away control to someone else
is to invite the possibility of loss of control). My larger point is
that instruction improves only through regular evaluation. To give away
control of some aspect of the process is to open up oneself to growth -
again, as long as there are protections within the organization.


Phil talked about how hard assessment is to do, and I agree. Sometimes
I wonder if it isn't better to prepare students for some kind of
external assessment - like the aviation instructors who prepare students
for FAA exams, or the real-estate instructors who prepare students for
state licensing exams. At least in these cases I can know the
assessment is external to me and free from my own bias. But it's still
scary because I as an instructor lose control of the situation the
moment the student walks out of my classroom (whether onground or
online). And of course when instructors are measured by how well their
students perform on standardized tests, instructors will "teach to the
test," which unfortunately limits learning to the confines of the test.


I guess I would close this blog entry by pointing to the necessity to
show results in almost any professional endeavor, and to wonder why
higher ed has been able to NOT show results for so long (sorry about
splitting that infinitive!). A few years ago, I overheard an instructor
at PCC say, "A student should take my course because she or he will be
better for having taken it." Okay, I accept that statement. But could
you say the same thing about a walk around the building? ("The student
will be better for having taken a walk around the building"). I'm sure
that every instructor knows, in the deepest chambers of the heart, that
students are better for having taken a course. But the fact is, it's
not enough in any profession to assert some value with no evidence of
value. And my profession of ISD absolutely depends on this evidence.
Think about it: If anyone can design instruction that works as well as
any other instruction, where is the value of the designer? I look at it
like riding in an airplane: When I first rode an airplane, I was really
frightened because I did not think there was any basis for an airplane
to stay aloft - it seemed like magic. But when I talked to pilots and
read books about aviation (kids' books are great for this kind of
explanation), I realized that aviation depends upon the application of
scientific principles: move a specially shaped object (fuselage with
wings) over the ground (and through the air) fast enough, and the object
will rise in the air - it will take off! It has to - the physical
properties of the earth demand it. So why can't we apply the same
principles in instruction: apply certain forces and depend on a certain
result?


Of course you say, "But students are not objects, uniformly shaped,
moving through the air at a certain speed." And of course, you are
correct! Students are humans and therefore arrive in the classroom with
an endless variety of behavioral, cognitive, and psychomotor attributes
that are incredibly hard to divine. But we have to start somewhere, and
we do that by applying certain interventions and measuring their
result. As long as we have organizational safeguards so that evaluation
data is not misused, we should not fear evaluation - it can only make
instruction more effective.

Tuesday, November 2, 2010

Phil Seder's Assessment Angst

I come from the world of business marketing. In that world, we plan, we produce and we measure success in order to refine programs and improve outcomes. I believe this to be a valid approach. I teach my students that it is an essential approach in the business world.


I thus find myself perplexed by my internal resistance to the idea of assessment that is being pushed down from the highest levels of government and has now come to roost at the community college level. There is no doubt in my mind that we can improve delivery, and there is no doubt in my mind that assessment can lead to refinements in course content or delivery that result in better outcomes. So why my angst?


I think it is because there is a fundamental difference between the business world and the world of classroom education (well, a bunch actually, but I’m going to deal with just one here). In business, marketers work with researchers to determine consumer or business needs. Those marketers pass requirements on to designers and engineers who develop the products or services. The engineers pass the designs to manufacturing and sales to create and deliver the products. And finally, the marketers (often with the aid of accountants or analysts) assess the results before another round of the same is initiated.


Now let's look at who performs these tasks in the education world. Basic research and determination of customer needs? At the classroom level at least, the teacher. Product design? The teacher. Product development? The teacher. Product delivery? The teacher. Product assessment? The teacher.


This is not to say that administrators do nothing. There are monumental issues of overall program development (e.g., should we offer a nursing program?), new program implementation, facilities construction and maintenance, marketing, budgeting, negotiation, discipline and, ah yes, even employee supervision. I thank my stars that trained professionals stand ready to perform these essential duties.


But the bottom line is that the classroom teacher is responsible for every aspect of the development and delivery of the product in the classroom. In other words, teachers perform tasks that are the responsibility of numerous differently trained professionals in the business world. I know this because I've managed products in both worlds and, frankly, it is one of the things that excites me about the daily challenge of teaching.


But therein lies a conundrum. To teach better, we are being told, we need to assess more. We need to be more like the measurement driven business world. The teacher though, is not like the business professional. They are responsible for the entire life cycle of product development and delivery in the classroom. To assess more means that they either have to 1) work more or 2) spend less time on planning, development and delivery.


Now we all know that there are those who can work more. But for many teachers I see around me, their waking hours during the academic year are filled with the activities of course development and delivery. I doubt if many would look kindly at giving up the few moments of personal time they have during the week. As to the old saw "work smarter, not harder," well, it's a trite and meaningless age-old piece of business wisdom, custom canned for delivery to the survivors of corporate layoffs as they contemplate a future of 24/7 employment. At the very least, when I have been in those situations, I had the comfort of higher compensation to balance the loss of personal and family time.


Today's reality of an assessment-driven education system though, is that the classroom teacher will have to cut back on planning and delivery activities to respond to the assessment demands. And the more dedicated the teacher, the more they will need to cut back, since they are the ones who already spend their entire waking time engaged in better delivery. The result: we know what we're getting, but what we know we're getting is worse than what we were producing before when we didn't know what we were getting. Read that twice, if need be, then, especially if you have been teaching for a decade or more, contemplate whether this does not seem to precisely speak to the ultimate results of modern educational policy at the high school level.


I see it in my own work. Even in the past five years, I have seen a creeping slide towards more meetings, more time discussing the nuances of outcomes language (e.g., our graduates don't communicate with coworkers, they interact with coworkers), and more time discussing assessment. Where I have cut back is, first, in my personal life, where I now spend limited time on my once-thriving second life as a sculptor, and second, in my course preparation and delivery. It's not like I drop professionalism; it's just that I make one course improvement per term where previously I made three or four or five. Or the extensive paper markups that I used to do, and for which I was often thanked by more serious students, have become rare, often replaced by a mere letter grade. It's not what I want. But the fact is the time spent responding to federal officials and accreditation committees must come from somewhere.


Ultimately, the question that I can imagine being thrown back at me is this: "Then would you just say to give up on assessment, even knowing that better assessment can create a better product?" In a nutshell, yes. Taking this back to my original business analogy, if a shortage of resources forced me to make a choice between building a better product or measuring consumer acceptance of that product, I would err on the side of building the best product I could. In a world of unlimited resources, I would do it all. In our real world of limited resources, trade-offs will be made.


Of course, I do assess. In many ways and on many dimensions. I assess writing skills in business classes (which doesn't make one very popular, I might add), I assess math skills, I force students to present, I make them work in teams. Do I know scientifically whether my assessments are creating the best students, or whether I'm doing a better job teaching than I was five years ago? No. Intuitively I believe so. But I know I am putting the most I can into my product and I am comfortable that in a world of scarce resources, I am allocating my time in an ethical and professional manner.