Sunday, November 28, 2010

David Rives, President of the American Federation of Teachers-Oregon

Ensuring Student Success in Oregon: Faculty Perspective

This year, a Legislative Workgroup for Higher Education was formed to look at several proposals to reorganize the Oregon University System (OUS). The Workgroup asked two major consulting groups, the Western Interstate Commission for Higher Education and the National Center for Higher Education Management Systems, to assist with a proposal to restructure the higher education system in Oregon. That proposal has been set forth in Legislative Concept 2861, which will likely be drafted into a bill next session. The legislature will also see a proposal to restructure OUS from the State Board of Higher Education, as well as a proposal for the University of Oregon to become more independent from the state system. All of these proposals go further than simply reorganizing the governance boards that run the institutions of higher education in our state. They also seek to tie funding for education to new metrics for accountability and performance.


Currently, state funding is disbursed to each community college and university using formulas based largely on enrollment totals. Enrollment numbers can offer an indication of the accessibility of public education, but they don't capture the other goals of higher education. The federal government, the state governor and legislature, and numerous think tanks and foundations are all pressuring higher education institutions to use new metrics to determine how well we are meeting our goals. (The Northwest Commission on Colleges and Universities' desire that PCC demonstrate learning outcomes for students is just one example.) Oregon already has a policy that, by 2025, 20% of the population will have at least a high school education, 40% will have at least some college (e.g., an associate degree or a certificate), and 40% will have at least a bachelor's degree. 40/40/20 serves fine as an aspirational goal, but many educators, particularly at community colleges, would find it worrisome if colleges were to receive funding only according to how many degrees they awarded. These kinds of numbers-driven measures often have more in common with industry than with education.
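
To make the worry concrete, here is a deliberately simplified sketch in Python of how the same appropriation shifts when funding follows enrollment versus degrees awarded. This is not the actual state formula; the colleges, figures, and metric names are invented purely for illustration.

    # Hypothetical illustration only: the colleges and figures are invented,
    # and real state funding formulas involve many more factors.
    def allocate(total_funds, colleges, metric):
        """Split a fixed appropriation in proportion to a single metric."""
        total = sum(c[metric] for c in colleges)
        return {c["name"]: round(total_funds * c[metric] / total) for c in colleges}

    colleges = [
        {"name": "College A", "enrollment": 30000, "degrees": 3000},
        {"name": "College B", "enrollment": 10000, "degrees": 2000},
    ]

    print(allocate(100_000_000, colleges, "enrollment"))  # rewards access
    print(allocate(100_000_000, colleges, "degrees"))     # rewards completion

In this made-up example, College A's share falls from 75 percent to 60 percent of the appropriation when the metric switches from enrollment to degrees awarded, even though nothing about its teaching has changed. That is the kind of shift that worries educators who serve many part-time and non-degree-seeking students.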


Even though the written reports of the consultants and the Chancellor of the OUS mention the need to use a variety of metrics in assessing higher education programs and institutions, much of the testimony and discussion at the legislative level has revolved around statistics that are easy to collect, like how many degrees an institution or program awards, or how many traditional, full-time, first-year students enroll for a second year. Taking into account the number of degrees granted and the successful completion of programs of study is important. Most educators want to see as many students as possible complete their degrees; it is a measure of success and of attaining goals. A case can be made that completing the classes required for a degree contributes toward being a more educated and civically minded community member. But let's not mistake metrics based on diplomas and degrees for the main measure of student success.


That brings up another point about the debate over accountability and performance. Largely absent from the discussion of these accountability metrics, and of business terminology like "performance" and "productivity," is the purpose of higher education. Higher education develops the capacity for abstract thought and critical reasoning. If we are going to assess what leads to quality education, we will have to look at what is involved in developing critical reasoning skills and abstract thinking.


Assessing such skills cannot be done through any easy, one-size-fits-all model applicable to every institution and program of study in the state. The American Federation of Teachers website "What Should Count" (http://www.whatshouldcount.org/) presents some of the latest ideas for assessing student success and institutional accountability. One example of assessments that look at what a higher education should offer is the Essential Learning Outcomes (ELOs) from the Association of American Colleges and Universities (http://www.aacu.org/leap/students/elo.cfm). These are broad learning outcomes that promote the kind of educational experience all students should have in some form. Here are the four main areas these outcomes cover:


  1. Knowledge of human cultures and the physical and natural world;

  2. Intellectual and practical skills, including inquiry and analysis, critical and creative thinking, written and oral communication, quantitative literacy, information literacy, teamwork, and problem solving;

  3. Integrative learning, including synthesis and advanced accomplishment across general and specialized studies;

  4. An understanding of key issues concerning personal and social responsibility, including civic knowledge and engagement, intercultural knowledge and competence, and ethical reasoning.



Although these standards are relatively straightforward, implementing them is not a simple matter. They have to be built into a program's curriculum. Teaching and assessment practices need to be designed to achieve results for the institution. This means a cooperative and coordinated effort among administrators, faculty, and staff, at both the disciplinary and the cross-disciplinary level. The American Federation of Teachers-Oregon wants to ensure that any statewide measures are aimed at truly assessing student success and that they involve faculty and staff in the process. At PCC, the involvement of faculty and staff in creating a set of tools to assess our learning outcomes at the program level is an example of what every college and university will have to do if we educational professionals want to set the standards ourselves rather than have them dictated to us by outside groups. By defining the standards of quality education ourselves, we will also be able to look into how we can improve education in meaningful ways.


David Rives is the President of the American Federation of Teachers-Oregon

Tuesday, November 16, 2010

Martha Bailey: TLC coordinator's point of view

Assessment as Professional Development

According to some philosophers, all of us are ethical egoists, and we only choose to act when an action is to our benefit. While I do not hold to that as the best explanation for ethical behavior, I think the theory provides a useful position from which to discuss learning assessment with PCC faculty. We do many things as faculty because we are told they are part of the job: writing syllabi, conducting classes, grading—oh, yes—we have other motivations, too, but sometimes it just comes down to: I have to do this. Learning assessment, particularly at the program level, comes with that kind of mandate, as well as some extrinsic motivations: do this, and do it well, or we could lose accreditation, and that has more than a little impact on a community college like PCC.


But what if we take the egoist's view and ask, "What's in it for me (besides keeping my job and helping students advance)?" Where do we find the personal part of assessment of learning? I want to suggest that, at least in part, the answer is that assessment gives us a tool for professional development. But before I pursue that idea, I want to acknowledge a position raised in a comment on last week's blog. Jayabrush wrote, "These discussions have in larger part been ones about sovereignty, and worthwhile ones, I might add. But I wonder if we need to have a more frank recognition that underneath all these discussions is a base fear, teachers' fear that if they are observed and found to 'not teach well' then they might be fired instead of given an opportunity to improve."


A similar fear was raised in an earlier posting, too. Both comments note that there is a real risk if the only use of assessment of an individual faculty member's work is punitive. And it does happen, particularly for part-time faculty. Once a person is hired, outside of egregious actions, he or she will continue to be given classes, because department chairs need to fill teaching slots. In the last (I'm not sure how many) years, the level of evaluation of these instructors' teaching performance has been minimal, at least until an instructor applies for assignment rights, the key to staff development funds and other opportunities. If the instructor is then denied such rights (and that does happen), the person can no longer teach at PCC, at least in that subject area. I've seen this happen to instructors who had taught for years but never applied for the rights. So, of course, these new moves toward assessment can appear to be another punitive move.


But there is another way to view assessment, one that might even address the fears. An aside here: while the mandated assessment for accreditation is at the program level and is not intended to single out a particular instructor, it is possible that, over time, one instructor's classes may be deemed less successful than those of other instructors. If that happens, I would advocate moving the person into a plan similar to the one I am about to describe. But if assessment is considered an avenue for becoming as effective an instructor as I can be, then I no longer need to fear it, though it probably will be uncomfortable at times. And if I can control when and how the assessment happens (not waiting until it is demanded), and even recruit help from my choice of allies, then assessment becomes a tool for both student learning and faculty learning: assessment becomes a two-way street (a phrase I have borrowed from a TLC brainstorming session on assessment; I don't recall who came up with it).


What I mean is that we use assessment of various forms in the classroom to determine whether students are learning. Formative assessment, in particular, is designed to help both students and faculty see where students are succeeding and where they need more work. But it can do the same for faculty: if students aren't "getting it," they will often offer suggestions of ways to help them. Not all of that feedback will be equally useful: some is worthless, while other pieces are absolute gems of insight.


The most useful level for assessing for professional development, then, is the classroom level, and not just toward the end of a class (the traditional student evaluation). It would be nice to be able to use longer-term, post-graduation assessment as well, but at the moment that isn't practical. Rather, for the best possible interactive assessment and improvement throughout a course, there needs to be assessment planned by the instructor and assessment offered spontaneously by the students. For the latter to be offered, the class must be structured and carried out in an open and welcoming manner. For either of these to be of any benefit toward professional development, the instructor has to be willing to take the feedback as encouragement to improve and not simply as criticism. Someone who can work with student feedback, differentiating between the feedback of value and that offered with other intent, and who becomes a better instructor, will be able to approach evaluations by administrators with much greater confidence.


What continuous formative assessment in the classroom means will obviously vary with the course being assessed, since there are many types of courses offered at PCC. But the idea, and one that doesn’t need to add to the instructor’s time burden in the way Phil Seder described in his posting, is to regularly do small assessments of what is happening in class, and make course corrections along the way. Now, sometimes the needed course correction will be one that cannot be applied until the next time the course is taught—if my major assignment needs work, I’m not going to fix it for students this time around. But if I get feedback as we are going through, I can note that and include it in my course development, rather than having it come later as another task (this is something like the course development feedback loop Peter Seaman discussed).


The other piece here is that students will not only speak of course content and materials: sometimes they address aspects of course delivery, that is, issues related to the instructor directly. This offers the biggest opportunity for professional development: to work on me and my skills. And if this feedback comes continually, then when an opportunity arises to learn in an area where I am weak, I can jump on it. And, yes, it may mean the students get fewer comments on a piece of work, but if it leads to my being a more effective instructor overall, that benefit greatly outweighs the small harm.


Now, some might say that I am writing a piece such as this blog for egoistic reasons: after all, I do coordinate the TLC (Teaching Learning Center) at Cascade, and we do offer some of those sessions you might come to for improvement as an instructor. And I won't deny that is somewhat true. But part of what I have learned as a TLC coordinator is that students benefit (learn more) when faculty continue to develop their skills; students may learn from less-skilled instructors, but they truly appreciate getting to "study with" a highly effective teacher. If we can let the students assess us even as we assess them, then learning truly does become "everyone's business."

Monday, November 8, 2010

A "Scientific" Basis for Assessment? By Peter Seaman

Phil Seder made a couple of points in his blog entry that I would like
to use as jumping-off points. But first a few words about me and where
I stand regarding assessment:


I started off in higher ed as an English teacher. I taught freshman
composition and literature courses, and I frankly struggled with
questions of worth - How do I know whether my students are getting
anything worthwhile from my classes? Is teaching really worthwhile, or
is it a waste of time? Could the students get the same thing if they
just read the textbook? How many drinks at happy hour will it take me
to forget these weighty questions? (I asked that last question only on
Friday afternoons). Eventually I became really curious about the
process of education and I started taking education courses on the
side. I even went so far as to get a certificate in teaching English as
a second language (ESL teachers use some fascinating methods), and then
I dove in head-first: I went to grad school full-time to get a master's
degree in something called Instructional Systems Technology.


"Instructional Systems Technology" sounds very intimidating, but if you
sound out the words, it's not so bad. The word "systems" implies that
instructional designers (or IDs) take a "systems" view of instruction,
meaning that we look at issues like repeatability, reliability,
validity, and so on. The word "technology" implies that we look for
ways to infuse technology in instruction (technology can be a method or
set of techniques, as well as a computer program).


Everyone who studies IST, or "instructional design" (its more common
name), learns a model called the ADDIE model.


ADDIE:
A = Analyze (the learning task, the learner, and the learning context)
D = Design (the instructional intervention - can be a book, an activity, a computer application, or a combination of these, and much more)
D = Develop (the instructional intervention, also known as "production" - a painful word for most IDs)
I = Implement (the intervention, or try it out)
E = Evaluate (the results of the intervention, or find out how it worked - what long-lasting results it had upon behavior, cognitive functioning, etc.)

The process is iterative: we use the results of evaluation to improve
instruction continually (draw an arrow from "E" back up to "A" and we
can dump the results into our next "analysis" phase, and start the
process again).
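
For readers who like to see a process sketched out, here is one way to picture that loop in a few lines of Python. This is only a rough illustration; the stub functions are placeholders I have invented, not part of any official formulation of ADDIE.

    # A rough sketch of the iterative ADDIE cycle; the stubs are placeholders
    # for real instructional-design work.
    def analyze(goal, prior_findings=None):
        # A: study the learning task, the learner, and the learning context
        return {"goal": goal, "lessons_from_last_run": prior_findings}

    def design(analysis):
        # D: plan the instructional intervention
        return {"plan": "activities for " + analysis["goal"]}

    def develop(blueprint):
        # D: produce the intervention ("production")
        return {"materials": blueprint["plan"]}

    def implement(materials):
        # I: try the intervention out with learners
        return {"observations": "learner data from " + materials["materials"]}

    def evaluate(results):
        # E: find out how it worked, and feed that back into the next analysis
        return results["observations"]

    findings = None
    for _ in range(3):  # the arrow from "E" back up to "A"
        findings = evaluate(implement(develop(design(analyze("course goal", findings)))))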


You can see that ISD (instructional systems design - it goes by many
names) is really comfortable with evaluation. When I heard that the
accreditation authorities want PCC to use the results of assessment to
improve future instruction, I thought, "Of course - that's what you use
results for."


Okay, back to Phil Seder.

Phil made an excellent point in his blog entry about instructors doing
all of the steps - analyzing, designing, developing, implementing,
evaluating, which is analogous to washing the turkey, dressing the
turkey, cooking the turkey, serving the turkey, washing the dishes
afterward, and asking everyone how it tasted so you can improve your
recipe for next time. Phil has actually put his finger on one of the
great frustrations of my profession as an ID, namely the unwillingness
of instructors to *allow* anyone else to assist with any part of their
personal ADDIE process! Many instructors become so comfortable with
their own way of teaching that they view any advice or consultation as
an intrusion (I've even known instructors who refuse to let another
instructor observe any portion of their teaching process).


I'll admit: Letting someone else critique your professional work can be
very scary. And evaluation itself is unfortunately fraught (in many
organizations) with performance implications (administrators can use
evaluation results against teachers). And certainly there are control
issues (if I get to control every aspect of the instructional process,
then it will happen the way I want; to give away control to someone else
is to invite the possibility of loss of control). My larger point is
that instruction improves only through regular evaluation. To give away
control of some aspect of the process is to open up oneself to growth -
again, as long as there are protections within the organization.


Phil talked about how hard assessment is to do, and I agree. Sometimes
I wonder if it isn't better to prepare students for some kind of
external assessment - like the aviation instructors who prepare students
for FAA exams, or the real-estate instructors who prepare students for
state licensing exams. At least in these cases I can know the
assessment is external to me and free from my own bias. But it's still
scary because I as an instructor lose control of the situation the
moment the student walks out of my classroom (whether onground or
online). And of course when instructors are measured by how well their
students perform on standardized tests, instructors will "teach to the
test," which unfortunately limits learning to the confines of the test.


I guess I would close this blog entry by pointing to the necessity to
show results in almost any professional endeavor, and to wonder why
higher ed has been able to NOT show results for so long (sorry about
splitting that infinitive!). A few years ago, I overheard an instructor
at PCC say, "A student should take my course because she or he will be
better for having taken it." Okay, I accept that statement. But could
you say the same thing about a walk around the building? ("The student
will be better for having taken a walk around the building"). I'm sure
that every instructor knows, in the deepest chambers of the heart, that
students are better for having taken a course. But the fact is, it's
not enough in any profession to assert some value with no evidence of
value. And my profession of ISD absolutely depends on this evidence.
Think about it: If anyone can design instruction that works as well as
any other instruction, where is the value of the designer? I look at it
like riding in an airplane: When I first rode an airplane, I was really
frightened because I did not think there was any basis for an airplane
to stay aloft - it seemed like magic. But when I talked to pilots and
read books about aviation (kids' books are great for this kind of
explanation), I realized that aviation depends upon the application of
scientific principles: move a specially shaped object (fuselage with
wings) over the ground (and through the air) fast enough, and the object
will rise in the air - it will take off! It has to - the physical
properties of the earth demand it. So why can't we apply the same
principles in instruction: apply certain forces and depend on a certain
result?


Of course you say, "But students are not objects, uniformly shaped,
moving through the air at a certain speed." And of course, you are
correct! Students are humans and therefore arrive in the classroom with
an endless variety of behavioral, cognitive, and psychomotor attributes
that are incredibly hard to divine. But we have to start somewhere, and
we do that by applying certain interventions and measuring their
result. As long as we have organizational safeguards so that evaluation
data is not misused, we should not fear evaluation - it can only make
instruction more effective.

Tuesday, November 2, 2010

Phil Seder's Assessment Angst

I come from the world of business marketing. In that world, we plan, we produce and we measure success in order to refine programs and improve outcomes. I believe this to be a valid approach. I teach my students that it is an essential approach in the business world.


I thus find myself perplexed by my internal resistance to the idea of assessment that is being pushed down from the highest levels of government and has now come to roost at the community college level. There is no doubt in my mind that we can improve delivery, and there is no doubt in my mind that assessment can lead to refinements in course content or delivery that result in better outcomes. So why my angst?


I think it is because there is a fundamental difference between the business world and the world of classroom education (well, a bunch of differences, actually, but I'm going to deal with just one here). In business, marketers work with researchers to determine consumer or business needs. Those marketers pass requirements on to designers and engineers, who develop the products or services. The engineers pass the designs to manufacturing and sales to create and deliver the products. And finally, the marketers (often with the aid of accountants or analysts) assess the results before another round of the same is initiated.


Now let's look at who performs these tasks in the education world. Basic research and determination of customer needs? At the classroom level at least, the teacher. Product design? The teacher. Product development? The teacher. Product delivery? The teacher. Product assessment? The teacher.


This is not to say that administrators do nothing. There are monumental issues of overall program development (e.g., should we offer a nursing program?), new program implementation, facilities construction and maintenance, marketing, budgeting, negotiation, discipline and, ah yes, even employee supervision. I thank my stars that trained professionals stand ready to perform these essential duties.


But the bottom line is that the classroom teacher is responsible for every aspect of the development and delivery of the product in the classroom. In other words, teachers perform tasks that are the responsibility of numerous differently trained professionals in the business world. I know this because I've managed products in both worlds and, frankly, it is one of the things that excites me about the daily challenge of teaching.


But therein lies a conundrum. To teach better, we are being told, we need to assess more. We need to be more like the measurement-driven business world. The teacher, though, is not like the business professional. Teachers are responsible for the entire life cycle of product development and delivery in the classroom. To assess more means that they either have to 1) work more or 2) spend less time on planning, development and delivery.


Now we all know that there are those who can work more. But for many teachers I see around me, their waking hours during the academic year are filled with the activities of course development and delivery. I doubt if many would look kindly at giving up the few moments of personal time they have during the week. As to the old saw "work smarter, not harder," well, it's a trite and meaningless age-old piece of business wisdom, custom canned for delivery to the survivors of corporate layoffs as they contemplate a future of 24/7 employment. At the very least, when I have been in those situations, I had the comfort of higher compensation to balance the loss of personal and family time.


Today's reality of an assessment-driven education system, though, is that the classroom teacher will have to cut back on planning and delivery activities to respond to the assessment demands. And the more dedicated the teacher, the more they will need to cut back, since they are the ones who already spend their entire waking time engaged in better delivery. The result: we know what we're getting, but what we know we're getting is worse than what we were producing before, when we didn't know what we were getting. Read that twice, if need be; then, especially if you have been teaching for a decade or more, consider whether it does not speak precisely to the results of modern educational policy at the high school level.


I see it in my own work. Even in the past five years, I have seen a creeping slide toward more meetings, more time discussing the nuances of outcomes language (e.g., our graduates don't communicate with coworkers, they interact with coworkers), and more time discussing assessment. Where have I cut back? First, in my personal life, where I now spend little time on my once-thriving second life as a sculptor, and second, in my course preparation and delivery. It's not that I drop professionalism; it's just that I make one course improvement per term where previously I made three or four or five. Or the extensive paper markups that I used to do, and for which I was often thanked by more serious students, have become rare, often replaced by a mere letter grade. It's not what I want. But the fact is that the time spent responding to federal officials and accreditation committees must come from somewhere.


Ultimately, the question that I can imagine being thrown back at me is this: "Then would you just say to give up on assessment, even knowing that better assessment can create a better product?" In a nutshell, yes. Taking this back to my original business analogy, if a shortage of resources forced me to make a choice between building a better product and measuring consumer acceptance of that product, I would err on the side of building the best product I could. In a world of unlimited resources, I would do it all. In our real world of limited resources, trade-offs will be made.


Of course, I do assess. In many ways and on many dimensions. I assess writing skills in business classes (which doesn't make one very popular, I might add), I assess math skills, I force students to present, I make them work in teams. Do I know scientifically whether my assessments are creating the best students, or whether I'm doing a better job teaching than I was five years ago? No. Intuitively I believe so. But I know I am putting the most I can into my product and I am comfortable that in a world of scarce resources, I am allocating my time in an ethical and professional manner.