Monday, November 8, 2010

A "Scientific" Basis for Assessment? By Peter Seaman

Phil Seder made a couple of points in his blog entry that I would like
to use as jumping-off points. But first a few words about me and where
I stand regarding assessment:


I started off in higher ed as an English teacher. I taught freshman
composition and literature courses, and I frankly struggled with
questions of worth - How do I know whether my students are getting
anything worthwhile from my classes? Is teaching really worthwhile, or
is it a waste of time? Could the students get the same thing if they
just read the textbook? How many drinks at happy hour will it take me
to forget these weighty questions? (I asked that last question only on
Friday afternoons). Eventually I became really curious about the
process of education and I started taking education courses on the
side. I even went so far as to get a certificate in teaching English as
a second language (ESL teachers use some fascinating methods), and then
I dove in head-first: I went to grad school full-time to get a master's
degree in something called Instructional Systems Technology.


"Instructional Systems Technology" sounds very intimidating, but if you
sound out the words, it's not so bad. The word "systems" implies that
instructional designers (or IDs) take a "systems" view of instruction,
meaning that we look at issues like repeatability, reliability,
validity, and so on. The word "technology" implies that we look for
ways to infuse technology in instruction (technology can be a method or
set of techniques, as well as a computer program).


Everyone who studies IST, or "instructional design" (its more common
name), learns a model called the ADDIE model.


ADDIE:
A = Analyze (the learning task, the learner, and the learning context)
D = Design (the instructional intervention - can be a book, an activity,
a computer application, or a combination of these, and much more)
D = Develop (the instructional intervention, also known as "production"
- a painful word for most IDs)
I = Implement (the intervention, or try it out)
E = Evaluate (the results of the intervention, or find out how it worked
- what long-lasting results it had upon behavior, cognitive functioning,
etc.).

The process is iterative: we use the results of evaluation to improve
instruction continually (draw an arrow from "E" back up to "A" and we
can dump the results into our next "analysis" phase, and start the
process again).
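
If it helps to picture the loop, here is a minimal sketch in Python
(purely illustrative - the phase functions and their return values are
invented for this example, not part of any real instructional-design
toolkit) showing how the findings from Evaluate feed the next Analyze:

def analyze(context, prior_findings=None):
    # Study the learning task, the learners, and the learning context,
    # folding in whatever the last evaluation taught us.
    return {"needs": ["needs for " + context], "informed_by": prior_findings}

def design(analysis):
    # Plan the intervention: a book, an activity, an application, or a mix.
    return {"blueprint": analysis["needs"]}

def develop(plan):
    # Produce the materials described by the design ("production").
    return {"materials": plan["blueprint"]}

def implement(materials):
    # Try the intervention out with learners and record what happens.
    return {"observations": materials["materials"]}

def evaluate(results):
    # Find out how it worked: effects on behavior, cognition, and so on.
    return {"findings": results["observations"]}

# The "arrow from E back up to A": each cycle's findings seed the next analysis.
findings = None
for cycle in range(3):
    analysis = analyze("freshman composition", prior_findings=findings)
    plan = design(analysis)
    materials = develop(plan)
    results = implement(materials)
    findings = evaluate(results)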


You can see that ISD (instructional systems design - it goes by many
names) is really comfortable with evaluation. When I heard that the
accreditation authorities want PCC to use the results of assessment to
improve future instruction, I thought, "Of course - that's what you use
results for."


Okay, back to Phil Seder.

Phil made an excellent point in his blog entry about instructors doing
all of the steps - analyzing, designing, developing, implementing, and
evaluating - which is analogous to washing the turkey, dressing the
turkey, cooking the turkey, serving the turkey, washing the dishes
afterward, and asking everyone how it tasted so you can improve your
recipe for next time. Phil has actually put his finger on one of the
great frustrations of my profession as an ID, namely the unwillingness
of instructors to *allow* anyone else to assist with any part of their
personal ADDIE process! Many instructors become so comfortable with
their own way of teaching that they view any advice or consultation as
an intrusion (I've even known instructors who refuse to let another
instructor observe any portion of their teaching process).


I'll admit: Letting someone else critique your professional work can be
very scary. And evaluation itself is unfortunately fraught (in many
organizations) with performance implications (administrators can use
evaluation results against teachers). And certainly there are control
issues (if I get to control every aspect of the instructional process,
then it will happen the way I want; to give away control to someone else
is to invite the possibility of loss of control). My larger point is
that instruction improves only through regular evaluation. To give away
control of some aspect of the process is to open oneself up to growth -
again, as long as there are protections within the organization.


Phil talked about how hard assessment is to do, and I agree. Sometimes
I wonder if it isn't better to prepare students for some kind of
external assessment - like the aviation instructors who prepare students
for FAA exams, or the real-estate instructors who prepare students for
state licensing exams. At least in these cases I can know the
assessment is external to me and free from my own bias. But it's still
scary because I as an instructor lose control of the situation the
moment the student walks out of my classroom (whether onground or
online). And of course when instructors are measured by how well their
students perform on standardized tests, instructors will "teach to the
test," which unfortunately limits learning to the confines of the test.


I guess I would close this blog entry by pointing to the necessity to
show results in almost any professional endeavor, and to wonder why
higher ed has been able to NOT show results for so long (sorry about
splitting that infinitive!). A few years ago, I overheard an instructor
at PCC say, "A student should take my course because she or he will be
better for having taken it." Okay, I accept that statement. But could
you say the same thing about a walk around the building? ("The student
will be better for having taken a walk around the building"). I'm sure
that every instructor knows, in the deepest chambers of the heart, that
students are better for having taken a course. But the fact is, it's
not enough in any profession to assert some value with no evidence of
value. And my profession of ISD absolutely depends on this evidence.
Think about it: If anyone can design instruction that works as well as
any other instruction, where is the value of the designer? I look at it
like riding in an airplane: When I first flew in an airplane, I was really
frightened because I did not think there was any basis for an airplane
to stay aloft - it seemed like magic. But when I talked to pilots and
read books about aviation (kids' books are great for this kind of
explanation), I realized that aviation depends upon the application of
scientific principles: move a specially shaped object (fuselage with
wings) over the ground (and through the air) fast enough, and the object
will rise into the air - it will take off! It has to - the laws of
physics demand it. So why can't we apply the same
principles in instruction: apply certain forces and depend on a certain
result?


Of course you say, "But students are not objects, uniformly shaped,
moving through the air at a certain speed." And of course, you are
correct! Students are humans and therefore arrive in the classroom with
an endless variety of behavioral, cognitive, and psychomotor attributes
that are incredibly hard to divine. But we have to start somewhere, and
we do that by applying certain interventions and measuring their
results. As long as we have organizational safeguards so that evaluation
data is not misused, we should not fear evaluation - it can only make
instruction more effective.
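
To make "apply an intervention and measure the result" concrete, here
is a toy sketch in Python; the scores below are invented for the
example, not real assessment data:

# Hypothetical scores for one class, before and after an intervention.
pre_scores = [62, 70, 55, 68, 74, 59]
post_scores = [71, 78, 60, 80, 83, 66]

def mean(scores):
    return sum(scores) / len(scores)

gain = mean(post_scores) - mean(pre_scores)
print(f"Average gain: {gain:.1f} points")

# A positive gain is evidence, not proof, that the intervention helped;
# those findings would feed the next round of analysis and design.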

4 comments:

  1. Teaching is at least as much an art as it is a science. An individual teacher has a personal vision of his or her subject. Excellent teaching is not just a matter of presenting the “correct” content using the “correct” teaching methodology. Neither of these will inspire students to pursue excellence. Students need the inspiration of their instructor’s personal vision.
    Teachers want to perform the entire process of course design and delivery themselves for the same reason artists want to perform the design and delivery of their work themselves. Not from an insecure need for control, but because this is the only way to express their personal vision.

    Phil Thurber
    Sylvania Math Dept.

  2. Phil, I was with you until I reached the very end of your comments! - when you wrote that an instructor performing "the design and delivery of their work themselves" is "the only way to express their personal vision." I agree that an instructor should have - has to have - a personal vision for teaching, but I disagree that the design and delivery necessarily has to be idiosyncratic. I hesitate to over-use this example, but would you want the pilot on your next commercial flight to be idiosyncratic in her design and delivery of the flight? Of course not: you'd want her to fly the plane in the optimal way - which is a standardized method.

    Okay, if you think a commercial airline pilot is a poor analog for a teacher, let's look at some other professional fields. I don't know much about the design and building of houses, but if I had to build one, I'd certainly have my personal vision of the house I wanted to build, and I'd certainly apply my own "art" to the work. But I have to believe I'd probably build a pretty crappy house because, relying only on my own "art," I'd have no access to the myriad improvements in house-building that have occurred over the centuries - thanks to math, physics, materials science, environmental science, and a host of other scientific fields.

    I'd argue that the same thing is happening to instruction: people are realizing it's not just an art; it can be greatly improved through the application of the scientific method. And just because we apply science to instruction doesn't mean we have to throw out "art" or one's personal vision - just as we don't have to build "cookie-cutter" houses (you can build a scientifically sound house that is full of artistic flourishes and that will still stand up during an earthquake). There will always be room for teachers to apply their art in the classroom or wherever they teach.

    Just to dwell one moment longer on the topic of "art," I am not an artist but I have the greatest respect for people who create art, for all they add to our culture. But if you think about the artistic fields, they all seem to be amenable to highly individualistic and idiosyncratic visions. Say I'm a sculptor and I work with clay - I have full freedom to apply my artistic vision to the clay I work with. If I mess up, I can't really hurt the clay. Same for painting on canvasses, making music with a violin, singing (though my singing certainly hurts my neighbors' ears!). But contrast these idiosyncratic endeavors with teaching students, where we really can - and do - have a major impact on the "material" of our "art." In the case of teaching, I think we have a responsibility to apply the best that art and science have to offer. I know when I first started teaching, I had no idea how to teach and I'm sure my teaching had a deleterious impact on some of my students (wherever you are, I'm sorry!). I think our students deserve better than that. Few modern professional fields allow a practitioner ultimate control over the design and delivery of his product - we are all subject to the best influences, whether artistic or scientific.

  3. From Peter's entry: "As long as we have organizational safeguards so that evaluation data is not misused, we should not fear evaluation - it can only make
    instruction more effective."

    What will these safeguards be? Something like, "Even if you stink, we'll let you keep teaching; you just need to make improvements x, y, and z" ?

    These discussions have in large part been ones about sovereignty, and worthwhile ones, I might add. But I wonder if we need a more frank recognition that underneath all these discussions is a base fear: teachers' fear that if they are observed and found to "not teach well," then they might be fired instead of given an opportunity to improve.

  4. Peter,
    You say, “we have to start somewhere, and we do that by applying certain interventions and measuring their result.”
    You use the word “intervention” to refer to what teachers do with students in an educational venue.
    This isn’t a common word to use in this context. I looked up the word and found this definition of intervention used in a medical context:
    “A medical term in which patients are viewed as passive recipients receiving external treatments that have the effect of prolonging life.”
    Your use of the term suggests you view students as passive recipients receiving external treatments. This seems to be the view of students implicit in the outcome paradigm.
    -Phil Thurber
