Tuesday, November 16, 2010

Martha Bailey: TLC coordinator's point of view

Assessment as Professional Development

According to some philosophers, all of us are ethical egoists, and we only choose to act when an action is to our benefit. While I do not hold to that as the best explanation for ethical behavior, I think the theory provides a useful position from which to discuss learning assessment with PCC faculty. We do many things as faculty because we are told they are part of the job: writing syllabi, conducting classes, grading—oh, yes—we have other motivations, too, but sometimes it just comes down to: I have to do this. Learning assessment, particularly at the program level, comes with that kind of mandate, as well as some extrinsic motivations: do this, and do it well, or we could lose accreditation, and that has more than a little impact on a community college like PCC.


But what if we take the egoist’s view and ask, “What’s in it for me (besides keeping my job and helping students advance)?” Where do we find the personal part of assessment of learning? I want to suggest that, at least in part, the answer is that assessment gives us a tool for professional development. But before I pursue that idea, I want to acknowledge a position raised in a comment on last week’s blog. Jayabrush wrote, “These discussions have in larger part been ones about sovereignty, and worthwhile ones, I might add. But I wonder if we need to have a more frank recognition that underneath all these discussions is a base fear, teachers' fear that if they are observed and found to 'not teach well' then they might be fired instead of given an opportunity to improve.”


A similar fear was raised in an earlier posting, too. Both comments note that there is a real risk if the only use of assessment of an individual faculty member’s work is punitive. And it does happen, particularly for part-time faculty. Once a person is hired, outside of egregious actions, he or she will continue to be given classes, because department chairs need to fill teaching slots. For the last (I’m not sure how many) years, the evaluation of these instructors’ teaching performance has been minimal—until an instructor applies for assignment rights, the key that gives access to staff development funds and other opportunities. If the instructor is then denied such rights (and that does happen), the person can no longer teach at PCC, at least in that subject area. I’ve seen this happen to instructors who had taught for years but never applied for the rights. So, of course, these new moves toward assessment can appear to be another punitive move.


But there is another way to view assessment, and one that might even address the fears. An aside here: while the mandated assessment for accreditation is at the program level, and is not intended to single out a particular instructor, it is possible that, over time, one instructor’s classes may be deemed less successful than those of other instructors. If that happens, I would advocate moving the person into a plan similar to the one I am about to describe. But if assessment is considered an avenue for becoming as effective an instructor as I can be, then I no longer need to fear it, though it probably will be uncomfortable at times. And if I can control when and how the assessment happens (not waiting until it is demanded), and even recruit help from my choice of allies, then assessment becomes a tool for both student learning and faculty learning: assessment becomes a two-way street (a phrase I have borrowed from a TLC brainstorming session on assessment; I don’t recall who came up with it).


What I mean is that we use assessment of various forms in the classroom to determine whether students are learning. Formative assessment, in particular, is designed to help both students and faculty see where students are succeeding and where they need more work. But it can do the same for faculty: if students aren’t “getting it”, they often will offer suggestions of ways to help them. Not all of that feedback will be equally useful: some is worthless, while other pieces are absolute gems of insight.


The most useful level for assessing for professional development, then, is the classroom level, and not just toward the end of a class (the traditional student evaluation). It would be nice to be able to use longer-term, post-graduation assessment as well, but at the moment that isn’t practical. Rather, for the best possible interactive assessment and improvement throughout a course, there needs to be both assessment planned by the instructor and assessment offered spontaneously by students. The class must be structured and carried out in an open and welcoming manner for the latter to be offered. For either of these to be of any benefit toward professional development, the instructor does have to be willing to take the feedback as encouragement to improve and not simply as criticism. Someone who can work with student feedback, differentiating between feedback of value and feedback offered with other intent, and who becomes a better instructor as a result, will be able to approach evaluations by administrators with much greater confidence.


What continuous formative assessment in the classroom means will obviously vary with the course being assessed, since there are many types of courses offered at PCC. But the idea, and one that doesn’t need to add to the instructor’s time burden in the way Phil Seder described in his posting, is to regularly do small assessments of what is happening in class, and make course corrections along the way. Now, sometimes the needed course correction will be one that cannot be applied until the next time the course is taught—if my major assignment needs work, I’m not going to fix it for students this time around. But if I get feedback as we are going through, I can note that and include it in my course development, rather than having it come later as another task (this is something like the course development feedback loop Peter Seaman discussed).


The other piece here is that students will not only speak of course content and materials: sometimes they address aspects of course delivery, that is, issues related to the instructor directly. This offers the biggest opportunity for professional development—to work on me and my skills. And if this feedback is coming in continually, then when I see an opportunity to learn in an area where I am weak, I can jump on it. And, yes, it may mean the students get fewer comments on a piece of work, but if it leads to me being a more effective instructor overall, that benefit greatly outweighs the small harm.


Now, some might say that I am writing a piece such as this blog for egoistic reasons: after all, I do coordinate the TLC (Teaching Learning Center) at Cascade, and we do offer some of those sessions you might come to for improvement as an instructor. And I won’t deny that is somewhat true. But part of what I have learned as a TLC coordinator is that students benefit (learn more) when faculty continue to develop their skills; students may learn from less-skilled instructors, but they truly appreciate getting to “study with” a highly effective teacher. If we can let the students assess us even as we assess them, then learning truly does become “everyone’s business.”

Monday, November 8, 2010

A "Scientific" Basis for Assessment? By Peter Seaman

Phil Seder made a couple of points in his blog entry that I would like
to use as jumping-off points. But first a few words about me and where
I stand regarding assessment:


I started off in higher ed as an English teacher. I taught freshman
composition and literature courses, and I frankly struggled with
questions of worth - How do I know whether my students are getting
anything worthwhile from my classes? Is teaching really worthwhile, or
is it a waste of time? Could the students get the same thing if they
just read the textbook? How many drinks at happy hour will it take me
to forget these weighty questions? (I asked that last question only on
Friday afternoons). Eventually I became really curious about the
process of education and I started taking education courses on the
side. I even went so far as to get a certificate in teaching English as
a second language (ESL teachers use some fascinating methods), and then
I dove in head-first: I went to grad school full-time to get a master's
degree in something called Instructional Systems Technology.


"Instructional Systems Technology" sounds very intimidating, but if you
sound out the words, it's not so bad. The word "systems" implies that
instructional designers (or IDs) take a "systems" view of instruction,
meaning that we look at issues like repeatability, reliability,
validity, and so on. The word "technology" implies that we look for
ways to infuse technology in instruction (technology can be a method or
set of techniques, as well as a computer program).


Everyone who studies IST, or "instructional design" (its more common
name), learns a model called the ADDIE model.


ADDIE:
A = Analyze (the learning task, the learner, and the learning context)
D = Design (the instructional intervention - it can be a book, an activity, a computer application, or a combination of these, and much more)
D = Develop (the instructional intervention, also known as "production" - a painful word for most IDs)
I = Implement (the intervention, or try it out)
E = Evaluate (the results of the intervention, or find out how it worked - what long-lasting results it had upon behavior, cognitive functioning, etc.)

The process is iterative: we use the results of evaluation to improve
instruction continually (draw an arrow from "E" back up to "A" and we
can dump the results into our next "analysis" phase, and start the
process again).
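
For readers who think in code, here is a minimal Python sketch of that arrow from "E" back up to "A". The five phase names are ADDIE's; everything else - the function bodies, the "findings" dictionary handed from one cycle to the next, and the simulated success rates - is a hypothetical illustration, not an actual PCC or instructional-design tool.

```python
# A minimal, illustrative sketch of the iterative ADDIE cycle described above.
# All names and numbers below are hypothetical stand-ins.

def analyze(findings):
    """Study the learning task, learners, and context, folding in whatever
    the previous evaluation taught us."""
    return {"needs": ["apply procedure", "interpret results"],
            "prior_success": findings.get("success_rate", 0.5)}

def design(analysis):
    """Plan the instructional intervention (activities, materials, media)."""
    return {"activities": [f"practice: {need}" for need in analysis["needs"]],
            "prior_success": analysis["prior_success"]}

def develop(blueprint):
    """Produce the materials -- the 'production' step many IDs find painful."""
    blueprint["materials"] = [f"worksheet for {a}" for a in blueprint["activities"]]
    return blueprint

def implement(course):
    """Try the intervention out with learners (simulated here: each revision
    nudges the observed success rate upward)."""
    course["success_rate"] = min(1.0, course["prior_success"] + 0.1)
    return course

def evaluate(results):
    """Judge how well it worked; whatever we record here becomes the input
    to the next Analyze phase -- the arrow from E back up to A."""
    return {"success_rate": results["success_rate"]}

findings = {}
for cycle in range(1, 4):  # three passes through the cycle, for illustration
    findings = evaluate(implement(develop(design(analyze(findings)))))
    print(f"Cycle {cycle}: observed success rate {findings['success_rate']:.0%}")
```

The design point the sketch tries to capture is simply that the only state carried from one cycle to the next is what evaluation produced - which is exactly what it means to "dump the results into our next analysis phase."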


You can see that ISD (instructional systems design - it goes by many
names) is really comfortable with evaluation. When I heard that the
accreditation authorities want PCC to use the results of assessment to
improve future instruction, I thought, "Of course - that's what you use
results for."


Okay, back to Phil Seder.

Phil made an excellent point in his blog entry about instructors doing
all of the steps - analyzing, designing, developing, implementing,
evaluating, which is analogous to washing the turkey, dressing the
turkey, cooking the turkey, serving the turkey, washing the dishes
afterward, and asking everyone how it tasted so you can improve your
recipe for next time. Phil has actually put his finger on one of the
great frustrations of my profession as an ID, namely the unwillingness
of instructors to *allow* anyone else to assist with any part of their
personal ADDIE process! Many instructors become so comfortable with
their own way of teaching that they view any advice or consultation as
an intrusion (I've even known instructors who refuse to let another
instructor observe any portion of their teaching process).


I'll admit: Letting someone else critique your professional work can be
very scary. And evaluation itself is unfortunately fraught (in many
organizations) with performance implications (administrators can use
evaluation results against teachers). And certainly there are control
issues (if I get to control every aspect of the instructional process,
then it will happen the way I want; to give away control to someone else
is to invite the possibility of loss of control). My larger point is
that instruction improves only through regular evaluation. To give away
control of some aspect of the process is to open up oneself to growth -
again, as long as there are protections within the organization.


Phil talked about how hard assessment is to do, and I agree. Sometimes
I wonder if it isn't better to prepare students for some kind of
external assessment - like the aviation instructors who prepare students
for FAA exams, or the real-estate instructors who prepare students for
state licensing exams. At least in these cases I can know the
assessment is external to me and free from my own bias. But it's still
scary because I as an instructor lose control of the situation the
moment the student walks out of my classroom (whether onground or
online). And of course when instructors are measured by how well their
students perform on standardized tests, instructors will "teach to the
test," which unfortunately limits learning to the confines of the test.


I guess I would close this blog entry by pointing to the necessity to
show results in almost any professional endeavor, and to wonder why
higher ed has been able to NOT show results for so long (sorry about
splitting that infinitive!). A few years ago, I overheard an instructor
at PCC say, "A student should take my course because she or he will be
better for having taken it." Okay, I accept that statement. But could
you say the same thing about a walk around the building? ("The student
will be better for having taken a walk around the building"). I'm sure
that every instructor knows, in the deepest chambers of the heart, that
students are better for having taken a course. But the fact is, it's
not enough in any profession to assert some value with no evidence of
value. And my profession of ISD absolutely depends on this evidence.
Think about it: If anyone can design instruction that works as well as
any other instruction, where is the value of the designer? I look at it
like riding in an airplane: When I first rode an airplane, I was really
frightened because I did not think there was any basis for an airplane
to stay aloft - it seemed like magic. But when I talked to pilots and
read books about aviation (kids' books are great for this kind of
explanation), I realized that aviation depends upon the application of
scientific principles: move a specially shaped object (fuselage with
wings) over the ground (and through the air) fast enough, and the object
will rise in the air - it will take off! It has to - the physical
properties of the earth demand it. So why can't we apply the same
principles in instruction: apply certain forces and depend on a certain
result?


Of course you say, "But students are not objects, uniformly shaped,
moving through the air at a certain speed." And of course, you are
correct! Students are humans and therefore arrive in the classroom with
an endless variety of behavioral, cognitive, and psychomotor attributes
that are incredibly hard to divine. But we have to start somewhere, and
we do that by applying certain interventions and measuring their
results. As long as we have organizational safeguards so that evaluation
data is not misused, we should not fear evaluation - it can only make
instruction more effective.

Tuesday, November 2, 2010

Phil Seder's Assessment Angst

I come from the world of business marketing. In that world, we plan, we produce and we measure success in order to refine programs and improve outcomes. I believe this to be a valid approach. I teach my students that it is an essential approach in the business world.


I thus find myself perplexed by my internal resistance to the idea of assessment that is being pushed down from the highest levels of government and has now come to roost at the community college level. There is no doubt in my mind that we can improve delivery, and there is no doubt in my mind that assessment can lead to refinements in course content or delivery that result in better outcomes. So why my angst?


I think it is because the business world differs in a fundamental way from the world of classroom education (well, in a bunch of ways actually, but I’m going to deal with just one here). In business, marketers work with researchers to determine consumer or business needs. Those marketers pass requirements on to designers and engineers who develop the products or services. The engineers pass the designs to manufacturing and sales to create and deliver the products. And finally, the marketers (often with the aid of accountants or analysts) assess the results before another round of the same is initiated.


Now let's look at who performs these tasks in the education world. Basic research and determination of customer needs? At the classroom level at least, the teacher. Product design? The teacher. Product development? The teacher. Product delivery? The teacher. Product assessment? The teacher.


This is not to say that administrators do nothing. There are monumental issues of overall program development (e.g., should we offer a nursing program?), new program implementation, facilities construction and maintenance, marketing, budgeting, negotiation, discipline and, ah yes, even employee supervision. I thank my stars that trained professionals stand ready to perform these essential duties.


But the bottom line is that the classroom teacher is responsible for every aspect of the development and delivery of the product in the classroom. In other words, teachers perform tasks that are the responsibility of numerous differently trained professionals in the business world. I know this because I've managed products in both worlds and, frankly, it is one of the things that excites me about the daily challenge of teaching.


But therein lies a conundrum. To teach better, we are being told, we need to assess more. We need to be more like the measurement-driven business world. The teacher, though, is not like the business professional. Teachers are responsible for the entire life cycle of product development and delivery in the classroom. To assess more means that they either have to 1) work more or 2) spend less time on planning, development and delivery.


Now we all know that there are those who can work more. But for many teachers I see around me, their waking hours during the academic year are filled with the activities of course development and delivery. I doubt that many would look kindly on giving up the few moments of personal time they have during the week. As to the old saw "work smarter, not harder," well, it's a trite and meaningless age-old piece of business wisdom, custom-canned for delivery to the survivors of corporate layoffs as they contemplate a future of 24/7 employment. At the very least, when I have been in those situations, I had the comfort of higher compensation to balance the loss of personal and family time.


Today's reality of an assessment-driven education system, though, is that the classroom teacher will have to cut back on planning and delivery activities to respond to the assessment demands. And the more dedicated the teacher, the more they will need to cut back, since they are the ones who already spend their entire waking time engaged in better delivery. The result: we know what we're getting, but what we know we're getting is worse than what we were producing before, when we didn't know what we were getting. Read that twice, if need be, and then, especially if you have been teaching for a decade or more, contemplate whether it does not precisely describe the ultimate results of modern educational policy at the high school level.


I see it in my own work. Even in the past five years, I have seen a creeping slide towards more meetings, more time discussing the nuances of outcomes language (e.g. our graduates don't communicate with coworkers, they interact with coworkers), and more time discussing assessment. Where I have cut back is, first, in my personal life, where I now spend limited time on my once-thriving second life as a sculptor, and second, in my course preparation and delivery. It's not that I drop professionalism; it's just that I make one course improvement per term where previously I made three or four or five. And the extensive paper markups that I used to do, and for which I was often thanked by more serious students, have become rare, often replaced by a mere letter grade. It's not what I want. But the fact is that the time spent responding to federal officials and accreditation committees must come from somewhere.


Ultimately, the question that I can imagine being thrown back at me is this: "Then would you just say to give up on assessment, even knowing that better assessment can create a better product?" In a nutshell, yes. Taking this back to my original business analogy, if a shortage of resources forced me to make a choice between building a better product and measuring consumer acceptance of that product, I would err on the side of building the best product I could. In a world of unlimited resources, I would do it all. In our real world of limited resources, trade-offs will be made.


Of course, I do assess. In many ways and on many dimensions. I assess writing skills in business classes (which doesn't make one very popular, I might add), I assess math skills, I force students to present, I make them work in teams. Do I know scientifically whether my assessments are creating the best students, or whether I'm doing a better job teaching than I was five years ago? No. Intuitively I believe so. But I know I am putting the most I can into my product and I am comfortable that in a world of scarce resources, I am allocating my time in an ethical and professional manner.

Monday, October 25, 2010

Steve Smith's Assessment Journey


I started teaching in a volunteer ESL program at a local church next to the University of Washington, which I was attending as an undergraduate. I spent the first week doing a lot of exercises I had found in one of the many ESL instructional books that have been written. I was working with a group of Hmong. It wasn't until the second week that I realized that many of the students were illiterate in their own language. Much of what I had been doing had been worthless. I realized there was more to this teaching and learning business than I had thought.


After graduating, I moved to Ecuador, South America. I taught English and eventually became the director of a language and cultural center. I was responsible for 30 Spanish and English instructors, many of whom had no teaching experience. I began to research teaching and learning strategies. I started to wonder if there were a systematic way to approach teaching and learning.

After 5 years we returned to the US. I started teaching computers. I also started a master’s program in adult education. Instructional design, Knowles’ principles of andragogy, Gagne’s nine events of instruction, and constructionist theories of teaching and learning opened my eyes to a whole new vision of teaching and learning. I was also introduced to assessment through Kirkpatrick's four levels of evaluation. While the model is now dated, it transformed how I viewed evaluation. Whether the student can do the learning task in class was only the beginning. Can they perform it outside the class without classroom support? And finally, the most important and hardest to assess: did the learning solve the original problem? I realized that what I did inside the classroom needed to be assessed, at least in part, on what the student could do outside the classroom. This was a monumental shift in how I viewed teaching and learning. I started to believe that if it could be measured, it could be learned, and that instead of grading on a bell curve, my expectation was that everyone could succeed if I applied the appropriate instructional design principles.
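
(For readers who want the four levels laid out explicitly, the small sketch below restates them. The level names are the standard ones from Kirkpatrick's model; the guiding questions for levels 2 through 4 paraphrase the paragraph above, while level 1 and all the Python names are illustrative additions, not anything from the original post or a PCC instrument.)

```python
from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    """Kirkpatrick's four levels, ordered from easiest to hardest to assess."""
    REACTION = 1
    LEARNING = 2
    BEHAVIOR = 3
    RESULTS = 4

# Levels 2-4 paraphrase the questions in the paragraph above; level 1
# (learner reaction) is the one the post does not dwell on.
GUIDING_QUESTIONS = {
    KirkpatrickLevel.REACTION: "Did learners find the instruction engaging and useful?",
    KirkpatrickLevel.LEARNING: "Can the student do the learning task in class?",
    KirkpatrickLevel.BEHAVIOR: "Can they perform it outside class, without classroom support?",
    KirkpatrickLevel.RESULTS: "Did the learning solve the original problem?",
}

for level in KirkpatrickLevel:
    print(f"Level {level.value} ({level.name.title()}): {GUIDING_QUESTIONS[level]}")
```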

I taught off and on in various formats, including distance learning, for the next 10 years. Recently, I finished the coursework for a PhD in Community College Leadership. In this program I was introduced to the concepts of chaos theory, Freire’s transformational and social-critical theories of learning, qualitative vs. quantitative research, living systems, and Wheatley’s application of them to organizations. I realized that I had become too reductionist in my teaching and learning. I needed a more holistic approach. Some things are hard to measure, and when you try to measure them, they change.

These experiences have shaped my view of assessment. I believe in the concept of assessing learning based on what the students can do “out there”. I believe that we need to measure not only what goes on in our classroom but also the larger core outcomes. This process is messy, and we may not always be able to cleanly assess some of the critical learning that happens in our classrooms: the student who becomes more confident, more engaged in their own learning, open to new ideas, more excited about continuing their education, and more willing to take emotional and intellectual risks. These are the manna from heaven that we seek as teachers but may not necessarily be able to assess. I believe we owe it to our students to keep struggling to find the right balance, assessing while realizing that learning is not necessarily the sum total of its parts. Assessment is not an either/or but rather a both/and focus. We need to assess in order to continue to grow and improve as an institution, while allowing room and time for those things which are difficult to assess to flourish.

Steve Smith is the director of Curriculum Support Services


Tuesday, October 19, 2010

What does PCC get from being accredited, anyway?

From Laura Massey, Director of Institutional Effectiveness

I don’t know about you, but I am not motivated by scare tactics. A sentence beginning “Without accreditation” followed by frightening statistics does nothing more than bring forth my inner twelve-year-old, who mentally shuts down while rebelling in full voice. Instead, here are a few facts.

Because PCC is an accredited college . . .


  • Almost 40% of our students are able to receive Federal Financial Aid dollars.

  • The credits earned by the 5,100 students who transfer to another college or university each year are accepted at the transfer institution.

  • Last year’s 3,400+ graduates have credentials that are valued by other institutions, employers and licensing agencies.

  • Almost $14 million in recently awarded grants will provide improved student services, new equipment, expanded curriculum and so much more.


But wait. This is supposed to be an assessment blog. Why the emphasis on accreditation?


Because, to maintain our accreditation, PCC is required to “hasten its progress in demonstrating, through regular and systematic assessment, that students who complete their programs have achieved the intended learning outcomes of degrees and certificates. Further, the college must begin to demonstrate, in a regular and systematic fashion, how the assessment of student learning leads to the improvement of teaching and learning” (from the visiting team’s response on behalf of the Northwest Commission on Colleges and Universities, Spring 2010).


Scare tactic? Hardly - although it is certainly an important statement requiring action. Let’s first put this in context at PCC.


We care deeply that our students are learning. Period. It is our professional responsibility to understand what is working well and make improvements when needed. This is nothing new or different from what hundreds and hundreds of PCC faculty do in classrooms each day.
However, Northwest is also asking that we demonstrate (in their language, that means document) that we regularly and systematically assess whether graduates have achieved the College-defined outcomes, and, furthermore, that we show how what is learned through assessment is used to make changes, where needed, to improve teaching and learning.


The faculty-driven Learning Assessment Council developed an assessment model, which was initially implemented in 2009-10. As support staff to that work, I could see how institutional learning is both energized and realized through faculty creativity, insight and ongoing commitment to excellence - the same qualities that support student learning.


As we “hasten” our progress and fully implement the model this year, I believe we are on the path to fulfilling Northwest’s recommendation.

Tuesday, October 12, 2010

From Cynthia Killingsworth, Accounting Instructor

I enrolled in Sylvia and Shirlee’s assessments class last spring with the narrow objective of improving testing in my accounting courses. The class ended up prompting a major realignment of my teaching style, and I learned to appreciate the unavoidable bottom line of the assessment movement. We talk a lot about measuring student performance, but it is really a matter of facing our own teaching effectiveness head-on. The harsh line “It’s the teaching, stupid!” comes to mind from a 2006 article about retooling math education. Here are some of my thoughts from this experience:

Do I understand what should be assessed? I discovered that I was assessing the concepts easiest to measure, such as preparing a financial statement in proper format, but may have failed to fully consider those concepts harder to measure but equally relevant for student success, such as interpreting financial information. This led to a rapid detour away from testing as my focus and toward reconsidering what I was teaching.

Do I understand what is most relevant for course content? In this detour I had to invest some time finding actual research in my discipline regarding the concepts or skills important for accounting student success. Ironically, the PCC assessment focus last year was critical thinking and problem solving, and when I researched my discipline, I rediscovered that the Accounting Education Change Commission had completed an extensive study in the 1990s determining that critical thinking skills were being ignored in college accounting education. Accounting educators’ subsequent response was to include one mildly unstructured critical thinking problem at the end of each accounting textbook chapter, still preceded by dozens of highly structured procedural questions. With that as their cue to what is relevant, it is not surprising that most accounting instructors still lean towards procedural content and that not much has changed.

I learned that it pays to reexamine our discipline to determine if our focus is supported by research. I found that this might not be true in accounting education. Another interesting irony of the Accounting Education Change Commission study is that many of the commission’s recommendations aligned perfectly with the PCC core outcomes. I now have even more confidence in the wearability of PCC’s overall vision!

Do I understand the level of my students’ academic development? I reviewed the research subsequent to the Accounting Education Change Commission’s findings and learned more about student cognitive development. After the report was released, some college professors started introducing complex financial analysis projects at all levels of accounting courses. The problem with this quick fix was that most students cannot handle complex financial analysis until their senior year or graduate-level studies. A tiered approach, first introducing students to concepts involving uncertainty and then assigning increasingly analytical tasks, was found to be more effective.

Do students need to know about all this? Students are the most significant stakeholders in the assessment movement, so it makes sense that they should be aware of what is happening. I was teaching a non-accounting general education class (Introduction to Nonprofits) at the same time I was taking the assessments class and spent some time discussing these concepts with my students. I was pleased and relieved to find that students were very interested in PCC’s core outcomes and wanted to understand meaningful ways to measure the quality and progress of their education. This desire confirmed a key assessment concept discussed in Sylvia and Shirlee’s class. Letter grades may stay around, but more of our effort needs to be focused on an assessment system that gives feedback and advice to students throughout the term. It should be just as much of a learning tool as their textbooks.

I have to admit that the above experiences unsettled my relatively peaceful teaching bubble, but my perspective has been broadened along with my courage to face the inevitable changes on the horizon. I remember when the Accounting Education Change Commission’s report was released about the poor quality of accounting education. I had just graduated from college and was facing the reality of paying my student loans. The thought occurred to me that I deserved a refund! As I face the future ramifications of “value-added” education, I hope I never forget my perspective as a primary stakeholder in education from 20 years ago.

Tuesday, October 5, 2010

Assessment: a blog post

by Andy Simon
September 2010



I haven’t been involved in the discussion of assessment for more than a year. I apologize if the issues I raise have already been heard and discussed. I confess that I am skeptical about the whole enterprise, and yet I know that many intelligent, aware, hard-working people–some of them my friends–have put in a great deal of time and effort examining proposals and strategies. I am not completely comfortable saying, in effect, to these folks, “your efforts are for naught, the project is inherently flawed.” For one thing, I may be wrong. Just on the other side of the next committee meeting a beautiful, effective, and efficient approach may be discovered or created. Yet, I was asked to write about my objections–qualms might be a better term. And so I shall.

The point of the project, I remember being told, is that taxpayers, politicians, and business leaders want to know that they are getting a good return for the money they invest in higher education. What they want to know–and what we must be able to show them–is that we are indeed enhancing students’ lives–adding value to them–by educating them.

My first observation about the project so described (and perhaps inaccurately so) is that we should be very careful about how we think about the relationship between those who sign the paychecks of teachers and those who receive the checks. John Dewey, a century ago, in defending the concept of academic freedom, offered a similar warning. He said, in effect, that although the trustees of a university are in an economic sense the employers of professors, the job of teaching is primarily to serve the public.


Nowadays, I see the idea of “the public” as a bit more problematic than Dewey did, and besides it might seem plausible (though I would say mistaken) to equate “the public” with the taxpayers. I would say that though we are employed by the taxpayers (in public education) we are responsible primarily to the generation we are educating and beyond that to future generations as well. For if we do not educate the current generation, who will there be to educate the generation that follows, and the generation after that?

This observation points to one of the problems with the very model that the concept of adding value to students’ lives depends on. It is, after all, a rather industrial model–students’ lives are the raw material we work our magic on. They pass through our educational factories and, like raw steel turned into automobiles, their value is enhanced. But processing students’ lives can at best be only part of the function of our institutions of higher education. Another function–and a vital one–is to preserve some important things: ways of thinking, intellectual skills, bodies of knowledge, bodies of literature and the keys that unlock their meaning, bodies of art and the keys that unlock their meaning, historically significant works that inform our own culture and literature, and much, much more. How much value is added to our civilization because we can understand that the piles of stones that are scattered across Europe are the remains of a once vast empire, and because we can study the ideas that enabled that empire to rise and that caused it eventually to fall? Having that kind of knowledge adds at best a pittance of value to any one student’s life, but does that exhaust its value to our civilization?


It is important to note that higher education is one of the few extant institutions that predate the industrial revolution. It would be a grave mistake to force it into an industrial model merely because that is the only model we can think of. And yet, that is precisely what assessing education in terms of processing and credentialing students does. The success of the industrial model in many spheres of our lives has made us forget that there were–and hence are–other ways of thinking about education. One of them involves seeing education as a process of preparing and inducting students into a community–a community of scholars.

Clearly, there are problems with this model, too. One of them is that a closed community can perpetuate an unfair exclusivity. I was tempted to say that a pre-industrial approach to education saw the point as inducting students into a brotherhood, which would have been historically accurate as it reflects the way the “community of scholars” model has in the past excluded people who had a legitimate claim to be admitted. But the solution to such problems is not to throw out the model altogether but to try to ensure that there is open access to the community of the educated.

I can say quite explicitly what I think we must protect our institutions of higher education from: market forces. The threat to education, which has existed at least since the advent of capitalism, has been enormously amplified by the computerization of our culture. Education has become a commodity–something to be bought and sold. That might not be so bad, but in our society, the only commodities that can be successfully bought and sold are ones that can be mass produced. In order to be mass produced, education must be standardized and homogenized. When I go to the store for, say, a pair of socks, the characteristics that make the socks I come home with unique and individual entities in the world are unimportant. What is important is that the socks I come home with are interchangeable with any of several dozen in the store I shopped in, or any of tens of thousands in the stores the sock manufacturer supplies. That’s the way it is with mass-produced commodities.

We are already well on our way to commodifying higher education. When we discuss the question of whether college credits earned on-line are equivalent to credits earned in face-to-face classrooms, we are addressing the interchangeability of our product. Of course, the answer to our question is obvious: if earning the credits in the two venues adds equivalent value to our students’ lives, then they are clearly equivalent.

But if that is the approach to education that we want, why stop there? Why have professional teachers at all, with all their quirks and personalities? Why not have Harvard or Stanford develop the most effective on-line curricula (since massifying the classroom is impractical, though not impossible) as measured by value added to students’ lives, and then hire educational technicians to administer them? I’m being a bit facetious, but I really do fear that something like what I’ve described is the future for higher education: most students will enroll in institutions (note I didn’t say “attend”) where they will be offered the best on-line curricula the school system can afford. Only the elite, that is, the wealthy, will be able to afford colleges that offer face-to-face instruction from actual professors.

That poses a serious question: is there any real value in the uniqueness of face-to-face instruction by professional instructors? Maybe the answer is no. Maybe the individuality of instructors, their quirkiness, their personalities are irrelevant, or worse, impediments to the educational process. Obviously, I don’t think so. And I don’t think so because what I remember most about my undergraduate education is not the content of the classes but the inspiration I took away from many professors–and some of the quirkiest were the most inspiring. It may be just me, but I think inspiration is best transmitted face-to-face. I don’t think it comes across all that well through a computer screen. (I’m pretty old fashioned, but I believe that the immediate experience of presence is qualitatively different from any technology-mediated experience, no matter how life-like.)

Now that I’ve struck a personal note, let me pursue it a bit longer and then bring this long post to a close. I have no doubt that my education added a great deal of value to my life. Probably the most valuable contribution was never explicitly stated but implicit in everything I studied. My professors revealed to me the vast world of ideas–not specifically philosophical ideas, but the world of thought and knowledge. At some point I caught onto the notion that, if I played my cards right, I could spend the rest of my life studying anything at all that interested me, whether it be the sex lives of the Greeks (the revelation came to me while I was barely post-adolescent) or the musical traditions of the former Portuguese colonies, or anything else.

What a valuable lesson–to be shown the vast continent of human knowledge and to be led to some of the points of entry to it. We have so many opportunities to enhance and enrich our students’ lives. We give students literature to read and insist on discussing it, and by so doing we reveal to them the inner lives of other people and, by implication, their own inner lives, too. I feel some regret that throughout my teaching career I didn’t emphasize nearly enough to my students the importance of finding and creating beauty in their lives. I don’t think the intangible enhancements we have to offer are any less real because they are intangible, nor are they any less real because they are unmeasurable. I’m probably just tilting at windmills, but I think we have a responsibility to the future to do our best to protect education from the market forces that would industrialize it, and by so doing would destroy its humaneness and its ineffable value.