Tuesday, November 8, 2011

Grading Gets Outsourced to India

by Shirlee

The way PCC is approaching the new demand for assessment in Higher Education is only one of several models that have been adopted across the landscape of colleges and universities in the U.S. These differences became apparent as members of the Learning Assessment Council started to scout around in our first year of existence (2008-9), which was provided to us as a "year of inquiry." Here are short descriptions of some of the directions colleges and universities have decided to take:

(1) Many institutions have adopted a single high-stakes standardized test designed to measure core learning outcomes like communication and critical thinking. These are usually "value added" measurements, intended to show how much a student's competency has developed between entry in the first term and exit with a degree. These tests allow one institution to be compared with another, which is part of the call for "accountability" mentioned in last week's blog.

This first approach is based on an assumption that assessment of student learning is a separate kind of activity from instruction. Teachers may be skilled at teaching, according to this thinking, but they are not experts in measuring learning. That measurement expertise is called psychometrics, and psychometric experts design and continually redesign the major competing high-stakes tests used by colleges and universities in this first model.

(2) In the second model, the idea continues to be that assessing learning should be left to the assessment experts, so as to disrupt teachers' lives as little as possible. In this case, though, assessments are customized to different SACs or departments by a team of psychometricians who are called in to (i) interview faculty about their specific student learning outcomes, and then (ii) design customized assessments to be used by all instructors in that subject area. In this way, for example, experts might come and consult with the PCC history SAC to determine what "critical thinking" means in the area of history, across PCC's history curriculum. Then every instructor of a given history course would be required to administer the test the psychometricians came up with, and the results would be examined to see what they say about the effectiveness of history instruction at PCC. This model gives assessment results that can be used for continual program improvement (the other main purpose of assessment, as mentioned in last week's blog). The major company that has emerged to do business in this second model is EduMetry.

(3) Many institutions have created a new administrative office and put someone in charge of organizing faculty assessment work. Often this office oversees the adoption of an expensive assessment software system and then trains faculty (usually department chairs or supervisors) in how to use it. The software system ensures consistency of reporting and eases bundling of assessment results for display to the accrediting bodies. For this approach to work, the administrator has to have the power to compel reluctant faculty both to do assessments and to learn how to report the results using the system. Faculty are involved to a greater extent in assessing than in either of the first two models, but they tend to be viewed by administrators as reluctant participants likely to drag their feet....

(4) Some colleges and universities have decided that assessing is a critical component of the instructional process and must be kept as part of the bundle of teaching tasks. The idea here is that faculty are deeply invested in successful student learning, and when they see the connection between assessment and improved learning outcomes, they will embrace assessment as a new and useful tool for doing their important jobs even better. This model leaves assessment in the hands of faculty, in the form of an assessment committee or council. PCC was set on this path by the faculty Learning Assessment Council's recommendation that program/discipline assessment be the responsibility of SACs and be implemented as an ongoing component of Program Review. This last model is the only one that is fully respectful of faculty professionalism and expertise....

The national body for the union that represents PCC's instructors and APs has endorsed this last model, coming up with an interesting slogan: in higher ed, we should "count what counts." I remain deeply convinced that this last model is the best for both students and teachers in the long term. But I am also aware that some faculty at PCC would have picked one of the other models, had they had the choice. And I often call to mind a participant in one of our first assessment classes who voiced a very strong positive response to the second model above, and to the company that is most successful in that endeavor, EduMetry.

EduMetry has a varied approach to assessment activities in Higher Ed, and I recently came across another aspect of their business plan in the Chronicle of Higher Education. EduMetry has started outsourcing grading to India through their program called Virtual TA. (See http://chronicle.com/article/Outsourced-Grading-With/64954/) In this part of their business, they devise rubrics for assignments, train and norm a group of assessors on use of the rubric, and then ask their assessors to provide detailed, rich feedback on student papers -- feedback of the sort we all might dream of providing, but are often too busy to actually give. One sociology instructor at a community college is quoted in the article:

And although Ms. Suarez initially was wary of Virtual-TA—"I thought I was being replaced"—she can now see its advantages, she says. "Students are getting expert advice on how to write better, and I get the chance to really focus on instruction."

It is a new world of assessment in Higher Education. With so many things changing so rapidly, and with many different kinds of responses to the changes being pioneered at different institutions, tuning in to assessment news provides lots of surprises. I used to think that education was a service that couldn't be outsourced. But EduMetry has surprised me. The logic of it is just an extension of the thinking that leads to the first two assessment models I described above -- if teachers are experts at teaching and psychometricians are experts at assessing, and each of us should do what we are expert in, then assessing should be peeled off from the work of instructors and handed over to someone else....

I heard an instructor say the other day that grading was the least satisfying part of his job, and he wished he could teach without having to grade. I wonder if he would really be so happy if EduMetry granted his wish.... Instead of doing more assessing, as we have asked instructors to do at PCC, the day may come when we will do no assessing at all. In the back of my mind, I can hear David Rives (president of Oregon AFT) talking about the de-skilling of the instructor's job...

I say that perhaps we should be careful what we wish for....

2 comments:

  1. I think our faculty-driven process is a good one. However, I sometimes wish I had an expert to talk to. My husband is in a graduate program in Psychology, and some of his colleagues have spent their entire program studying adult learners and things like self-reflective behaviors. It sure would be nice to have an expert on some of these things.

  2. I am a normal human being and as such despise the hours I spend grading. Well, sort of, and sort of not. Actually - I really feel that grading is where the rubber hits the road, so to speak; that it's where a hell of a lot of TEACHING happens. Without the grading part, how would I know what a class, overall, is getting or not getting? How would I read the amazing and personal stories my students come out with, knowing that each story is going to *me*? How can I correlate what happens in class with what happens in students' minds? I read their papers and refer back to class as I grade, and I talk about their papers as I lecture, so grading is core to the teaching.

    I think everyone who loves teaching, and everyone who's good at it, loves theater, loves the performative aspect of it; but that's only half the picture. The fun half. But the devil's in the details. I really can't imagine that decoupling these two parts of instruction is at all a good idea.
