Thursday, August 16, 2012

Feeling Beleaguered?

by Shirlee

Many people who work in Higher Education are feeling beleaguered. Signs are easy to find, but I have had a special vantage point from my position as chair of the PCC faculty Learning Assessment Council these last two years (2010-2012). I have heard lots and lots of voices from dedicated and hard-working educators -- people who feel pressed for the time and resources needed to do their jobs well, who feel under-appreciated even when they ARE doing their jobs well, and who are just not clear on what exactly it is that the reformers want, other than to heap blame.

By the way, I have stepped out of the chairperson position now. But I know that assessment will remain a faculty-owned and led initiative at PCC under the capable leadership of Michele Marden (last year's Vice-Chair of the Council) and Wayne Hooke (a founding member and brave man, who steps into the Vice-Chair job.)

Meanwhile, Michael Morrow appointed me to the union's executive council as one of his parting presidential acts, and the council ratified my new role as VP for PT faculty at Cascade. I am hoping to help make sure that the valuable assessment work taken up by faculty is both recognized AND FUNDED by PCC. (We also need it to be supported by professional development opportunities and the kind of in-house trainings that are possible through our campus-based TLCs.) I plan to continue this blog, but now as a way of sharing my particular point of view -- NOT as an official vehicle of the Learning Assessment Council. If I come across particularly interesting or under-represented points of view as I talk with people here at PCC, I will invite them to write as guest-bloggers. And I will routinely try to entice other Council members or administrative supporters to take a turn as a poster.

Other than that, it will just be me posting to this blog.

So people are feeling beleaguered. And when feeling beleaguered, I think it is always a good idea to ask:

WHY are things so hard now?
What (in the *&%^$##) is going on?

Questions like these invite an analysis. And lots of people in academics are GOOD at analysis. As a result, they come up with lovely explanatory tales.

I have a private little system for sorting these explanations. Here is my classification array.
  • Things are bad because of the bad actions of bad people.
  • Things are bad because thoughtless or foolish or short-sighted people have made bad decisions (or at least prevented us from making the intelligent, long-term changes that would make things better).
  • Things are bad because the education sector is undergoing HUGE changes due to changed world conditions and the fast pace of technological evolution. For an example of this one, check back here NEXT week!
For now I want to look at the first two categories a bit.

things are bad due to the bad actions of the bad people

There was a time in my life when I would have gravitated toward bad-people sorts of explanations.

My parents were very conservative politically, and hugely loyal members of the Catholic church. I came of age with the shouting matches characteristic of the generational rift of my time and social group. From my dad I heard: "If you don't love America, why don't you move to Russia?" From my mom: "How could you throw away the gift of faith?"  In my youthful arrogance I yelled back that the REAL patriots questioned the stupid choices of their political leaders, and that REAL seekers of spiritual truth questioned the claims of infallibility of their religious guides.

I told them that I was different from them. That I was better. That I knew that I was right and they were wrong. It wasn't LIBERALS who were ruining America, it was conservatives. It wasn't the atheists who had lost their moral compass, it was the non-thinking allegiance to any one oracle of truth (like the pope) that was the true moral evil...

Fast-forward four decades.... Both my dear father and loving mother are dead. We won't get to yell about politics any more, thereby ruining some increasingly rare family get-togethers. And so I can't tell them how I now see what I kept of their way of making sense of the world -- what our differing views nonetheless had in common.

Though I had swapped out the labels of the bad guys and the good guys -- Ronald Reagan was evil instead of the one true American Hero of our time! -- still, I kept the basic explanatory framework. Just like my parents, I pictured our earth as a battlefield between the forces of good and the forces of evil. Whenever I came across something that was bad or badly functioning, I saw it as the work of the bad, bad, bad people. Who should be stopped.....

Now, when I hear someone voice this kind of an explanation, I take a deep breath, close my eyes to try to call up my father's face, my mother's laugh.... and then turn away. Things can be bad without the need to conjure up bad people. This category of explanation just doesn't satisfy me any more. Anyone who looks for the bad guys in the conservative reformers (and/or their corporate masters), the greedy and overpaid administrators exploiting the heroic instructors (who alone care about the students in the classrooms), the feckless and spineless politicians (who will do anything to get elected).... well, I wish them well. But an analysis that says bad stuff is due only or mainly to bad people tends to point to making things better by doing something bad to the bad people. It leads to demonization, a loss of curiosity, and recommendations for actions that are fundamentally warlike and intended to cause harm... That is because in this paradigm, we have to destroy in order to save, to beat some people down (the evil ones) in order for good to triumph, to hurt in order to help.

I think these explanations are false, and the advice we can glean from them unhelpful (or worse.) These days, I gravitate toward the idea that everyone is doing their very best....

things are bad due to the bad decisions of the foolish or ignorant people
In conversations about the accountability movement -- and specifically the emphasis on high-stakes testing in the No Child Left Behind legislation -- lots of people deride the idea that we can successfully reform education simply through decisions about what and how to measure. In the Shanker piece, for example, the roles of childhood poverty, economic and social inequality, and problems with variable access to basic health care and good nutrition are all called out.... Education is put in context, and we are asked to consider the question of how we can expect teachers, by themselves, to effectively solve problems arising from much wider structural problems. The idea that reformers are foolish -- rather than evil -- seems like a step up to me. And it leaves us some empirical maneuvering room around the question in the background --

what ARE the causal factors relating to learning success and failure?

People can have different hunches -- and different research -- about the answer to that question without being cast as bad people. The main problem I see, however, with entries in this category of explanation is that they point to educator helplessness. If the reason so many of our students are failing in our classrooms (and with ever higher individual debt loads) is located in society-wide economic and social factors, then what are we to do? We are unfairly blamed.... but that is not very satisfying for very long, because this analysis means there is nothing WITHIN our classrooms we can do to help our students succeed....

I like best the explanatory theories that both refrain from demonization AND leave some conceptual room for effective action. That is why I favor explanations that are both BIG (including relevant context) AND maintain room for effective local solutions...

I think much of what we have built over multiple generations in "the west" has been challenged FAST by global changes in economics and politics, and the sweeping "disruptive" technologies for communication. I think it is our turn, as educators, to be disrupted. I think much of what has caused this huge and fast-moving dislocation is big picture and beyond local control. But I also think that there is much we can do -- especially by turning toward one another, pooling our skills and wisdom, and staying open to change -- to gladly and joyfully find ways to better serve our students.

Those people we are all here to serve.

I will tell you more about this kind of explanatory tale in my next blog.

Friday, May 25, 2012

Survey Results

by Shirlee

Thank you to all who participated in our "big conversation" this year. We got many thoughtful and considered responses to our questions. We have made a "map" of the most common thinking we encountered -- just short representations of more nuanced and rich comments, but hopefully of help in representing the range of thinking among PCC faculty and staff. The Learning Assessment Council will use this to guide our recommendation to the college.

Conversation Map: The Big and Very Big Questions
Here is a map to the most frequent REASONS for answering our questions

I)  For purposes of continual program improvement of instructing and learning:

Should all SACs address and assess all core outcomes? (And if not, then what?)

  • college is not just job-training AND the core outcomes are what marks the difference 
  • the core outcomes are what pull us together as a faculty and help students have an integrated educational experience
  • the process of assessing for core outcomes is time-consuming, annoying, frustrating.... and valuable
  • we need, however, to distinguish between graduates with 2 year degrees and graduates with certificates -- the core outcomes should only apply to degrees
  • our recent accreditation visit shows that by holding all SACs accountable for all core outcomes, we are on the right track; don't mess with success
  • all SACs do address all core outcomes at some level.... but they should only assess for the ones that are addressed in a major way 
  • limiting the scope of and responsibility for core outcomes 'guts the intent of the process' -- to guide us ALL in our planning and thinking
  • each CCOG should be linked to a college core outcome (so that we know where they are being addressed) BUT assessment should only be done where it can be done well, and the information used to feed program/discipline improvement

  • SACs are responsible for lots and lots of content... if we add curriculum for core outcomes, what will get bumped?
  • Instructors are experts in their fields -- let us do what we know how to do!
  • some core outcomes have absolutely no relevance to some SACs or classes OR are impossible to assess given skill level (e.g. ESOL or ABE)
  • some core outcomes are more core than others -- we should all be responsible for only the core of the core (most often mentioned: communication, critical thinking and self-reflection)
  • we have gen ed requirements and should use those to make sure graduates meet core outcomes -- then let others teach in their own disciplines
  • CTE SACs already have lots and lots and lots of assessments... why add more?
  • when the core outcomes were first introduced at PCC, we were promised that not every SAC would be responsible for all of them (only those relevant to field)
  • requiring all SACs to assess all core outcomes is guaranteed to make the assessment process meaningless -- broad and shallow, or else just done in order to keep administrators off instructor backs

II)  For purposes of  accountability (reporting to outside stakeholders):

Should all SACs address and assess all core outcomes? (And if not, then what?)

  • we shouldn't add a whole new layer of assessment to our process -- we have enough already!!
  • any additional hurdles (like exit tests or portfolios) will put a new block in front of students and depress graduation rates

  • it makes no sense at all to assess at the SAC level -- especially for LDC SACs that offer classes that are not required -- an institution-level assessment needs to be at the institution level
  • if outcomes are "out there," then assessments inside classes are irrelevant -- we need to track graduates
  • other colleges are doing more meaningful assessments, especially portfolios; we should, too

Thursday, May 17, 2012

The BIG Question

posted by Shirlee Geiger and Michele Marden

Should all SACs address and assess all 6 of PCC's core outcomes?
PCC's Learning Assessment Council (LAC) will be making a recommendation to the college at the end of Spring quarter regarding our assessment process, and we would like to have a LOT of input from PCC stakeholders. We have held three 2-hour sessions to get input, as well as a number of shorter information-sharing meetings in TLCs and in the meetings of various groups. We are putting out a survey to all faculty. For those who would like more information before weighing in, here is some background:
The survey will be available through May 2012 at: PCC BIG QUESTION Survey
Accreditation of the college requires faculty to assess students in two different ways:
1. For continual program/discipline improvement of student learning
2. For competency, to ensure students have met the course and degree outcomes

The faculty on the Learning Assessment Council decided to start with assessment for the purpose of continual program/discipline improvement, thinking this is what would matter most to instructors. After being told by the NWCCU (our accreditors) that we had to HURRY UP!! (in August 2010), we then asked CTE SACs (nursing, welding, bio-tech etc) to assess the outcomes of their degrees and certificates, after they have mapped them to the core outcomes. We hoped this would work for BOTH purposes of assessment.
LDC/DE SACs (history, philosophy, math, developmental ed) assess the Core Outcomes directly, since the Core Outcomes are the basis of the college’s transfer degrees. Want a refresher on PCC's Core Outcomes?

We have some concerns about whether our accreditors are going to be fully pleased with our process later on down the road.
Two Conflicts:
Conflict 1: Students who pursue a transfer degree take a variety of LDC courses to earn it. How can we be sure they have met the degree outcomes (ie, the core outcomes)?
Two possible solutions (perhaps there are more):
1.      The college may be able to make the argument to the accreditors that the LDC/DE SACs address and assess the transfer degree outcomes (ie, core outcomes) for continual improvement so broadly that students will be competent when they graduate. If so, we need LDC/DE SACs to incorporate most of the Core Outcomes. This is our current path.
2.     The college may decide to assess for competency in a different way. Options include a capstone course, a standardized exam before graduation, or a portfolio. With the impending Completion Contracts, where college funding will be based on graduation rates, putting up additional barriers to graduation may not be in the best interest of the college financially. Also, do we deny graduation if a student fails?

Conflict 2: A CTE program’s degree/certificate outcomes are easier to assess for competency, since students take specified courses that address the degree/certificate outcomes that have been mapped to the Core Outcomes. However, some CTE programs do not have a degree/certificate outcome for one (or more) of the Core Outcomes, and expect the LDC/DE courses students are required to complete for their degree to cover the missing Core Outcome(s).

Three possible solutions (perhaps there are more):
  1. The college may decide that LDC/DE disciplines should meet CTE program needs. If so, we need the LDC/DE SACs to incorporate most of the Core Outcomes in their courses.

  2. The college may decide to take away students’ freedom to pick their courses, requiring them to take courses that fit the missing core outcomes. If so, students lose what many value about a college degree – development of the person through their individualized choices. Also, there is a danger that this type of marginalization of the core outcomes to specific LDC/DE courses would go against the purpose of the Core Outcomes, which are intentionally broadly defined so that they are applicable in many different ways for many different programs/disciplines.

  3. The college may decide that the CTE programs need to have at least one degree outcome that would map to each of the core outcomes.
The faculty Learning Assessment Council is following the national lead of our union, insisting that we STAND AGAINST the "de-skilling" of the faculty role. At PCC, we have formed a strong partnership with our administration, which has trusted faculty to take the lead in ensuring quality education for our students through relevant and well-crafted LOCAL assessment of learning outcomes. This means faculty will need to stay informed of the changing accreditation requirements, and participate in shaping PCC's response to the swirling changes blowing through our sector of education, both nationally and internationally. Thank you for taking the time to think about this issue, and making sure your experiences and skills help shape the decision on these questions here at PCC.

Thursday, April 26, 2012

Is assessment of student learning a waste of our time?

by Shirlee

I teach a class in philosophy of science. One of my colleagues here at PCC wrote the text I use, and I like it a lot. The class is aimed at helping students be thoughtful consumers of science journalism by giving a basic overview of scientific method and logic in the context of work done in philosophy and history of science. Part of the focus is on "questionable" science  -- claims made in mass media or online about non-standard or fringe claims and practices. One intended outcome of the class is that students, out in the world reading or hearing about such claims, will identify the kinds of questions to keep in mind when making up their minds about the latest (expensive!!) cures and potions, healers or fortune-tellers.

Using some stuff from that class, I am offering you a little pre-test.

There is a group of practitioners who offer a procedure that they say will improve your life in X ways. The procedure is very expensive. When asked how we can tell that it is effective, the practitioners offer to make available the stories of people who earlier paid for the procedure, and say it made a huge difference in their lives. The practitioners also say that they have all had extensive training, and hence are experts in applying the procedure. They can tell when it is working, and know what they are doing. They do say, however, that belief in the efficacy of the procedure is necessary for it to be effective. Some of the requirements are time-intensive and require discipline on the part of the buyer. If people can't apply themselves and stick with it, the procedure won't work.

Here is my pre-test for you:

True or False:
    Testimonials from satisfied customers are a reliable source of evidence about a procedure's effectiveness.

True or False:
    The collective belief in a group of practitioners that their procedure is effective is good evidence that it is effective.

True or False:
    We can trust that, if someone has had training in a procedure, s/he is the best (perhaps only) person who can judge the effectiveness of the procedure.

At the end of my class, I have high hopes that most students will be able to see why the best answer to these questions is "false" and will keep that in mind when they are being "pitched" to by people who stand to profit from their gullibility.

In my work with assessment of student learning outcomes, however, I have been dismayed by how many of my educational colleagues don't see the application of these critical thinking basics to the "accountability movement." When few people had access to Higher Education, the only people who could say whether an education "works" were the people who were supplying the education -- the educators themselves. But in recent days, especially as education has been getting ever more expensive, social scientists have been responding to a request for verification. DOES education deliver the X as promised? Especially in Higher Ed, the X includes the skill of critical thinking.... Is it true that people with Higher Ed credentials are better critical thinkers than those without such credentials?

Some people in Higher Ed have been annoyed that the question has even been asked. (I know some people who do acupuncture who feel the same way -- after all, the practice has a long and distinguished history.) But some people have found the question tantalizing, and have started using the techniques of social science to answer it...

As my text says, the first step in investigation is to get a clear understanding of the item being investigated. That has been hard -- everyone is in favor of critical thinking, but it is not clear they are all in favor of the same thing. Still, with lots of people asking what that phrase means, some standard answers have started bubbling up. Once it is defined, the next step is to figure out a way to investigate. Lots of people came up with lots of proposals.... and again, one standard way has bubbled up as, if not perfect, then the best way so far. The next thing to do is create a way to do blind testing, with representative samples and respectable statistical analysis of the results.

Then we go look at the results.

We all know, from the history of science, that this kind of investigation is really good for upsetting orthodoxies and pissing off people in positions of power. I think we are past burning people at the stake for publishing results..... but maybe there are contemporary analogues.

Science does move slowly, and no one study is conclusive. Still, the results on Higher Ed are coming in even faster than the results on Acupuncture, now that they are both being scrutinized using the scientific method. Acupuncture is looking pretty good for a limited range of applications. How is Higher Ed doing?

If you are interested in seeing the results of this kind of investigation of Higher Ed, one book is indispensable reading: Academically Adrift.

I know in my class, I tell students that if someone making money from selling X is not interested in looking at the results of investigations, that by itself is a red flag to me.... Do you think you are effective at teaching critical thinking, as a college teacher? Want to look at the evidence?

A group of faculty formed a reading club at PSU to go through the results, and have been reading Academically Adrift together -- then talking about what these results of social science scrutiny mean for them, in the classroom. Want to do that at PCC? Let me know, and we will start reading groups in the TLCs!

But you had best be brave.....

Sunday, January 8, 2012

want some money?

by Shirlee

The Learning Assessment Council now has a budget!!

This is our 4th year of existence, and up until now our mission has been a bit temporary and provisional: we were helping PCC feel its way forward into the new era of "accountability" in higher ed. We insisted that assessment be respectful to both teachers and students, and that the efforts should not be wasted -- but should lead to meaningful and practical results that could actually be used to improve teaching and learning.... This process meant that faculty had to be deeply involved in designing assessments, as well as implementing them -- since it is faculty who will be most directly involved in implementing any program or instructional changes suggested by assessment results.

Our recommendation that assessment be faculty-owned and faculty-driven has, indeed, resulted in more work for faculty members -- which was not a universally popular consequence of our advice. But this fall, talking with people who had been through one or two of the assessment cycles, it was clear that opinion was changing. Enough people had learned enough interesting things from their program or discipline assessments, and were willing to talk about them, to change the energy. And then our work was recognized by our accrediting agency, NWCCU, which wrote that we were substantially in compliance with their standard on assessment. Hurray for us!!

And, to top it off, the council got our very own budget. Not a lot (who gets what they really need these days?) But not nothing, neither. So, in that very first meeting to discuss what to do with the $$, we all knew immediately what we had to do:

--offer it up to support the creative and exciting assessment activities being cooked up by PCC SAC members.

(OK, we did briefly consider taking the entire council to Hawaii, but we gave that up pretty fast....)

It would have been lovely to have this all ready as SACs were making their assessment plans in the fall. But life doesn't always offer up perfect timing... So we decided to offer it late, rather than not at all, on the assumption that people are never unhappy about getting more money than they anticipated.

The time line is tight, though. We are asking for applications by Feb 3. We have about $12,000 to give away, and we want to give it away in chunks up to $2000 to support SAC assessment projects. What might you use it for? Here are just a few ideas.

  • To invest in an assessment instrument that has been developed specifically for your field.
  • To hire an assessment expert or consultant to advise you on assessment design.
  • To acquire some software that would make your assessment cycle more effective or efficient.
  • To send a SAC member to a conference on assessment in your particular field.
  • To hire a psychometrician to help interpret assessment results.
These are just a few ideas -- and not particularly creative ones -- but I am hoping you get the idea. PCC is just one player in the world of Higher Ed, as we are all seeking to move into "evidence-based educational practice." There is a lot going on out there, and some of it would perhaps be particularly helpful to you, if you just had a bit of money to access it.

The application documents are available both from the Learning Assessment Council page and the Staff Development page (intranet).

Assessment coaches will be contacting SAC chairs to see if they have any interest in filing an application, and to offer help. If you want to get more information, or float an idea, please call:

Shirlee Geiger chair of the Learning Assessment Council 971 722-4659, or
Michele Marden, vice-chair 971 722-4786

Tuesday, December 20, 2011

now tell me again....who are the good guys?

There has been a lot of attention to for-profit colleges in the news of late, with a story-line that plays especially well with the generally left-leaning audience of educators. It goes something like this:

For-profit colleges exist for profit and -- just like in the general world of capitalism and the self-interested (aka selfish and greedy) competition of the marketplace -- the people who run them are willing to use some rather suspicious tactics when going for that profit. For example, for-profit colleges charge huge tuitions, and use blatantly false claims when assuring students that highly paid employment after graduation means tuition debt makes sense. They inflate the rate of completion and graduation of their students. And then they inflate the rate of graduates working in their field, along with the money they make in those jobs. They lie -- in order to make a profit. And then they are unconcerned about the wreck they make of the lives of the students whose money they so callously take...As long as they make money, that is all that matters.

I have heard this narrative from lots of places, and it made sense to me. The implicit contrast, of course, is with the noble people who work in the not-for-profit world of higher ed -- willingly forgoing the higher pay of the private sector in order to pursue the calling of seeking knowledge for its own sake, and passing it on to the eager young minds waiting to be shaped and guided....That would be me and my colleagues at PCC.

Alas, I had this little vision of the good guys and bad guys of higher ed shaken up last year at the American Association of Community Colleges, when I went to a session put on by Peter P. Smith. I went to hear him only because of his bio. He had served as the president of a community college in Vermont, and then as the founding president of the California State University at Monterey Bay. But then he left the noble not-for-profit world of higher ed to join Kaplan (!) as a senior vice-president. (Kaplan is the largest provider of for-profit educational services in the world at the moment.) This, it seemed to me, was a MAJOR act of disloyalty and betrayal. How could anyone do that!?? How could he live with himself?!

I don't know exactly what I expected when I went to hear him.... but whatever it was, it wasn't what I got. First you need to know that a lot of marketing goes on at the AACC. There is an entire cavernous hall of vendors shilling expensive products, in row after row after row of booths. Lots of glossy pieces of paper get distributed. Logos are everywhere. Signs of the money to be made in higher ed are ubiquitous. The pure nobility of the pursuit of truth gets a bit lost in the hustle. In this context, it is easy to get a bit cynical. But 5 minutes into the presentation by this turncoat betrayer of the not-for-profit nobility of education, it was clear to me.... this guy is a serious idealist. It sounded to me like he believes more deeply in the intrinsic value of education than the most starry-eyed philosopher of education I ever met. I was flabbergasted! My conceptual categories were all confused! My sense of who is who was turned upside down. I felt that kind of vertigo that comes from having basic beliefs challenged.....

It has taken me a while to digest what I heard from him. He has a blog if you want to go and read his thinking: Peter P. Smith (He also has a book, but I haven't read it yet -- Harnessing America's Wasted Talent.) This all came back to me when I ran into a short article in Inside Higher Ed that quoted from him extensively. I am going to boil everything down, and no doubt oversimplify this. But here is the message I get from him, in a nutshell.

  • The world needs educated people now.
  • A lot.
  • The education techniques currently being used were good enough in previous eras (when we only needed an educated elite). They don't work now.
  • The accountability movement is all about bringing education into the information age, and finding ways to meet the new demands:
    • 100% of our citizens highly educated with skills in collaboration, communication, and critical thinking.
  • The biggest obstacle to developing new and effective education to meet the changed demands on higher ed is professional educators who resist change, and use their organizations to resist change effectively.
  • The for-profit education sector is the newest, and the forces resisting change are the least well organized there.
  • SO the for-profit education sector can and will lead higher ed into identifying and recognizing effective education techniques.
In this scenario, the people who are personally profiting in the non-profit education world -- the teachers and advisers and admins and APs like you and me -- are the major impediment to education that works.

Here at PCC, the Learning Assessment Council has adopted a strategy that goes against the smart and idealistic claims of Peter P. Smith. We think that faculty and CC staff can serve to drive a change to more effective education, not just stand in the way. We have charged YOU with crafting assessment strategies to see how you can meet student needs ever better. Still, I can see why Peter P. Smith has placed his bets against us. People who have benefited from the ways things have been done for a long time are quite often the most resistant to changing them. This phenomenon can be observed in industries and organizations across all sectors and around the world, as we have all scrambled to catch up with the changes we have witnessed the last two decades.

Life is changing in Higher Ed.... Help shape how PCC responds to the new demands. Get active in your SAC's program/discipline assessment project! Our students -- and the world that needs their skills -- will be the beneficiaries....

Wednesday, December 7, 2011

Data IDs Best Practices

by Shirlee

Sometimes I try to describe the accountability movement in higher education in words that can fit into the proverbial nutshell. That's when I rely on an analogy with the medical profession. Here is how it goes:

  • The practice of medicine has traditionally been considered a profession where the practitioners (the doctors) are considered to be "experts" who we are all asked to trust.
  • As a result, there have been few ways for someone "shopping" for a doctor to meaningfully compare one M.D. with another.
  • Even so, it is known that some doctors do, in fact, get better results -- and often at lower costs -- than other doctors, treating the same conditions in similar patients.
  • The cost of medical care has skyrocketed of late, and the mechanism we have created for payment (work-based insurance) is leading to huge social disparities, with a clear consensus that something has to be done, even as there is no consensus over what that is.
  • Rumblings have been going on for a while now that one way to contain costs and increase access is to assess patient outcomes, in order to identify BEST PRACTICES and then make that information available to patients and taxpayers.

In the above story, we can change all mention of doctors to professors, and patients to students, and everything works the same....Really.

  • Teaching in a college or university has traditionally been considered a profession where the practitioners (the professors) are considered to be "experts" who outsiders have been asked to trust.
  • As a result, there have been few ways for someone "shopping" for a college or teacher to meaningfully compare one option with another.
  • Even so, it is known that some colleges and teachers do, in fact, get better results -- and often at lower costs -- than others, even when the student populations are very similar.
  • The cost of higher education has skyrocketed of late, and the mechanism we have created for paying (student debt) is leading to huge social disparities, with a clear consensus that something has to be done, even as there is no consensus over what that is.
  • Rumblings have been going on for a while now that one way to contain costs and increase access is to assess student outcomes, in order to identify BEST PRACTICES and then make that information available to students, their families, and taxpayers.
Just like practitioners in the medical field, those of us in Higher Ed are being asked to:
  • expand access to our services
  • get ever-better outcomes for those who enter our doors
  • and do this with less money per student.

There are, however, points of dis-analogy between the two fields.
  • There are usually fairly clear indicators of success or failure (like mortality rates) with medical procedures -- but success is harder to define in Higher Ed. If someone takes some community college classes, doesn't get a degree, but does get a promotion at work, is that success? Or is it failure?
  • In the medical field there are some service-payers that are so large, and who have been keeping records for so long, that there is LOTS of data to be mined. The biggest and best of these data piles comes from Medicare and Medicaid -- but for higher ed, there is no comparable keeper-of-records who could furnish us with data to study. Instead, we are in the early stages, via assessment of learning outcomes, of gathering that data.
Now I mention all this because I read today that the HUGE pile of data on patient outcomes is about to be released, in a format that will make it especially searchable. Here is the link, plus a short excerpt:

"The government announced Monday that Medicare will finally allow its extensive claims database to be used by employers, insurance companies and consumer groups to produce report cards on local doctors — and improve current ratings of hospitals.

"By analyzing masses of billing records, experts can glean such critical information as how often a doctor has performed a particular procedure and get a general sense of problems such as preventable complications.

"Doctors will be individually identifiable through the Medicare files, but personal data on their patients will remain confidential. Compiled in an easily understood format and released to the public, medical report cards could become a powerful tool for promoting quality care.

"There is tremendous variation in how well doctors do, and most of us as patients don't know that. We make our choices blind," said David Lansky, president of the Pacific Business Group on Health. "This is the beginning of a process to give us the information to make informed decisions." His nonprofit represents 50 large employers that provide coverage for more than 3 million people."

Notice that the ratings are happening on two levels -- the hospitals (analogous to the colleges) and the doctors (analogous to the instructors). Many colleges have already taken steps to help create a data set that can be used to compare one institution to another, by using one of the standardized tests (usually of critical thinking and communication) that have been created to allow just such comparisons. Instead of that route, we here at PCC have asked SACs to create or adopt assessment instruments that can deliver the info they need to continually improve instruction. This gives us locally useful information, but no way to compare ourselves, as a college, to others. But so far, neither approach (standardized test, customized SAC assessment) will provide a way to meaningfully compare one instructor to another, the way the Medicare info will allow comparisons of one doctor to another.... Still, I say, any data that is aggregated can be disaggregated. And I think it is wise to attend to trends in the medical world as hints of what will be coming our way.

Some of all of this makes me joyful. The faster we can figure out -- and share around -- what works, the more our students will learn. According to an article Linda Gerber sent my way, there is now more student debt than credit card debt in the US of A. This is a staggering realization. Go read this and weep:
But some of all of this makes me wonder how many of the traditional ways of higher ed will be changed beyond recognition in this process. ...

Evidence-based educational practices are a new trend, just like evidence-based medical practices. When my oncologist, four years back, laid before me the success rates of various treatment options for my kind of cancer, and helped me poke through the list to decide what to do, I was very grateful for this trend. (Since this pre-dates PCC's insurance for part-time faculty, my insurance wasn't that great -- it was an individual policy, so I didn't get the advantage of group rates -- and cost was one of the factors I considered.) Will the day come when there is an analogous approach to selecting a college or college teachers -- when a high school college adviser lays out the same kind of data on rates of learning for college writing or critical thinking, and compares what is available to the student's aspirations and budget?

And should such a day come, how will PCC look as an educational choice?

These are among the interesting questions of our times....