Tuesday, December 20, 2011

now tell me again....who are the good guys?

There has been a lot of attention paid to for-profit colleges in the news of late, with a story-line that plays especially well with the generally left-leaning audience of educators. It goes something like this:

For-profit colleges exist for profit and -- just like in the general world of capitalism and the self-interested (aka selfish and greedy) competition of the marketplace -- the people who run them are willing to use some rather suspicious tactics when going for that profit. For example, for-profit colleges charge huge tuitions, and use blatantly false promises of highly paid employment after graduation to assure students that taking on tuition debt makes sense. They inflate the completion and graduation rates of their students. And then they inflate the rate of graduates working in their field, along with the money those graduates make in those jobs. They lie -- in order to make a profit. And then they are unconcerned about the wreck they make of the lives of the students whose money they so callously take... As long as they make money, that is all that matters.

I have heard this narrative from lots of places, and it made sense to me. The implicit contrast, of course, is with the noble people who work in the not-for-profit world of higher ed -- willingly forgoing the higher pay of the private sector in order to pursue the calling of seeking knowledge for its own sake, and passing it on to the eager young minds waiting to be shaped and guided....That would be me and my colleagues at PCC.

Alas, I had this little vision of the good guys and bad guys of higher ed shaken up last year at the American Association of Community Colleges annual convention, when I went to a session put on by Peter P. Smith. I went to hear him only because of his bio. He had served as the president of a community college in Vermont, and then as the founding president of the California State University at Monterey Bay. But then he left the noble not-for-profit world of higher ed to join Kaplan (!) as a senior vice-president. (Kaplan is the largest provider of for-profit educational services in the world at the moment.) This, it seemed to me, was a MAJOR act of disloyalty and betrayal. How could anyone do that!?? How could he live with himself?!

I don't know exactly what I expected when I went to hear him.... but whatever it was, it wasn't what I got. First you need to know that a lot of marketing goes on at the AACC. There is an entire cavernous hall of vendors shilling expensive products, in row after row after row of booths. Lots of glossy pieces of paper get distributed. Logos are everywhere. Signs of the money to be made in higher ed are ubiquitous. The pure nobility of the pursuit of truth gets a bit lost in the hustle. In this context, it is easy to get a bit cynical. But five minutes into the presentation by this turncoat betrayer of the not-for-profit nobility of education, it was clear to me.... this guy is a serious idealist. It sounded to me like he believes more deeply in the intrinsic value of education than the most starry-eyed philosopher of education I have ever met. I was flabbergasted! My conceptual categories were all confused! My sense of who is who was turned upside down. I felt that kind of vertigo that comes from having basic beliefs challenged.....

It has taken me a while to digest what I heard from him. He has a blog if you want to go and read his thinking: Peter P. Smith. (He also has a book, but I haven't read it yet -- Harnessing America's Wasted Talent.) This all came back to me when I ran into a short article in Inside Higher Ed that quoted him extensively. I am going to boil everything down, and no doubt oversimplify. But here is the message I get from him, in a nutshell.

  • The world needs educated people now.
  • A lot.
  • The education techniques currently being used were good enough in previous eras (when we only needed an educated elite). They don't work now.
  • The accountability movement is all about bringing education into the information age, and finding ways to meet the new demands:
    • 100% of our citizens highly educated with skills in collaboration, communication, and critical thinking.
  • The biggest obstacle to developing new and effective education to meet the changed demands on higher ed is professional educators, who resist change and use their organizations to resist change effectively.
  • The for-profit education sector is the newest, and the forces resisting change are the least well organized there.
  • SO the for-profit education sector can and will lead higher ed into identifying and recognizing effective education techniques.
In this scenario, the people who are personally profiting in the non-profit education world -- the teachers and advisers and admins and APs like you and me -- are the major impediment to education that works.

Here at PCC, the Learning Assessment Council has adopted a strategy that goes against the smart and idealistic claims of Peter P. Smith. We think that faculty and CC staff can drive the change to more effective education, not just stand in the way. We have charged YOU with crafting assessment strategies to see how you can meet student needs ever better. Still, I can see why Peter P. Smith has placed his bets against us. People who have benefited from the way things have been done for a long time are quite often the most resistant to changing them. This phenomenon can be observed in industries and organizations across all sectors and around the world, as we have all scrambled to catch up with the changes of the last two decades.

Life is changing in Higher Ed.... Help shape how PCC responds to the new demands. Get active in your SAC's program/discipline assessment project! Our students -- and the world that needs their skills -- will be the beneficiaries....

Wednesday, December 7, 2011

Data IDs Best Practices

by Shirlee

Sometimes I try to describe the accountability movement in higher education in words that can fit into the proverbial nutshell. That's when I rely on an analogy with the medical profession. Here is how it goes:


  • The practice of medicine has traditionally been considered a profession where the practitioners (the doctors) are considered to be "experts" who we are all asked to trust.
  • As a result, there have been few ways for someone "shopping" for a doctor to meaningfully compare one M.D. with another.
  • Even so, it is known that some doctors do, in fact, get better results -- and often at lower costs -- than other doctors, treating the same conditions in similar patients.
  • The cost of medical care has skyrocketed of late, and the mechanism we have created for payment (work-based insurance) is leading to huge social disparities, with a clear consensus that something has to be done, even as there is no consensus over what that is.
  • Rumblings have been going on for a while now that one way to contain costs and increase access is to assess patient outcomes, in order to identify BEST PRACTICES and then make that information available to patients and taxpayers.

In the above story, we can change all mention of doctors to professors, and patients to students, and everything works the same....Really.

  • Teaching in a college or university has traditionally been considered a profession where the practitioners (the professors) are considered to be "experts" who outsiders have been asked to trust.
  • As a result, there have been few ways for someone "shopping" for a college or teacher to meaningfully compare one option with another.
  • Even so, it is known that some colleges and teachers do, in fact, get better results -- and often at lower costs -- than others, even when the student populations are very similar.
  • The cost of higher education has skyrocketed of late, and the mechanism we have created for paying (student debt) is leading to huge social disparities, with a clear consensus that something has to be done, even as there is no consensus over what that is.
  • Rumblings have been going on for a while now that one way to contain costs and increase access is to assess student outcomes, in order to identify BEST PRACTICES and then make that information available to students, their families, and taxpayers.
Just like practitioners in the medical field, those of us in Higher Ed are being asked to:
  • expand access to our services
  • get ever-better outcomes for those who enter our doors
  • and do this with less money per student.

There are, however, points of dis-analogy between the two fields.
  • There are usually fairly clear indicators of success or failure (like mortality rates) with medical procedures -- but success is harder to define in Higher Ed. If someone takes some community college classes, doesn't get a degree, but does get a promotion at work, is that success? Or is it failure?
  • In the medical field there are some service-payers that are so large, and who have been keeping records for so long, that there is LOTS of data to be mined. The biggest and best of these data piles comes from Medicare and Medicaid -- but for higher ed, there is no comparable keeper-of-records who could furnish us with data to study. Instead, we are in the early stages, via assessment of learning outcomes, of gathering that data.
Now I mention all this because I read today that the HUGE pile of data on patient outcomes is about to be released, in a format that will make it especially searchable. Here is the link, plus a short excerpt:
http://my.earthlink.net/article/hea?guid=20111205/3c47d411-d964-4fce-933a-d1d1111584f2

"The government announced Monday that Medicare will finally allow its extensive claims database to be used by employers, insurance companies and consumer groups to produce report cards on local doctors — and improve current ratings of hospitals.

"By analyzing masses of billing records, experts can glean such critical information as how often a doctor has performed a particular procedure and get a general sense of problems such as preventable complications.

"Doctors will be individually identifiable through the Medicare files, but personal data on their patients will remain confidential. Compiled in an easily understood format and released to the public, medical report cards could become a powerful tool for promoting quality care.

"There is tremendous variation in how well doctors do, and most of us as patients don't know that. We make our choices blind," said David Lansky, president of the Pacific Business Group on Health. "This is the beginning of a process to give us the information to make informed decisions." His nonprofit represents 50 large employers that provide coverage for more than 3 million people."


Notice that the ratings are happening on two levels -- the hospitals (analogous to the colleges) and the doctors (analogous to the instructors). Many colleges have already taken steps to help create a data set that can be used to compare one institution to another, by using one of the standardized tests (usually of critical thinking and communication) that have been created to allow just such comparisons. Instead of that route, we here at PCC have asked SACs to create or adopt assessment instruments that can deliver the info they need to continually improve instruction. This gives us locally useful information, but no way to compare ourselves, as a college, to others. But so far, neither approach (standardized test, customized SAC assessment) will provide a way to meaningfully compare one instructor to another, the way the Medicare info will allow comparisons of one doctor to another.... Still, I say, any data that is aggregated can be disaggregated. And I think it is wise to attend to trends in the medical world, as hints of what will be coming our way.
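(For the technically inclined, here is a minimal sketch, in Python with pandas, of what I mean by "aggregated data can be disaggregated." The records, names, and numbers are all invented for illustration -- this is not any actual PCC or Medicare data set.)

```python
# A minimal sketch of the "aggregated data can be disaggregated" point:
# the same student-level records can be rolled up to the college level
# or drilled down to individual instructors. All data here is invented.
import pandas as pd

# Each row: one student's score on a common outcome assessment.
scores = pd.DataFrame({
    "college":    ["A", "A", "A", "B", "B", "B"],
    "instructor": ["Jones", "Jones", "Lee", "Smith", "Smith", "Chu"],
    "score":      [78, 82, 91, 70, 65, 88],
})

# The institution-to-institution comparison (like the standardized tests):
print(scores.groupby("college")["score"].mean())

# ...and the instructor-to-instructor comparison the very same records
# permit (like the Medicare doctor report cards):
print(scores.groupby(["college", "instructor"])["score"].mean())
```

The same six rows answer both questions; the only thing that changes is the level of the grouping.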

Some of all of this makes me joyful. The faster we can figure out -- and share around -- what works, the more our students will learn. According to an article Linda Gerber sent my way, there is now more student debt than credit card debt in the US of A. This is a staggering realization. Go read this and weep: http://www.usatoday.com/money/perfi/college/2010-09-10-student-loan-debt_N.htm
But some of all of this makes me wonder how many of the traditional ways of higher ed will be changed beyond recognition in this process. ...

Evidence-based educational practices are a new trend, just like evidence-based medical practices. When my oncologist, 4 years back, laid before me the success rates of various treatment options for my kind of cancer, and helped me poke through the list to decide what to do, I was very grateful for this trend. (This pre-dated PCC's insurance for part-time faculty; my individual policy wasn't that great -- no group rates -- and cost was one of the factors I considered.) Will the day come when there is an analogous approach to selecting a college or college teachers? -- when a high school college adviser lays out the same kind of data on rates of learning for college writing or critical thinking, and compares what is available to the student's aspirations and budget?

And should such a day come, how will PCC look as an educational choice?

These are among the interesting questions of our times....

Wednesday, November 23, 2011

NWCCU pressures us.... and who pressures NWCCU?

by Shirlee

Assessing student learning outcomes through PCC's SACs was put in place in response to pressure from our accrediting agency, the Northwest Commission on Colleges and Universities. The NWCCU upped the pressure on us in 2010 by saying that PCC was not in compliance with its standard on assessment for on-going improvement of teaching and learning. Many people here at PCC then stepped up to the plate in 2010-11 to create and implement assessment plans and to file reports on what they learned. For many faculty members, it was the first time they had ever directly engaged with accreditation.

Even though the pressure on us has lessened a bit, you still might be interested in knowing that the six regional accrediting agencies are feeling some pressure of their own. There are three separate bodies that each take a chunk of oversight of higher ed -- (1) the Federal government, through the Dept of Education, (2) state governments, who write and enforce a wide variety of requirements and standards, often directed at technical or vocational education, and (3) the six regional accrediting agencies, including NWCCU, who require a routine of self-study and then provide peer review. These three bodies have intertwining connections. For example, eligibility for financial aid from the Feds is tied to attendance at a college or university accredited by one of the accrediting agencies.

It is this financial connection that is one prominent focus of a new advisory committee to the Dept of Education. In their report, the committee explores the reasons for and against severing this link between accreditation and financial aid. One possible future route they outline is to allow states to monitor educational quality -- reducing the power and role of both the federal Dept of Education and the accrediting agencies. Another possible future route puts more power into the Dept of Ed, and weakens the state role and the importance of the accrediting bodies.

The report is only 11 pages long, so you might just give it a glance. To get to the report, please navigate to insidehighered.com and then click on the link labeled "discussion draft." What gets decided is ultimately going to affect what we do here at PCC, so it might be of interest to you for that reason. But you might also find some sort of emotional resonance in seeing the agency that pressured us to hasten our assessment process now experiencing its very own pressure from this advisory committee.

If you are a good and compassionate person, you may feel some sympathy for them. Or, if you are another kind of person, you may have another kind of reaction.....

Tuesday, November 15, 2011

Occupy and Higher Ed: The Cost Accountability Movement

by Shirlee

The Occupy Wall Street movement has drawn attention to many problems that have been with us for a long time, but without achieving headline status until now. Wealth inequality and its effect on democracy, for example. The distribution of profits in the financial sector, which raises the question of what those firms contribute to our collective well-being to justify such profits. And the increasing likelihood that the next generation of adults will not be able to achieve the same level of prosperity and security as their parents....

In connection with that last concern, Occupiers have been drawing attention to the major shift in the funding sources for higher ed over the past three decades -- from grants and scholarships for students, supplementing relatively generous public funding of the institutions themselves, to increasing reliance on student debt. The accumulation of student debt is being likened by Occupiers to indentured servitude, even a kind of debt-slavery. Average debt for students achieving a bachelor's degree is cited as between $23,000 and $25,000 -- and this at a time when unemployment is running high.

So why does higher education cost so much? Where does the money go? These are questions that belong in the Cost Accountability wing of the Accountability Movement. PCC's Learning Assessment Council exists in response to the accountability movement, and the effort to respond to the spiraling costs of Higher Ed has been part of our program from the start. As we here at PCC are busily assessing student learning outcomes, with an eye to making sure our students are learning what we promise to be teaching them, the plan is to locate and share around the instructional practices that are most effective. In this way we will be able to serve more students, more effectively, with less cost per student -- the 3-sided demand on Community Colleges set down by the Obama Administration. The hope is for the quality of our instruction to go up (as measured by achievement of learning outcomes) even as the cost of providing that quality goes down. This is our indirect response to the problem of student debt. Is it enough?

Inside Higher Ed has a piece on the Occupiers' anger about college debt. And in response to student agitation about college loan debt, the Obama Administration has posted a description of its plans. See: https://wwws.whitehouse.gov/

But if you are just plain interested in why Higher Ed costs so much these days, here is the source I recommend:
Delta Cost Project.

Here is a juicy tidbit from page 17:
Among public institutions, spending per student for instruction declined between 2002 and 2005, most dramatically in public community colleges. When state funds increased in 2006, instructional spending increased as well, but not enough to make up for losses in prior years.

Delta Cost Project has charts and graphs comparing costs and funding sources of community colleges to other sectors of Higher Ed. Student debt is increasing partly due to increased costs of education, but the larger factor is what they call "cost shifting" -- shifting from taxpayer funding to individual funding. The increased costs attributable to instructional versus non-instructional institutional costs are also charted. I think it is interesting reading.

We work all day, long and hard, on behalf of our students. It is important to me that we are not just saddling them with debt that will decrease the quality of their future lives -- instead of increasing the quality through all those intangibles associated with being an "educated person." For example, I try to imagine my life without my college experiences, and I cannot do it. College set me on my life's path, and made me into the person I think of as myself. But I graduated with no debt at all. I paid my way by working, with the help of grants and scholarships. That is near impossible these days... Would I be so pleased with my education if it had taken me 30 years to pay it off?

I fear that higher ed looks different with a monthly debt payment that appears to go on forever, and with no good job prospect in sight. I think as educators, we need to know where the money goes -- the money that is at least in part being spent today on our paychecks, but will be paid back by our students long into their futures. And we need to know that it is not just knowledge and skill we are providing our students.... it is years of a repayment plan, too.

Tuesday, November 8, 2011

Grading Gets Outsourced to India

by Shirlee

The way PCC is approaching the new demand for assessment in Higher Education is only one of several different models that have been adopted across the landscape of colleges and universities in the U.S. These differences became apparent as members of the Learning Assessment Council started to scout around in our first year of existence (2008-9), which was provided to us as a "year of inquiry." Here are short descriptions of some of the ways colleges and universities have decided to go:

(1) Many institutions have adopted a single high-stakes standardized test, designed to measure core learning outcomes like communication and critical thinking. These are usually "value added" measurements, intended to show how much development there has been in student competency from the first term of entrance to an exit with a degree. These tests allow comparison of one institution with another, which is part of the call for "accountability" mentioned in last week's blog.

This first approach is based on an assumption that assessment of student learning is a separate kind of activity from instruction. Teachers may be skilled at teaching, according to this thinking, but they are not experts in measuring learning. Measurement expertise is called psychometrics, and psychometric experts provide the design and on-going re-design of the major competing high-stakes tests used by colleges and universities in this first model.

(2) In the second model, the idea continues to be that we should leave assessing of learning to the assessment experts, in order to disrupt teachers' lives the least amount possible. In this case, though, assessments are customized to different SACs or departments via the work of a team of psychometricians who are called in to (i) interview faculty about their specific student learning outcomes, and then (ii) design customized assessments to be used by all instructors in that subject area. In this way, for example, experts might come and consult with the PCC history SAC to determine what "critical thinking" means in the area of history, across PCC's history curriculum. Then all the instructors teaching a given history class would be required to administer the test the psychometricians came up with, and the results would be examined to see what they say about the effectiveness of history instruction at PCC. This model gives assessment results that can be used for continual program improvement (the other main purpose of assessment, as mentioned in last week's blog). The major company that has emerged to do business in this second model is EduMetry.

(3) Many institutions created a new administrative office, and put someone in charge of organizing faculty assessment work. Often, this office oversees the adoption of an expensive assessment software system, and then trains faculty (usually department chairs or supervisors) in how to use it. The software system ensures consistency of reporting, and eases bundling of assessment results for display to the accrediting bodies. For this approach to work, the administrator has to have the power to compel reluctant faculty to both do assessments and then learn how to report results using the system. Faculty are involved to a greater extent in assessing than in either of the first two models, but they tend to be viewed by administrators as reluctant participants likely to drag their feet....

(4) Some colleges and universities have decided that assessing is a critical component of the instructional process, and must be kept as part of the bundle of teaching tasks. The idea here is that faculty are deeply invested in successful student learning, and when they see the connection between assessment and improved learning outcomes, they will embrace assessment as a new and useful tool for doing their important jobs even better. This model then leaves assessment in the hands of faculty, in the form of an assessment committee or council. PCC was set on this path through the recommendation to the college made by the faculty Learning Assessment Council that program/discipline assessment be the responsibility of SACs, and implemented as an ongoing component of Program Review. This last model is the only one that is fully respectful of faculty professionalism and expertise....

The national body for the union that represents PCC's instructors and APs has endorsed this last model, coming up with an interesting slogan: in higher ed, we should count what counts. I remain deeply convinced that this last model is the best both for students and for teachers, in the long term. But I am also aware that some faculty at PCC would have picked one of the other models, had they had the choice. And I often call to mind a participant in one of our first assessment classes who voiced a very strong positive response to the second model above, and to the company that is most successful in that endeavor, EduMetry.

EduMetry has a varied approach to assessment activities in Higher Ed, and I recently came across another aspect of their business plan in the Chronicle of Higher Education. EduMetry has started outsourcing grading to India through their program called Virtual-TA. (See http://chronicle.com/article/Outsourced-Grading-With/64954/) In this part of their business, they devise rubrics for assignments, train and norm a group of assessors on use of the rubric, and then ask their assessors to provide detailed, rich feedback on student papers -- feedback of the sort we all might dream of providing, but are often too busy to actually give. One sociology instructor at a community college is quoted in the article:

And although Ms. Suarez initially was wary of Virtual-TA—"I thought I was being replaced"—she can now see its advantages, she says. "Students are getting expert advice on how to write better, and I get the chance to really focus on instruction."

It is a new world of assessment in Higher Education. With so many things changing so rapidly, and with many different kinds of responses to the changes being pioneered at different institutions, tuning in to assessment news provides lots of surprises. I used to think that education was a service that couldn't be outsourced. But EduMetry has surprised me. The logic of it is just an extension of the thinking that leads to the first two assessment models I described above -- if teachers are experts at teaching and psychometricians are experts at assessing, and we should each do what we are experts in, then assessing should be peeled off from the work of instructors and handed over to someone else.....

I heard an instructor say the other day that grading was the least satisfying part of his job, and he wished he could teach without having to grade. I wonder if he would really be so happy if EduMetry granted his wish.... Instead of doing more assessing, like we have asked instructors to do at PCC, the day may come when we will do no assessing at all. In the back of my mind, I can hear David Rives (president of Oregon AFT) talk about the de-skilling of the instructor's job...

I say that perhaps we should be careful what we wish for....

Wednesday, November 2, 2011

Two theories of Assessment

by Shirlee

As assessment has become ever more prominent in education and the non-profit world -- a trend that has been building for the past 25 years, according to my reading -- two distinct sets of reasons for assessing have been given. Often they live side by side in uneasy alliance. But they are very different, and that difference could make a BIG difference for faculty lives.

Assessment Theory #1 -- ACCOUNTABILITY

The problem assessment is supposed to solve:
Lots of money goes into education and non-profits, and it is hard to tell if it is well-spent or wasted.


The thinking: Lots of money, for example, is given to feed the hungry. Is it being spent well? Are there fewer hungry people than there would have been without the spending? Could it have been spent more effectively? It is hard to say.... Education is in this same boat -- lots of money gets spent, but how can we tell if it is being spent effectively? In this thinking, this aspect of the non-profit/education world contrasts sharply with commercial enterprises. That is, in a market environment, there is a straightforward way to tell if an investment was a good one from a business point of view -- did it lead to the creation and selling of a good or service that enough people wanted to buy so as to make a profit? In a market, it is possible to compare investments over time, using the metric of return on investment, to judge whether or not money was well-spent. But in education, as in most of the not-for-profit world, it is harder to tell what is "working" and nearly impossible to tell if one kind of investment in education -- for example, college scholarships to kids from poorer families -- yields a better or worse return on investment than another strategy -- for example, full-day pre-kindergarten for kids based on income eligibility.
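(To make that market yardstick concrete, here is a toy worked example in Python. All the numbers are invented; the point is only that in a market, both the cost and the return are observable, so investments can be ranked.)

```python
# A toy illustration of the return-on-investment yardstick described
# above (all numbers invented). In a market, both quantities below are
# observable, so competing investments can be compared directly.
def roi(gain: float, cost: float) -> float:
    """Return on investment: net gain as a fraction of cost."""
    return (gain - cost) / cost

print(roi(gain=120_000, cost=100_000))   # 0.20 -> a 20% return
print(roi(gain=105_000, cost=100_000))   # 0.05 -> a 5% return

# For scholarships vs. pre-kindergarten, the "cost" is known but the
# "gain" (learning) has no agreed-upon measure -- which is exactly the
# gap the assessment movement's metrics are meant to fill.
```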

In this context, assessment is intended to provide metrics that enable valid comparison of one possible use of funds with other possible uses. In the world of higher education, this leads to the call for standardized exams of basic skills -- usually writing and critical thinking -- to be given to all students (or a representative sample of all students), across all schools. The scores on these standardized tests would allow easy and clear comparisons of one institution with another. Indeed, there are several exams competing to fulfill this dream, such as the CLA (or, for community colleges, the CCLA) or the ACT CAAP. Lots of colleges and universities have responded to the call for assessment using this theory of assessment by mandating the use of one of these tests.... After the administration gets the results, they then inform faculty of how well (or badly) they are doing. Poor results lead to lots of administration pressure on under-performing faculty.

Assessment Theory #2 -- CONTINUAL IMPROVEMENT

The problem assessment is supposed to solve:
Our world is in dire need of the skills and competencies characteristic of educated people -- primarily communication, collaboration, and critical thinking skills -- and this mission is so important that we must devise ways to identify and quickly roll out best practices.

The thinking: Teachers tend to be intrinsically motivated, curious innovators. However, they have historically worked in great isolation. One teacher might informally track what works and doesn't work in his/her classroom -- and make changes based on observations -- but this is primarily a matter of independent practice. Education is so important in our changed world, though, that we have to take these kinds of individual practices and formalize them. In this way, assessments allow us to formally track the results of innovations, and more quickly and easily isolate components of the most effective education practices, so they can be shared more widely.

In this context, assessments must be in the control of faculty members, as it is their familiarity with their student population, curriculum, content, and current practices that leads to the determination of what to assess. Standardized assessments for a general population of students will not help (for example) history faculty to discern best practices for critical thinking in history. Faculty must control the assessment process, because they have the detailed familiarity to know how to focus an assessment.


At PCC, we have been firmly within the fold of Assessment Theory #2. It is, after all, what our accreditors have asked of us -- evidence that assessment of student learning is being used to improve both teaching and learning. It is also the process that is most respectful of faculty, so it is not surprising that a faculty Council would come up with this sort of recommendation. Additionally, it is the only approach that could lead to results that would actually be useful to instructor practice.

Notice, however, that Theory #2 leads to ever more customized and distinct assessments, while Theory #1 leads to ever more generalized and standardized assessments. These two theories lead to incompatible pictures of what GOOD ASSESSMENT looks like. The more locally useful a particular assessment is to a given SAC, the less useful it will be to compare one college to another....

I mention all this because one of my heroines in the Assessment World is Trudy W. Banta, editor of Assessment Update: Progress, Trends, and Practices in Higher Education. In the most recent edition (Sept-Oct 2011) she has written a piece warning that "... the promise of assessment for improvement might be diminished by increased focus on assessment for accountability."

She writes:
...[A] significant portion of US colleges and universities may be moving in the direction of providing to the public information based on scores of standardized tests of generic skills that inevitably will be used to compare quality of institutions. It just seems to be human nature to hone in on those easy numbers when we seek a standard for making comparisons.

I offer two thoughts in this context:
(1) We, here at PCC, are lucky to have both an administration AND an accrediting agency operating from Assessment Theory #2. This is the approach that fully respects faculty as THE key players in driving program improvement. Although assessment takes up hours and energy, when done by faculty (and done well) it leads to results that make a difference in student lives -- it leads to continual program improvement and better learning outcomes.

(2) While we here at PCC are getting better and more proficient at assessing our programs and disciplines -- witness the amazing variety and innovation of assessment approaches across our SACs, with ever-better strategies and instruments -- it simply may not be enough to stave off the rush to standardization. At this point, there is still a national push for assessments that fit the picture from Theory #1. Assessing to compare one institution with another is very, very, very different from assessing to be able to do our important job ever better. While both are part of the rise of assessment, they lead to very, very, very different pictures of what good assessment looks like. A one-size-fits-all test looks ridiculous on one model, and is the only thing that will work on the other....


Wait. I really meant:
very, very, very, very different.....

I'll keep an eye out and let you know what I see on the standardization horizon. Until then, we will continue with our PCC plan of asking for a splendid locally-controlled profusion of SAC-specific assessment!!

Wednesday, October 26, 2011

I like your blog a lot, but can you tell me what a SAC is?

I got an email from someone within the California higher ed system with that line in it. So I tried to tell him....

Every instructor who teaches credit courses at PCC is a member of a SAC, I said to him. The SAC is made up of all the teachers who teach the same kinds of classes.

After a bit, he replied something like:

"So all the teachers in a given discipline get together to make decisions about their field, regardless of their address. Is that it?"


I would have liked to say "yes" to that question..... But I have learned, from these three years talking assessment far and wide across the district, that quite a few of our faculty members don't know that they are part of a SAC. Just like my e-mail correspondent, they don't know what a SAC is. He's from California, though. These are PCC teachers who don't know what a SAC is.... even though they are part of one!


Are you one of the faculty who don't know what a SAC is?
Or are you one of the faculty who don't know that many PCC faculty don't know what a SAC is?


First, the acronym: SAC stands for Subject Area Committee.

PCC is a multi-campus college. Each campus has departments, with department chairs, deans, and a campus president (among many other important people). But the departments are typically not discipline-specific. For example, in my department at Cascade, there is one dept chair who hires, assigns classes to, and evaluates instructors in:

anthropology
economics
geography
history
humanities
philosophy
political science
psychology
religion


(I think he must get tired.)

But within PCC, curriculum decisions (among others) are supposed to be made by the curriculum experts, and those are the instructors in a particular subject-area, from all the different campuses, all across the district -- like all the history teachers, or writing teachers, or microelectronic engineering teachers.

Voila! All the instructors who teach in a particular subject area are members of that Subject Area Committee.


SAC participation is mandatory for full-time instructors. But I have discovered a wide variation among SAC practices regarding part-time faculty. In some SACs, invitations are extended to PT instructors, and there is a clear welcome mat put out. (SAC chairs usually rotate among FT faculty, and the warmth of the welcome can vary with whoever currently holds the chair.) In some SACs, invitations are extended to PT instructors, but there is not much outreach. In some SACs, invitations are not extended, and there is little to no consternation over low attendance rates. Some full-time faculty think it just isn't fair to ask PT teachers to engage in SAC work, given the pay inequity -- it would be further exploitation of over-worked adjuncts, they say. Some FT teachers say that, if all the part-time teachers in their discipline came to the SAC meetings, they would outnumber the full-timers by quite a bit. If they come, should they be allowed to vote? After all, it is the FT faculty who have ultimate responsibility for SAC work, not PTers.

Here's the problem, from my point of view. Program or discipline assessment is different from assessing individual students' learning. It is different from measuring the effectiveness of an individual teacher. Program assessment asks each SAC to ask the question, "How are we doing?"

To ask that question, there has to be a "we" -- and we need to know who the "we" includes.

For purposes of program/discipline assessment, adjunct faculty are part of the "we." At PCC, at the recommendation of faculty, program assessment is to be done by faculty, within the institutional structure of the SAC. If the SAC isn't inclusive -- if fewer than half of the SAC members even know there is a SAC -- the program assessment is not going to be adequate to assess the program.

Do you know who is part of your "we"?

I am posting this edition of Assessing PCC on the in-service day set aside for SAC meetings....
And I am asking you to consider who attended your Subject Area Committee meeting, and who was absent.
How many members of your "we" were there? Are you a member of a "we" that you don't even know about?

And what are you going to do about that...?

-- Shirlee

Tuesday, October 18, 2011

More effective, more satisfying, and less isolated teaching...

by Shirlee Geiger, chair of the faculty Learning Assessment Council, 2011-12

Suppose there was a way to become a more effective teacher, to make every class you teach more satisfying and enjoyable to you -- in addition to being more effective for the students -- and to end the feeling of isolation that is commonly reported by teachers. Would you do it?


The New Yorker from Oct 3 has one of those long, fascinating articles they often publish. All the benefits to educators listed above appear on the 4th full page of the piece (page 49). My partner pointed the essay out to me, saying it was relevant to the work of the Learning Assessment Council at PCC. Since the assessment work has taken over my life, I sat myself on down and read the essay.

The essay is by Atul Gawande, who has gotten quite famous of late writing about common-sense ways to improve patient outcomes in medicine. (He is the son of my aunt's doctor in Ohio, so I have heard about him for a while now.) The pressures on medical practice in the U.S. provide a striking parallel to the pressures on higher ed -- how to get better outcomes, while increasing access, and all at lower costs. One of the common responses to these pressures in both fields is to assess outcomes in order to identify practices that work best, and then roll those out in order to increase both effectiveness and efficiency.

Dr. Gawande is writing about the benefits of coaching. He enlisted a tennis coach, sort of by accident, who in just minutes was able to show him how to increase the speed of his serve by 10 mph. After that, Gawande wondered if coaching might help him improve his surgical outcomes in the same way it improved his tennis game. (As a competitive fellow, Dr. Gawande had been comparing his medical outcomes to the national statistics for a variety of procedures, and discovered that he had plateaued as a surgeon -- while still beating the national average, he was no longer increasing the rate by which he was beating them.) (Please note that assessment of outcomes is the whole back story here!)

He followed up his curiosity by engaging a surgical coach. And he got better.

It is the middle of his essay, however, that is devoted to coaching in the education world. It is almost like Gawande had heard about Critical Friends Groups -- which are starting to form at PCC right now. The parallels between medical coaching and the Critical Friends Groups are what my partner had spotted when recommending the essay to me. Fourteen faculty and APs from PCC were trained as coaches in the Critical Friends techniques this past summer.

If you join one of our CFG groups, what can you expect? Well, Gawande quotes a veteran teacher at length, talking about her experience. Here is some stuff from page 49:

Elite performers, researchers say, must engage in "deliberate practice" -- sustained, mindful efforts to develop the full range of abilities that success requires. You have to work at what you're not good at. In theory, people can do this themselves. But most people do not know where to start or how to proceed. Expertise, as the formula goes, requires going from unconscious incompetence to conscious incompetence to conscious competence and finally to unconscious competence. The coach provides outside eyes and ears, and makes you aware of where you're falling short.... [this is] discomfiting information to convey, and [it must be done] directly but respectfully.

[A teacher was asked if she liked the coaching she received.] "I do," she said. "It works with my personality." [....] She told me that she had begun to burn out. "I felt really isolated, too." Coaching had changed that. "My stress level is a lot less now." That might have been the best news for the students. They kept a great teacher, and saw her get better. "The coaching has definitely changed how satisfying teaching is."

In the Critical Friends Groups, all participants take turns as both coaches and coached. This is different from the model Dr. Gawande describes in his essay. But in other ways, the idea is the same. The groups foster the kind of respect and trust needed to be willing to talk about not just your strengths as a teacher, but the areas where you can improve. The Critical Friends technique is nationally recognized, and is one of several similar initiatives that have been developed in the past decade based on this idea of professional development. It was selected by the Learning Assessment Council as the best fit with PCC and our work together.

Now the question is:

Who wouldn't want work that is more satisfying, more effective, and less lonely?

Do you want to be the best teacher you can be? Join a Critical Friends Group. For more information, contact Sally Earll at 971-722-7812 or sally.earll@pcc.edu.

Tuesday, October 11, 2011

PCC Technology Woes!

Shirlee Geiger is the current chair of the faculty Learning Assessment Council

The Fall term started with some serious technology issues at PCC. Email and phone communication were slowed, and occasionally did not function at all. The problems highlighted how much we have all come to depend on instant connection and contact. Some students would ask me, at the start of a class, if I got their emails. (The answer was usually, "No!") Their communications were routine and mundane -- about having to come late to class or leave early, or a question about a course requirement that hadn't been cleared up in the face-to-face session. These conversations got me to thinking about how email and voice mail have changed the nature of teacher-student relationships. I know that as an undergraduate (lots of decades ago!), I didn't expect access to my college instructors. I guess I vaguely knew they had offices, but I did not seek them out. I would watch some students try to sneak in quick one-on-one conversations with a teacher before or after class, but to me it always seemed sort of rude or intrusive. If the teachers had phones, I sure didn't know the numbers...And departments had secretaries way back then, who answered department and instructor phones and either passed calls through or took messages. I don't know that there was even a way to dial a teacher directly....All this meant that teachers were distant, to my mind, not exactly people. But I knew that they knew a lot of things. Because of that, I was sort of afraid of them.

I don't think distant, fear-laced relationships are optimal for teaching or learning, but it was what we had back then. I am guessing that, had I been able to call a teacher, or dash off an email, it would have made a difference in my attitude as a student. I think it would have helped....

Now students are used to sending off a quick email, and often expect a fast response. I put my phone number on my syllabus, and encourage people to use it... and they do! I feel, as a teacher, much more approachable than my teachers were to me. Things have changed. It didn't happen fast, but incremental changes have added up to expectations of routine contact between teachers and students, outside of classrooms. It has changed the profession of teaching -- how instructors allocate their time, and what a typical daily workload looks like.

This year, I am thinking that a similar thing is happening with program and institutional assessment. Lots of small changes -- in attitude, expectation, and routine -- are starting to add up. Assessment is now where email was a decade ago.... installed in our SACs, beginning to be used, but only starting to get embedded in our day-to-day work life. But I think a time is coming when we routinely will ask for the data. How did that change in instruction/curriculum/prerequisites/degree requirements affect student learning? We will want to measure, so that we can track the difference our innovations make....And we will look back and ask how we ever got by without our metrics....

Our accrediting agency, NWCCU, has noticed the changes at PCC, and is satisfied that we have "hastened our progress" to routinely use assessment of student learning to improve instruction and learning. Thanks are due to many, many PCC community members who stepped up to the new demands of the "accountability movement," and began devising and implementing ways to tell if we are making the difference in student lives that we are promising. To see some exemplary SAC assessment reports, go to http://www.pcc.edu/assessment

I am wishing the Tech people here at PCC well as they try to figure out the problems. There is no going back now -- not to the days before email, or to the days before assessment.

Wednesday, May 25, 2011

What does program assessment look like?

Here are some photos of members of the math SAC working together, using a process offered by the PALs -- Program Assessment for Learning. The PALs are a sub-group of the Learning Assessment Council who can help SACs move from just tabulating assessment results to figuring out what those results mean about the quality of a program, and especially where the good job you are doing now can get even better..... This is what our accrediting agency wants of us -- not just jumping through ever more meaningless hoops, and the filing of ever more silly reports. Instead, SAC members have the power to create an assessment instrument designed to answer their own questions about how effectively they are advancing student learning.

Want to schedule a session of the PALs for your SAC? Email Paul Wild at pwild@pcc.edu or Sally Earll at sally.earll@pcc.edu.



Wednesday, May 18, 2011

data-driven educational practices

Michele Marden is a Math Instructor at Sylvania, and a member of the Learning Assessment Council

Study Skills and Self Regulated Learning: Can students self assess? Can we help them do so?

Occasionally a student asks me for advice about how to study math. My responses until very recently were along the lines of “work more problems” or even the less specific “study more.”

Recently I have had a realization: I don't know how to help my students overcome their struggles, because I never had significant struggles with math and I have had precious little training (i.e., none) in how one best learns math. This is complicated by the fact that I enjoy math and understand its value, while many of my students do not. It is embarrassing that it took nearly 13 years for this realization to solidify!

Last year on a five-hour flight back to Portland, I read a study skills book. It discussed general studying tips and different studying methods for various subjects. Some of the suggestions were obvious, others were not. All went beyond my most common suggestions of “work more problems” or “study more.” None of them were ideas that I shared regularly with students, until now. For my lower level classes, I incorporate “study skills 101 for math.” Sharing study skills has certainly helped some of my individual students, but I always wonder if there is something more I could do…

At a recent math conference, I attended a session about Self Regulated Learning (SRL) given by Lawrence Morales of Seattle Central Community College. SRL goes way beyond my study skills 101. From my brief introduction at this session, I believe that SRL helps students recognize and handle lack-of-motivation issues and also gives them tools to evaluate their own learning. Note: SRL can be used in all disciplines/programs.

Self Regulated Learning might be a tool for faculty to help students recognize their lack of progress (or lack of motivation) through a framework that would let them self-correct before they are so discouraged that they quit college. Morales described incorporating SRL in his classes as peeling an onion: It has to be an ongoing process where students are trained to self-regulate as opposed to discussing it in one or two class meetings.

SRL is research-based and claims to increase student learning. Below is some of the data from a CUNY study, taken from a slide Morales shared at the conference session.

[Data is from slide 16 of the PowerPoint from the session. See http://bit.ly/orwasrl for the entire PowerPoint given by Morales.]


The CUNY Study: Results

Developmental Students        SRL    Control
Completed Course              73%    67%
Passed final exam             54%    34%
Passed course                 50%    33%
Passed COMPASS post-test      47%    27%

Intro Math Students           SRL    Control
Passed final exam             73%    50%
Passed course                 68%    49%
Passed COMPASS post-test      64%    39%

Intriguing results!!!
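(If you want the sizes of the gaps at a glance, here is a quick back-of-the-envelope script using the numbers from the slide above -- nothing more sophisticated than subtraction.)

```python
# Percentage-point gains for the SRL groups, using the numbers from the
# CUNY slide above (a back-of-the-envelope check, nothing more).
cuny = {
    "Developmental: completed course":  (73, 67),
    "Developmental: passed final exam": (54, 34),
    "Developmental: passed course":     (50, 33),
    "Developmental: passed COMPASS":    (47, 27),
    "Intro math: passed final exam":    (73, 50),
    "Intro math: passed course":        (68, 49),
    "Intro math: passed COMPASS":       (64, 39),
}
for outcome, (srl, control) in cuny.items():
    print(f"{outcome}: +{srl - control} points")
# Every gap except course completion is 17-25 points -- large effects
# for an educational intervention.
```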

Maybe this will help me give some additional support that will help struggling students succeed.

Suggested readings by Morales:

  • How Learning Works: Seven Research-Based Principles for Smart Teaching, by Ambrose et al.
  • Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64-70.

If you know about SRL or are interested in chatting about it, please post a comment or contact me.


Tuesday, May 10, 2011

Kendra and Shirlee invite you to help figure something out!

Kendra Cawley is PCC's dean of instructional support, and Shirlee Geiger is the current chair of the faculty Learning Assessment Council

Here is a question you might find interesting:

Should all PCC SACs be responsible for assessing for all PCC core outcomes?


Background for the question
Last fall, PCC SACs were asked to create two-year plans for program assessment. Lower Division Collegiate SACs were asked to focus on the core outcomes, and to assess for communication and one other (of their choosing). Career Technical SACs were asked to pick their "biggest" degree or certificate, map the degree/cert outcomes to the core outcomes, and then assess their students for the degree/cert outcomes. 93% of all SACs filed their two-year assessment plans in time to get feedback from the peer review session held in November. Forty faculty from across the district met to talk over the plans they had been assigned to read, in teams that spanned all our common divides -- LDC and CTE, full-time and adjunct, big campus and smaller campus.

In that session, as at other points in the process this past year, faculty asked the question:
do all SACs have to assess for all core outcomes?

One reason for answering that question with a resounding "Yes!" is that the core outcomes are what we all have in common -- they are what define our shared purpose and our ultimate promise to students, the community, and the taxpayers who have supported us so generously by passing our bond proposals.

One reason for answering with a resounding "No!" is that the very point of locating program assessment in the SACs is the firm conviction of members of the Learning Assessment Council that assessment activities must be meaningful -- and the only hope for assessments to be meaningful is that they be created by faculty, based on what we want to know. If a given SAC doesn't see itself as teaching to a particular core outcome, assessing for it would just be silly and a waste of everyone's time.

Or maybe your answer isn’t so resounding, but conditional, or situational, or even uncertain. Tell us why.

You will have a chance to hear more about this issue at the 3rd Annual Assessment Circus, coming to Cascade Campus on May 20th, 9 AM to noon. Anyone who pre-registers will be sent a follow-up survey to record their preference. (A later follow-up will go to all faculty, but the results of the "informed" group will be tallied separately.) You can also post your ideas here, by using the comments feature....


We hope to see you there!

Shirlee and Kendra

Tuesday, May 3, 2011

Assessment for Excellence by Alexander Astin

by Shirlee

An institution's assessment practices are a reflection of its values.

I came across this statement -- offered as "a basic premise of this work" -- in the opening pages of the book Assessment for Excellence, by Alexander Astin. There is lots and lots and lots (and lots) of stuff written about assessment these days, so it takes something catchy to keep me reading. I am a values-driven sort of gal, so the idea that we can tell what someone cares about in education, by looking at how they do assessment -- well, it kept me reading.

Astin says that the word "assessment" covers two very different activities.

The first is basically a kind of measuring. He says most faculty measure things as a form of record keeping -- because they are required to do so as part of their job. We give tests, and record the scores, and are thereby able to calculate grades and turn them in..... This is assessing as measuring.

The second assessing activity requires using the measurements for individual and institutional improvement. In this activity, we ask what to make of what we measured. What does it mean? This is inherently evaluative, and requires a clear-eyed understanding of basic educational purposes and motives.

At this point in the annual assessment process, many SACs have gathered the items they will measure. Some have already done their measuring. If assessing were just measuring, we would record the numbers (as the good record-keepers we are), and move on to something else. But we are now asked to turn to the deeper, more important aspect of assessment -- asking what the measurements mean, as we keep a focused eye on the point of all our work: student learning.

Astin also contrasts three different ways of understanding "excellence in education."

(i) He says some people think the best colleges are the most resource-rich. They have beautiful campuses, hefty endowments, big sports stadiums and incredible labs... and with these resources, they attract the students with the highest scores, and then charge them a hefty price.

(ii) He says the second notion of "excellence" leads to roughly the same listing of "best to worst" -- but based on the idea of "reputation." The schools that attract the most talent (and can thereby turn away the most talent) are the best by a reputational standard.

(iii) But the best way to measure "excellence" in education (according to Astin) is neither via resources nor reputation. He says what matters is "the development of talent." The institutions that do the most to actualize their students' potential are the best.

Here at PCC, we are asking faculty to develop their skills not just in measuring and record-keeping, but in evaluating what they have measured. What do our scores say about how we are doing by our students? How can we use these results to serve our students even better?

In this, faculty will need to develop more meaningful talents than those of keeping good records. We must, in collaboration, look deeply at our measurements as searchers-after-the-meaning. The meaning that matters is the meaning that helps us excel... Even as we develop new talents (beyond record keeping!) in ourselves, we will be finding ways to better develop the talents of our students.

How we do assessment says who we are and what we care about, says Astin. Like him, I care about people becoming their very best selves... And, following his lead, I like what our assessment process at PCC says about who we are.

Who are we?

We are the ones working hard to develop the talents most needed for our troubled and complex world, so that the lives of those who come after us will be better for our having been here... We are working hard to make sure we do this job, and do it well. We are assessing for excellence. Just like Astin says we should....

Thank you for all you do....

Tuesday, April 26, 2011

The Completion Agenda

as discovered in New Orleans by
Shirlee Geiger

Last year at the American Association of Community Colleges annual convention, held in Seattle, I felt right at home. For one thing, coffee stations were everywhere, and the caffeine buzz could be heard several blocks away. This year, in New Orleans, it was very hard to find good coffee. Even the famous stuff with the chicory (served with a side of powdered-sugar-sprinkled donuts) was a long walk away, and the place didn't even open until 9. I was astounded by the cultural differences between the hot, muggy, slow Southern life there on the banks of the Mississippi and the hyper-tensive, hurry-and-get-it-done-yesterday Pacific Northwest life we live here on the Willamette...

I was also astounded by the difference in the visibility of Assessment. Last year, in Seattle, there were more sessions on assessment than I could attend. Sylvia and I had to split up to get to them all... This year, there wasn't a single session with "Assessment" in the title. I was feeling a bit forlorn (as well as sleepy, without my daily dose of caffeine), but eventually realized that assessment had not left the scene. It had just receded to the background as an assumption so widespread it didn't need to be mentioned.

Of course, we assess, doesn't everybody?

So what was front and center at the AACC this year? The Completion Agenda. It was everywhere, as in ubiquitous. All over the darn place. There was no way to get away from it.

If you want a cool and clever interactive widget sort of introduction to the problem that the Completion Agenda is supposed to address, please go here:
http://completionagenda.collegeboard.org/

Don't keep reading until you take the test for Oregon.
.
.
.
.
.


Did you take it? (Really, I think you should.... It won't take long.) (That's the kind of assurance needed for hyper-tensive caffeine-fueled Pacific NWers like me.)
.
.
.
.
.

OK, now that you have played with the widget, let's think about what this means for us.

The Obama Administration's Education Department wants a higher proportion of the U.S. population not only to attend college, but to finish college -- having earned meaningful degrees.

This means that more of our children need to make it through high school.


But graduating from high school and being ready for college are not (as we all know) the same thing. Does the push for HS completion mean more remediation will be needed at community colleges?

Well, if it does, that is a problem.... There is a move to fund colleges -- not on the traditional measure of how many students are served (FTE, or full-time equivalent enrollment) -- but on the basis of degrees awarded. This is a global trend. Here is a place to take a quick tour: completionbasedfunding.pdf. Since remedial classes use up limited financial aid without generating credits toward graduation, there is a risk that as less-college-ready HS students are funneled into college, the college completion rate will plummet. If funding is tied to completion, then our funding will plummet....
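
To make the worry concrete, here is a back-of-the-envelope sketch in Python. Every dollar figure and completion rate below is invented, chosen only to show the direction of the effect, not its real size:

```python
# Back-of-the-envelope comparison of two funding formulas.
# All numbers are hypothetical and purely illustrative.

def fte_funding(students_served, dollars_per_fte):
    """Traditional model: funding follows how many students are served."""
    return students_served * dollars_per_fte

def completion_funding(students_served, completion_rate, dollars_per_degree):
    """Completion model: funding follows degrees awarded."""
    return students_served * completion_rate * dollars_per_degree

students = 10_000
rate_before, rate_after = 0.25, 0.20   # completion dips as more remediation is needed

print(fte_funding(students, 4_500))                        # 45,000,000 either year
print(completion_funding(students, rate_before, 18_000))   # 45,000,000 before
print(completion_funding(students, rate_after, 18_000))    # 36,000,000 after
```

Same students served, same work done -- but under the completion formula, a five-point dip in the completion rate becomes a twenty percent budget cut.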

But there is another problem, as well. One speaker mentioned a study (and I don't have the reference, sorry... not enough coffee) of people who had completed the credits for an associate's degree, but never applied to have the degree awarded. In follow-up interviews, the people said that an AA or AAS would not do anything to increase their desirability in the workplace... why bother picking up a degree if it doesn't help? So are we being pushed to award ever more degrees, at the same time that our degrees have less and less purpose and use in the workplace?

In our new world of Higher Education, on display at the AACC in New Orleans, it is clear that assessment of learning outcomes is the norm, and the tie between degrees awarded and outcomes assessed is ever tighter. Now comes the sound of the funding shoe dropping... assessments tied, via completion rates, to budgets.

It is the kind of thing to keep a person up at night.... even if you are not wired from drinking too much coffee...


Tuesday, April 19, 2011

Learner Responsibility in Learning Assessment

By Cara Lee, Mathematics Faculty, Part-Time

Cara teaches at Cascade, and has presented lively sessions on student empowerment in PCC's Teaching Learning Centers

With fierce determination, I stood in front of my Math 65 class at the SE Center in week 5 of last term. I had been feeling increasingly frustrated and drained after each class, as the responsibility for planning, facilitation, and assessment was on my shoulders, along with the weight of the learning for 35 students.

In my mind I wanted to chew them out for not stepping up to the plate, not doing their homework and not being fully present in class. I knew, however, that a stern lecture or speech would simply yield another “Whatever.”

Instead, I said, “Do you know what I love about teaching college?” Students started to look up from their papers. “You are adults and you are in charge of your own learning. You don’t have to be here, but you are here - by choice.” They gave me their full attention as they sat up straighter, smiled and immediately became more engaged.

In the discussion about learning assessment, I have been trying to figure out where the learners come in. What is their role and what are their responsibilities? Isn’t it their job to learn and to know whether they’ve learned the material or not?

The familiar adage comes to mind, “You can lead a horse to water but you can’t make it drink.” I think this is the mindset of teacher-centered teaching, where the focus is on what the teacher is doing. If the students don’t get it then it’s their fault. They haven’t taken responsibility.

But rather than place all of the responsibility on the students or the teacher, I think the answer lies in the middle. We can form a partnership with our learners on the first day. Agreements are made on the first day whether we do it consciously or not. Most learners are adept at sensing what is actually required of them and whether they will have any control in the process.

In Learner-Centered Teaching: Five Key Changes to Practice, Maryellen Weimer shares the way her colleague took the horse metaphor one step further. “He said it was the teacher’s job to put salt in the oats so that once the horse got to water, it was damn thirsty” (p.103).

Many students do not have experience being responsible for their own learning, and don't have the skills to know what they don't know. It felt like many of my learners in the Math 65 class wanted me to pour the water into their mouths and tell them when to swallow, while they complained about the process. But I was the one who had assumed too much responsibility for their learning, and that is what I needed to correct.

In class that day I handed out a mid-term self-assessment form for each student to complete. They each wrote their goals for the class in order to determine how they were doing. They calculated their current grade and listed the number of hours they were studying per week compared to the 10 recommended (hint, hint). They listed other resources they were using. They decided whether or not they were meeting their goals and wrote out a plan for the rest of the term.

I didn’t know what was going to happen but I was very impressed by the shift in the class. The learners did accept the responsibility that I gave back to them. The complainers stopped complaining and started working and asking more questions. The biggest complainer became the hardest worker. With the awareness that they had a choice, they made a different choice.

So in learner-centered teaching, what are we as teachers responsible for? According to Weimer,

“We do have an obligation to show (not tell) students the value and necessity of learning. We have an obligation to make our content relevant, demonstrate its power to answer questions, and otherwise show its inherent intrigue. Once a student's interest is piqued, we have the responsibility to lead them to all the learning resources they need. As the student learns, we have the responsibility to monitor the process and offer constructive feedback and assessment” (p. 103).

And what are students responsible for?

“Fundamentally, the responsibility to learn is theirs and theirs alone. We can try to force them into accepting that responsibility along with the obligation to grow and develop as learners, but we do them a much greater service if we create conditions and develop policies and practices that enable them to understand their responsibility and that empower them to accept it” (Weimer, p. 104).

I work hard to show interesting and relevant motivations for my content, and I also think that if I am the hardest worker in the class then there is a problem. Some of these shifts are subtle, yet very powerful and only the beginning, I think, of the transformation from teacher-centered to learner-centered teaching. My initial response to the learning assessment movement was, “Oh great, more work for the teachers,” but now I think it’s a different type of work for the teachers and more work for the learners – or more ownership at the least.

I highly recommend Weimer’s book, which gives five key changes to practice. The five changes are the balance of power, the function of content, the role of the teacher, the responsibility for learning, and the purpose and processes of evaluation. She gives many concrete examples of how she centers on her learners in her classes.

Inspired by the Anderson Conference, I have made three self-assessment tools for my students, which you are welcome to use and modify to fit your style. Self-reflection is one of PCC's core outcomes, and I am excited about using these tools to show students how to take charge of their own learning. You can find these tools, as well as the latest handouts for my TLC talk, Create the Students of your dreams: 3 ways to empower and motivate students, at http://www.pcc.edu/staff/index.cfm/1394,12879,30,html.
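
If you'd like to tinker before downloading, here is a minimal sketch in Python of the arithmetic the mid-term form asks students to do -- figure a current grade and compare weekly study hours to the recommended 10. The point totals and hours are invented for illustration:

```python
# A toy version of the mid-term self-assessment arithmetic.
# All field names and numbers are invented for illustration.

points_earned = 312        # hypothetical points so far
points_possible = 400
study_hours = 6            # hypothetical hours per week
recommended_hours = 10     # the recommendation mentioned above (hint, hint)

current_grade = 100 * points_earned / points_possible
print(f"Current grade: {current_grade:.1f}%")   # 78.0%

gap = recommended_hours - study_hours
if gap > 0:
    print(f"Plan: add {gap} study hours/week to reach the recommended {recommended_hours}.")
else:
    print("Study hours meet or exceed the recommendation.")
```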

Now, would someone please pass the salt...


Weimer, Maryellen. (2002). Learner-Centered Teaching: Five Key Changes to Practice. San Francisco, CA: Jossey-Bass Inc., Publishers.