Wednesday, January 26, 2011

Grade Fog and..... Fog Lights

Assessment Blog Jan 25 2011



Extra! Extra! Hear All About it!!

There is fast-breaking news in the assessment world. The Lumina Foundation (which is funded in part with Gates money) has just released a report that has the potential to shake up Higher Ed in profound ways. It is called The Degree Qualifications Profile. The U.S. Higher Ed system has been playing catch-up with the European Union (and their Bologna Process.) And this counts as a significant step forward.

What's it all about? Diligent readers of this blog may be familiar with the phrase "grade fog." Since teachers are in charge of how they determine student grades, it is impossible to meaningfully compare one student's grade of B (from Instructor #1) to the next student's grade of B (from Instructor #2.) #1 might try to factor in effort, rewarding students with good grades for trying hard. #2 doesn't care about effort -- or attendance or participation -- just the results. Those are very, very different grades of B. And that's in one institution, where both instructors belong to the same department or SAC and are teaching under the same CCOGs (or equivalent.) Comparisons get all the more obscure as we go from one institution to another, across divides of public and private, research centers and community colleges, for-profit and non-profit schools.
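
To make the fog a little more concrete, here is a small sketch (my own illustration -- the weights, scores, and cutoffs are invented, not anyone's actual grading policy) of how two grading formulas like those described above can hand the very same B to very different performances:

    # Hypothetical grading formulas for Instructor #1 and Instructor #2.
    # All numbers are made up for illustration.

    def grade_instructor_1(results, effort):
        # Instructor #1 factors effort into the final score.
        return 0.7 * results + 0.3 * effort

    def grade_instructor_2(results, effort):
        # Instructor #2 ignores effort (and attendance, and participation).
        return results

    def letter(score):
        return "B" if 80 <= score < 90 else "not a B"

    # Student A: modest results, heroic effort.  Student B: solid results, little effort.
    print(letter(grade_instructor_1(results=78, effort=95)))   # B  (0.7*78 + 0.3*95 = 83.1)
    print(letter(grade_instructor_2(results=85, effort=40)))   # B  (results only)

Same letter grade, very different students -- and very different meanings.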

Into this foggy mess, the Lumina Foundation has just put out a preliminary report. They are setting outcomes associated with "quality" degrees. The purpose is to have clear and explicit outcomes that are generally accepted, across institutions, for college degrees (associates through masters -- doctorates are not yet covered.) And wherever there are outcomes, assessments are not far behind. At this point, the document is very general and abstract. The plan, it appears, is to use this as a framework for on-going conversations about how to fill in the details...

Is this a good thing for Higher Ed? Well, like most everything in life, the matter appears mixed.

From my point of view, outcomes coming from a non-profit foundation sound better than outcomes dictated by the Department of Education (with funding tied to them.) And setting out outcomes can be better or worse, depending on the assessments that follow. The biggest drawback of the No Child Left Behind approach (to my mind) is not that outcomes became required for each grade level, but that certain high-stakes tests became the one and only way they were measured. So I will be waiting to see what happens when the assessment shoe drops (so to speak.)

Also from my point of view, though, one of the most vexing and intriguing pieces of the assessment whirl is what is often referred to as "measuring the unmeasurable." Here at PCC, we have language for some of what we are trying to do in our Core Outcomes. We promise that our students will become increasingly self-reflective and willing to turn toward our collective social and environmental problems (instead of running away.) We say our students will learn to understand who they are in this world through increased awareness of cultural variation and vast differences in the shared human venture of "meaning making." How has the Lumina Project benchmarked these vital aspects of education?

This is what the report says:


In addition, many colleges and universities emphasize their role in fostering personal growth and helping students examine their values and commitments. But such elements of institutional mission rarely are specified as criteria for awarding degrees. Therefore, they are not explicitly included in this Degree Profile, even though values reflection and personal growth are inherent in many of the competencies that the Profile does include.


I think I'll be staring at this paragraph a long, long time. If we measure what matters, and we are not measuring this...... well, you can see where that reasoning goes.


Here's the link to an Inside Higher Ed take on things:
http://www.insidehighered.com/news/2011/01/25/defining_what_a_college_degree_recipient_should_know_and_be_able_to_do

Here's the Lumina Foundation page:

http://www.luminafoundation.org/

Here's the report itself:

http://www.luminafoundation.org/publications/The_Degree_Qualifications_Profile.pdf


If you work in Higher Ed, this report will change your job. Check it out!

Tuesday, January 18, 2011

Testing, testing, 1,2,3....

This week I want to direct your attention to a disconcerting book just being published, with some main points summarized at Inside Higher Ed.

Many people have been worried that the emphasis on "accountability" in colleges and universities would lead to the universally required adoption of some high-stakes tests, similar to the pattern in K-12 under "No Child Left Behind." Members of the faculty Learning Assessment Council, as part of our first year of inquiry (2008-9), investigated the tests most often used for this purpose, and the organizations that create and administer them. One in particular, the CLA (Collegiate Learning Assessment), looks poised to win the testing wars. It has been adopted by all universities in the California and Texas Higher Ed systems, and is administered at Lewis and Clark (closest to home.) The test also comes close to the vision of the original Spellings Commission -- to provide a meaningful way to compare the results of one institution to another. The results are bench-marked based on information about the institution, its student population, and other demographics. The test has a pre- and post-test design: a sample of incoming students is tested, and then a sample of students leaving with degrees. The difference is the "value added" by the college experience.
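
To see what "value added" means in the simplest terms, here is a toy sketch (my own illustration -- the scores are invented, and the real CLA uses benchmarked, statistically adjusted comparisons rather than a bare difference of averages):

    # Toy pre/post "value added" calculation.
    # Scores are hypothetical; this is not the CLA's actual methodology.

    def mean(scores):
        return sum(scores) / len(scores)

    entering_sample   = [1020, 980, 1110, 1050, 990]    # sample of incoming students
    graduating_sample = [1150, 1090, 1210, 1130, 1170]  # sample of students leaving with degrees

    value_added = mean(graduating_sample) - mean(entering_sample)
    print(f"Estimated value added: {value_added:.0f} points")

The interesting (and contested) questions are all in what gets adjusted for before that difference is taken.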

The report in Inside Higher Ed uses results of the CLA to look generally at how Higher Ed is doing in value added (which they say is not very good), and then to try to isolate what characteristics in a student's experience account for the best gains.

There is a lot to think about -- not just in the results of the study, but in the methodology. The CLA looks to me to be the best of the tests out there (it is the only one that uses essay answers instead of multiple choice or true/false.) But is it good enough to be used in this way -- to tell how colleges and universities are doing on their core mission?

At the council's recommendation, PCC did not adopt the version of the CLA created for community colleges. We thought meaningful assessment -- useful for program improvement -- needed to be closer to the people who matter, the instructors. But if the CLA continues to be widely adopted, and used in ways like this study, there may come a time when we will have to use it. Either it will be mandated (!!) or its use will be so standard that it would hurt us to opt out. Either way, the CLA and its uses are an important piece of the assessment story in colleges and universities.

Shirlee
chair, faculty Learning Assessment Council 2010-11

Tuesday, January 11, 2011

Assessment and Blind Dates

Jeff Pettit’s Fear

The fear of assessment feels like the fear of a blind date. It seems like some person I don't know will sit down, take a brief glimpse of who they think I am, and then criticize me for something I do or something I say or something I don't do or something I don't say, or just outright fail to see how great I am, despite my flaws.

Currently, for the most part, assessment is like the morning look in the mirror: working alone, fixing what can be fixed, briefly regretting the imperfections, … self-critiquing before heading out the door. I agree with Phil Seder's post from Nov 2 that this kind of self-critique is largely how most individual teachers improve, and the only improvement most teachers have time for. I don't agree with him that assessment should be abandoned to the status quo of teachers-as-islands, and Peter Seaman's post from Nov 8 clearly outlines a better way.

Assessment is not brief; it is recursive. Assessment is not a glimpse, but a continually improving probe into any area to which it is applied. It is not done by someone I don't know; here at PCC it is done by you and by me.

Assessment is also several separate things. It is Instructors assessing students to maximize their learning; I’m not going to talk about that much. It is also assessing Instructors to maximize their instruction; this is the look in the mirror. I’m going to talk about that a little. Most importantly, at least at this point in time for PCC overall, it is assessing courses, programs and everything departmental in the learning process. This is what I want to talk about, because this is what we fear.

We at PCC are in a unique position and I am relieved and glad for it. We have been told to assess PCC (courses, programs, departments, etc.) effectively, so we must. This is a new atmosphere from decades past, and some are saying, "No I won't, because I'm afraid when you look at courses, you might take a peek at me and find fault in me and fire me." This resistance to assessment is tilting at windmills -- windmills turned by the winds of change. But when viewed without fear, assessment is not a mandate to change, but an opportunity to improve. And by improve, I mean improve everything from curriculum, to textbook choice, to course outcomes, to instructor and student support. While other educational institutions have improvement driven from the hierarchy above, or from private assessment companies, or in the case of No Child Left Behind, from the government, we at PCC get to trust the job of assessment to ourselves.

If we are to assess, this is the least frightening scenario. But it is, for most people, frightening nonetheless. Of all the good people at PCC with whom I've discussed assessment, none of them seems afraid of assessing themselves, or assessing others, or assessing courses, or departments, or administrators. But nearly all hold some fear of others assessing them! This fear spills over into a fear of assessment overall. When we imagine others looking at us (or even near us) in order to make improvements, it is difficult not to be afraid. Accepting criticism from others (whether it reflects on our own shortcomings or the shortcomings of our department) requires trust, vulnerability and openness. So the real question is this: do you trust your colleagues? Should you trust them? If you trust them completely, there should be no fear. If you don't trust them, the problem is either your difficulty in trusting people, or their difficulty in trustworthiness. Maybe both. If you have trouble trusting people, stop it. But if our colleagues are not trustworthy, what then? I don't know. Perhaps faith.

Or perhaps we can remove them from the process. The strangers and the untrustworthy are not asking what we are doing in our classes. We are simply tasked with assessing and improving our overall method of improvement. We are not being asked to prove we are good at our jobs.

This leads me to my vision for assessment: relevant empirical data is regularly and anonymously gathered from all Instructors teaching a set of particular courses. Student performance is pooled and posted internally. Neither student names nor instructor names are recorded, nor is the data organized or connected by class. However (and this is where I am talking about individual Instructors improving themselves), Instructors can privately compare their students' data to the overall data if they record it before submitting (perhaps the assessment is even used by most Instructors as part of the students' class assessment). Instructors can use this information to improve individually. More importantly, on the department level, based on the overall data, improvements and support are offered to improve courses overall.

But the key is data, accurate data. Data data data. And accurate data can only come when we are free of fear -- when everyone participates, when no one tries to gloss over imperfections as if they are on a blind date, when we all work together to improve PCC. This scenario can work because currently all that is being asked for externally is a workable (and continually improving) method of gathering data, not the data itself. No one other than I would need to see how well my students are doing; no one outside the department would need to see how well the department is doing. Privacy, in fact, will increase the accuracy of the data, increase participation, and reduce (eliminate?) fear! As long as the method of generating accurate and meaningful data is continually improved, the machine is alive and evolving and limitless.
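
Here is a minimal sketch of what that anonymous pooling could look like (my own illustration -- the scores, function names, and storage are all hypothetical, not a proposal for any particular PCC system):

    # Hypothetical sketch: anonymous pooling of course assessment scores.
    # No student or instructor names ever enter the shared pool.

    from statistics import mean

    shared_pool = []  # posted internally; numbers only, not organized by class

    def submit_scores(scores):
        """An instructor submits a class's scores; only the numbers are pooled."""
        shared_pool.extend(scores)

    def compare_privately(my_scores):
        """An instructor who kept a private copy compares it to the overall pool."""
        return mean(my_scores) - mean(shared_pool)

    # Three instructors submit; one keeps a private copy to compare against the pool.
    submit_scores([72, 85, 90, 64])
    submit_scores([88, 79, 93])
    my_class = [70, 82, 77, 91]
    submit_scores(my_class)
    print(f"My class vs. pool average: {compare_privately(my_class):+.1f} points")

The instructor sees where their own students sit relative to everyone's, the department sees the overall pattern, and nobody's name is attached to anything.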

True to my discipline, I will close with a mathematical proof.

Given:

  1. Course improvements are implemented by Instructors.
  2. Each instructor can improve.
  3. Each Instructor either is or is not able to improve, and either does or does not desire to improve.

Now, let T be a member of the set of all teachers who are able to improve and desire to improve (that's probably everybody). Because T needs to improve, T is imperfect. If T relies solely on T's own self for improvement, T's improvement is limited due to that imperfection.

Therefore, T should not rely solely on T for improvement; otherwise improvement will be finitely limited.

Likewise, unless T's mentor is a demigod (which would contradict given 2), if T relies solely on advice from other teachers (T1, T2, … , Tn) for improvement, the limit of improvement might be larger, but it is still finitely limited due to individual imperfection (as shown above).

Since T cannot improve without limit based solely on other T's, any course, degree or department created by a set of T's is limited and will eventually reach its limited potential.

However, recursive assessment of courses is unbounded and therefore provides the possibility of infinite data, which must, by definition, be infinitely useful.

Therefore, in order to achieve limitless improvement: all teachers in the set of teachers that wish to improve themselves, their courses, their departments and PCC overall should set up an accurate and continually improving system applied in such a way that no one is afraid of it. The continually improving system that everyone creates should gather data about how well students are learning, and base improvement on that data. In doing so we will not only continually and limitlessly improve, but also help others to overcome fear! Now everything is better and everyone is happier!

Woo!



Jeff Pettit is a math instructor here at PCC. He was an active (and fun!) participant in the Assessment class in Fall 2010 (The class is offered through the Learning Assessment Council -- watch for announcements of future classes.) He agreed to write a blog post, and reveal to us some of his thinking process about assessment....

Tuesday, January 4, 2011

Don't miss the Anderson Conference

The Anderson Conference is coming!! This year TLC coordinators have invited some local assessment experts, who will come share both assessment theory and practice. Their main focus will be assessment at the course level. The conference title says a lot, I think:

Shifting from a Grading Culture to a Learning Culture: Assessment Theory & Practice


The sessions rotate through some of our campuses. You are welcome to come to some sessions, even if you don't make them all. For each campus session attended, part-time (PT) faculty will receive a $25 stipend.



January 27th at Rock Creek, 1:00pm-4:15pm
Keynote: Student Assessment That Boosts Learning

January 28th at Cascade, 9:00am-12:00pm
Keynote: Assessment for Learning Using Rubrics

January 28th at Sylvania, 1:00pm-4:00pm
Keynote: Descriptive Feedback

The TLC coordinators have forwarded to me the bios of the two main presenters (with breakout sessions offered after each keynote.) So I am sharing them with you here:

Judy Arter has worked at all levels of assessment from college to elementary school and from large-scale high-stakes testing to classroom/course assessment for learning. Her passion is formative assessment (assessment that supports student learning). She has developed performance assessments and rubrics for many contexts and subject areas.

Judy has facilitated over 1200 trainings in the area of student assessment since 1978. She worked at the Assessment Training Institute in Portland, OR from 1999 to 2010 and was director of Northwest Regional Educational Laboratory's (now Education Northwest) assessment unit from 1990 to 1999.

Judy is a co-author of Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance (2001), Assessment FOR Learning: An Action Guide for School Leaders (2005), Classroom Assessment for Student Learning: Doing It Right -- Using It Well (2004), and Creating and Recognizing Quality Rubrics (2006).
She has a PhD in special education (University of Illinois, Champaign-Urbana, 1976) and a BS in mathematics (University of California, San Diego, 1971).

Loren Ford has a Master's Degree in Psychology and is a Licensed Professional Counselor. Over the last 33 years he has taught numerous psychology and history courses at Clackamas Community College. After years of using assessment strictly to give grades, he is experimenting with strategies that use student assessment differently: (1) to motivate students to do practice work, and (2) as an instructional methodology to help students learn more quickly and deeply. For example, he is developing rubrics to help students understand the communication and reasoning proficiencies necessary for success in school and daily living.

The conference breakout sessions will address such topics as grading strategies, rubrics, providing effective feedback, and more.