Friday, January 14, 2011

Rarely is the Question Asked: Is Our Professors Teaching? Part II

Heather Cox Richardson

Randall asked a good question in his post wondering whether or not college and university professors are encouraged to improve their teaching. He has inspired me to blog about teaching issues in a more systematic way than I have before.

Today the topic that is consuming me is assessment. This is not a new obsession, either on my part or on that of the profession. We’ve talked about assessment for years . . . but what have we learned?

What, exactly, do we want our students to learn in our classes? Long ago, I figured out I should design my courses backward, identifying one key theme and several key developments that would be students’ “takeaway” from a course. That seems to have worked (and I’ll write more on it in future).

But I’m still trying to figure out how to use assessments, especially exams, more intelligently than I do now. My brother, himself an educator who specializes in assessments, recently showed me this video (below), which—aside from being entertaining—tears apart the idea that traditional midterms and finals do anything useful in today’s world.

Shortly after watching the video, I happened to talk separately with two professors who use collaborative assignments and collaborative, open-book, take-home exams. They do this to emphasize that students should be learning the real-world skills of research and cooperation just as much as—or more than—they learn facts. As one said,
“facts in today’s world are at anyone’s fingertips . . . but people must know how to find them, and to use them intelligently. This is a skill we can teach more deliberately than we currently do.”

These two people are from different universities and are in different fields, but both thought their experiment had generally worked well. One pointed out—as the video does—that the real world is not about isolation and memorization; it’s about cooperation to achieve a good result.

The other said she had had doubts about the exercise because she had worried that all the students would get an “A.” Then she realized that it would, in fact, be excellent news if all her students had mastered the skills she thought were important. When she actually gave the take-home, collaborative assignment, though, she was surprised—and chagrined—to discover the same grade spread she had always seen on traditional exams. She also saw that some of her student groups had no idea how to answer some very basic questions, and that she would have to go back over the idea that history was not just dates, but was about significance and change.

And that is maybe the most important lesson. The collaborative exam revealed that there were major concepts that a number of students simply weren’t getting. So she can now go back and reiterate them.

I’m still mulling this over, but I do think I’ll experiment with collaborative assessment techniques. Historians have some advantages doing this that teachers in other fields don’t. We can ask students to identify the significance of certain events, to write essays, and to analyze problems. With the huge amount of good—and bad—information on the web in our field, though, we could also ask students to research a topic, then judge their ability to distinguish between legitimate and illegitimate sources (something that might have helped Joy Masoff when she was writing her Virginia history textbook).

As I’ve been thinking this over, a third colleague has inadvertently weighed in on it. He discovered students had cheated on a take-home exam, working together and then slightly changing each essay to make them look original. At least an assigned collaboration would eliminate the problem of unapproved collaboration!


Randall said...

That's a very provocative talk and video.

Reminds me of a piece in the Chronicle in 2009, which challenged the idea of "different learning styles" that ed experts always tell us about. This may argue against what the gentleman said in the video. But perhaps not?

Here's a bit from that:

"Almost certainly, you were told that your instruction should match your students' styles. For example, kinesthetic learners—students who learn best through hands-on activities—are said to do better in classes that feature plenty of experiments, while verbal learners are said to do worse.

Now four psychologists argue that you were told wrong. There is no strong scientific evidence to support the 'matching' idea, they contend in a paper published this week in Psychological Science in the Public Interest. And there is absolutely no reason for professors to adopt it in the classroom." . . .

"'Lots of people are selling tests and programs for customizing education that completely lack the kind of experimental evidence that you would expect for a drug,' Mr. Pashler says. 'Now maybe the FDA model isn't always appropriate for education—but that's a conversation we need to have.'"

Unknown said...

I'd like to explore ways of grading students that point toward more overarching, long-term goals, as Heather suggests. Seems like a lot of the issues are in the intro surveys -- by the time we're working with majors and grad students, we have a clearer idea of how to judge their progress. But these intro classes are usually subject to a set of outside influences, because they fulfill general education requirements. So it seems to me that we first have to discuss what the overall goals are, to understand what we're about.

I don't really see how forcing students to jam a few facts into short-term memory for a test contributes to their developing cultural literacy (if that's what we're about), or critical thinking (if that's what we're about). Nor do short answers or blue books do much for their ability to write (if that's what we're about), unless they've already mastered a set of basics that most freshmen and sophomores probably haven't got yet. This is just my opinion from my own educational experience, from assisting professors at a public institution, and from raising two high-schoolers. My prior teaching experiences mostly involved specific outcomes, like preparing people to pass NASD licensing exams or to sell computers. I'm looking forward to doing this with my own history classes -- hopefully sometime soon!

Rand said...

It's interesting to consider that in the math and science worlds, coursework centers on problem sets, which, while not explicitly co-operative, are often assumed to be done by study groups working together.

My experience with the social sciences, on the other hand, has been a concentration on essays and less formal writing assignments, which are harder to do co-operatively without verging on plagiarism.

The exception to this rule is encouraging and grading group discussion, which, while often useful, has the downside of being less objective (it's harder to fairly assess a point you personally disagree with when a student is arguing it to your face rather than on a neatly typed piece of paper, and there's the dilemma of handsome students), and it's a more taxing challenge for the educator, since encouraging group discussion is hard, hard work sometimes.

hcr said...

My only real experience with this sort of assessment in history was when three students made a documentary under my supervision. They had different interests, skill sets, and abilities. I was shocked at how quickly they melded into a team, with each leading in their own specialties, while learning from the others. Their project was brilliant, and it is not a stretch to say it gave them important experience for their future careers (each ended up in a field that reflected his or her role on the project).

But that raises the issue that all three of them were highly motivated. What would have happened if there had been a fourth student involved who didn't want to be? Would s/he have gummed up the works? Learned anything? Contributed?

As always, it's easy to teach the ones who want to learn. How do we deal with the ones who don't?