Tuesday, May 31, 2011

Notes from Grad School: The Last Professors

Dan Allosso

In a 2010 review of Frank Donoghue’s book The Last Professors, D.R. Koukal mentioned that while reading it he had several pangs of survivor’s guilt, as a member of the “dwindling community” of tenured humanities professors. This is an interesting sentiment, which could open a lively discussion on the role of tenured faculty in the changes facing the educational system, but that’s a topic for a future post. Donoghue resists the idea that this is a new crisis. He calls attention to the historical griping of business magnates like Carnegie, Birdseye, and Crane against the academy, and to F.W. Taylor’s claim that progress happens only when people use “originality and ingenuity to make real additions to the world’s knowledge instead of reinventing things which are old.”

Donoghue also questions assumptions regarding the traditional bundle of activities that go with being a professor. Among these are the beliefs that “professors are authors,” that “our scholarship determines our relative prestige,” and that “scholarship is integrally related to teaching.” Donoghue suggests that “the typical academic monograph sells 250 copies and goes largely unread, except in institutional venues of evaluation.” He argues this element of academic work is a product of the quest for tenure, and that once this goal is achieved many professors lose interest in research. It might be fruitful for grad students like me to try to "unbundle" the various aspects of our chosen profession, ask ourselves which of them we really want to focus on, and then pursue opportunities that match. We might even find that the traditional system is not the ideal solution, which might alter our perspective on the prospect of change.

“Any meaningful debate about tenure,” Donoghue says, “has to start with the fact that it is slowly but surely disappearing, and the current workforce in higher education is unwittingly hastening its extinction.” But when he says the current workforce, Donoghue does not mean just tenured professors. “Tenured and tenure-track professors currently constitute only 35% of college teaching personnel” in America, he says, and “this number is steadily falling.” Non-tenure-track instructors are part of this workforce, and “In no other workforce is there such a wide disparity, both in income and in day-to-day life, between groups of people whose jobs are, in part at least, so similar,” he says. Donoghue notes that the use of adjunct faculty varies widely: “at Stanford, 6.4% of full-time faculty members are off the tenure track, while at Harvard the figure is 45.4%. One pattern, though, is indisputable: those sectors of higher education that are currently expanding at the fastest pace–community colleges and for-profit institutions–are most resistant to the idea of a tenured faculty. Nationwide, 65% of the faculty at two-year institutions are part timers, and 80% are not on the tenure track.” These are scary numbers, viewed from Donoghue's perspective. But what is it like to teach in a place without tenure if your dream was to teach? What is it like to work in a place without such a rigid, caste-like disparity between roles?

Donoghue describes what he terms for-profit universities as, at best, vocational and professional training institutes, and at worst, mere degree-mills capitalizing on the wage disparity between holders of high school and college diplomas. He quotes a memo from a manager at a for-profit university, assessing the team of instructors based on student evaluations: “most of you do an excellent job . . . if you score below 4.0–I will be talking with you directly. We cannot retain instructors with scores in the 3.0 range. Have a good day!” Although this message is a bit brutal, as Donoghue says, it also seems to underscore an issue of accountability he is less enthusiastic about addressing. He chides the University of Phoenix for granting an MBA to Shaquille O’Neal, but does not seem equally concerned with the weekend “executive MBAs” granted by prestigious academic institutions. Donoghue criticizes the financial performance of these for-profit universities as well as their mission, as if there is no thought of profit (financial or otherwise) at traditional institutions. “They focus on the tight relationship between curriculum and job preparation and they appeal primarily to the older, working adults who are steadily becoming the typical American college student,” he says. But do we really want to oppose working people who give up their leisure time in order to try to make a better life for themselves and their families?

Donoghue seems to view the question of why undergraduates go to college as a battleground where the humanities must overturn students’ (and administrations’) market calculations. In the future, he suggests “the BA and BS will largely be replaced by a kind of educational passport that will document each student’s various educational certifications from one or several schools, the credentials directly relevant to his or her future occupation.” But “this will not be the whole picture. Like American society as a whole, with its widening gap between haves and have-nots, America’s universities will grow increasingly stratified. The elite, privileged universities and colleges will continue to function much as they do today, championing the liberal arts and the humanities and educating the children of the elite and privileged for positions of leadership . . . the gulf between these elite universities and institutions that educate everyone else will widen in new ways that will complicate our efforts to define both the idea of higher education and the concept of access to higher education.” While the "educational passport" at first seems like a utilitarian instrument for meeting job requirements, the idea it contains of a lifelong process of education, belonging to the student and portable between schools, has a certain appeal. In addition to a BS and an MA, for example, I hold several certifications, including an NASD principal's license and a UNIX System Administrator's certification from a Tier-1 University's technical night-school. If access to elite institutions does become more limited, flexible and effective educational options for the rest of us will become more important than ever.

The Last Professors presents a series of facts that are interesting and surprising. For example, “nearly 80% of the total PhD’s in the country are awarded by just 133 universities. This is a staggering imbalance, as those…universities make up a tiny fraction–less than 2%–of the 3,500 traditional institutions of higher learning in the country and only one fifth of all institutions that grant PhD’s.” Donoghue also notes that there were “a record 17.3 million students enrolled in college in 2004, up 28% since 1991. Enrollment is expected to increase another 11% by 2013. The image of an 18 to 20-year-old, full-time student in residence at a traditional college, however, is now a figment of the past; only 16% of all undergraduates now fit that profile. Today, the majority of students are over the age of 25, as compared to just 22% in 1970.” Clearly the game is changing, and grad students should begin to think about and act on these changes. Donoghue suggests there are two main things that tenured professors can do to improve their situation. First, they must challenge the main corporatist tenet, “the assumption that a practical, occupation oriented college education leads to a secure job and thus it is crucial to improving one’s quality of life.” The second action that humanists need to take, he says, is to “balance their commitment to the content of higher education with a thorough familiarity with how the university works.” Donoghue suggests tenured professors (and those who want to be tenured professors) “need to resist the tendency to romanticize our work,” and need to better understand the real distribution of work at a university, and the causes and likely effects of this distribution. While I think this is a good suggestion, the pool of tenure-track opportunities is drying up without a corresponding decrease in the number of new PhDs (see graph from 2010).

Some of us may need to think about finding new ways to do what we love, rather than just elbowing our way to the front of the queue. And the prospect of drawing a line in the sand and trying to argue against the vocational goals that bring so many students to higher education seems misguided. The claim that undergraduates have to choose between academic purity and a degree that will help them get ahead in the world sets up a confrontation the humanities cannot win. And it flies in the face of experience, since in many places, the humanities have found at least a relatively safe haven in their role as a vital complement to job-preparation and a key to general education.

Monday, May 30, 2011

A Thank You to Our Troops—All of Them—on this Memorial Day

Heather Cox Richardson

Memorial Day came out of the Decoration Days held after the Civil War. This seems like a logical thing for me, a scholar of nineteenth-century America, to write about today.

Instead, though, I’d like to talk about a group of soldiers that often gets forgotten when we remember our troops. I mean the WACs, the more than 150,000 women who served in the U.S. Army during World War II.

The army had an existing Army Nurse Corps, but in 1941, Massachusetts Congresswoman Edith Nourse Rogers introduced a bill to establish an Army women’s corps, distinct from the Nurse Corps. During WWI, women had worked with the Army overseas, but because they were contract employees they got no housing, food, protection, or benefits. Rogers wanted to be sure that the same did not happen again.

Army leaders objected to permitting women to join the army directly, so officials hammered out a bill that created the Women’s Army Auxiliary Corps (the WAAC). This group was to work with the Army and receive some of the benefits of military service, but not all. The Army would provide up to 150,000 women with food, housing, uniforms, and pay—although at a lower scale than men. Women could serve overseas, but they would not get overseas pay, life insurance, or veterans' medical and death benefits.

Even with these limits in place, the bill went nowhere until Pearl Harbor. Then Army Chief of Staff General George C. Marshall threw his weight behind it, recognizing that using women to type and run switchboards would free men to fight. “Who will then do the cooking, the washing, the mending, the humble homey tasks to which every woman has devoted herself; who will nurture the children?” wailed one congressman. The need to mobilize for war outweighed the strong cultural objections to the new plan. On May 15, 1942, President Roosevelt signed it into law.

The Army Air Force was especially eager for WAACs to work in weather operations, as cryptographers, radio operators, and parachute riggers, and to keep track of personnel records (on the statistical control tabulating machines that led to modern-day computers). Some also flew planes for the AAF.

WAACs also worked for the ground Army, processing men, issuing weapons, tracking supplies, analyzing maps, dispatching boats, servicing equipment, calculating bullet velocity, and operating radios.

So successful were they that five days after the invasion of North Africa, Lt. Gen. Dwight D. Eisenhower requested five WAAC officers—including two who spoke French—be dispatched to Allied headquarters to become executive secretaries. The ship carrying the women was hit by a torpedo. Pulled out of the water by British destroyers, the women survived and served on Eisenhower’s staff for the rest of the war.

The WAACs had proved so useful that it was a crisis for the Army when recruiting fell off dramatically in 1943. As much as Army officers liked WAACs for freeing men to fight, the men fighting—and their female relatives—resented the women whose service took the men from typewriters and put them on the front lines. Popular images of the WAACs were that they were loose women who cast off traditional roles and joined the service to get men. Suddenly, “nice” girls did not join the WAAC.

Here the Army stepped in to convert the WAAC into part of the regular Army: the Women’s Army Corps (WAC). This would not only undercut the tarnishing of the women who wanted to serve their country, it would enable the Army to offer the same protections to women serving overseas as it did to men. In July 1943, Congress created the WAC. The War Department quickly stepped up recruiting.

Immediately, WACs went to the overseas theaters. Three hundred WACs with SHAEF (Supreme Headquarters, Allied Expeditionary Force) translated reports from the French underground and compiled files on German officers. They kept the situation maps current. Other WACs followed the Army to Normandy and took over the switchboards abandoned by the Germans. In the Southwest Pacific Theater, WACs arrived with standard-issue clothing, including ski pants and heavy coveralls. They developed skin diseases from the heavy clothing—the dreaded “jungle rot”—and malaria because they shed their heavy clothes in desperation, leaving themselves vulnerable to mosquitoes. Women in this theater had to be locked in a barbed-wire compound at all times to protect them from the male troops.

Like the men, women demobbed after the war ended. By the end of 1946, only 10,000 women were still in the Army, and they hoped to stay. Earlier that year, Army officials asked Congress to make the WAC permanent. Congress did so in 1948, and the WAC remained part of the Army until 1978, when women were integrated into the regular Army in all branches except combat.

The tens of thousands of women who had served in the Army during WWII braved social ostracism to serve their country. In the process, they proved that women could perform every bit as well as men, even in that bastion of males: the Army.

Those women went back to their homes across America. When they married and had children, they made sure their daughters knew that they could grow up to be anything they wanted.

Thanks, Mom.

Happy Memorial Day.

Friday, May 27, 2011

Ye Complicated Cartoons of Yesteryear

Randall Stephens

Political cartoons of today are quite unlike those of previous decades, and very unlike those of previous centuries. (Caveat: this is just me speaking as an armchair amateur.) Quite a few op-ed cartoons that run in Newsweek, the Globe, the New York Times, and elsewhere are easy enough to figure out at a glance. Many a cartoonist makes his or her case with a visual pun and a one-liner. Look at prints from the postwar era to the present and you might note the grace and simplicity of the style and the clarity of meaning. (See, for example, the famous 1956 Herblock cartoon.)

Not so with 18th and 19th century political and social satires. A tangle of text bubbles, symbols, and, in the case of 18th- and early 19th-century cartoons, gutter vulgarities crowd the page. It is fun to look at these antique prints that fed off political battles and showcased the personality clashes of the day. In my classes we spend quite a bit of time unraveling the meaning of various satirical etchings and lithographs. (Finding the dog urinating on something in an 18th-century political cartoon is like a Where's Waldo game.) And note the typically bizarre sexual quality of the devil in the print below.

So, here's "A political, anatomical, satirical, lecture on heads and no heads; as exhibited at St. J--ms's 1766" from the Library of Congress. And the description from the LOC site runs as follows:

Print shows the Earl of Bute holding up a bust of William Pitt as the starting point of his lecture on "Heads and No Heads"; on a table before him and on a shelf to the left are several busts identified by number with corresponding descriptions in the printed caption. A woman assistant, on the left, says "I hand them in" and the devil, on the right holding a burning candle in one hand and a candlesnuffer in the other, says "I hand them off", two extinguished candles sit on the table. Nine men form the audience in the foreground, each utters a comment, such as, "Good Lord deliver us", "A good Exhibition this", "Yes, the Characters are well drawn", and "My, all Birds of a Feather".

Hmmm. . . . A send-up of types? The accompanying text is a thicket. This one is still a real head-scratcher for me! Chris Beneke? Maura Farrelly? John Fea? A paleocartoonologist? Some enlightenment?

Thursday, May 26, 2011

Tornadoes through Time

Randall Stephens

I'm visiting family in the Kansas City area, on the edge of the infamous tornado alley. (I have many memories of sleeping in the basement while the dark clouds and funnels blew fiercely overhead.) The recent devastation in Alabama and Missouri reminds us of the brutal toll nature can take. My cousin's home in Joplin was obliterated by the twister. Fortunately, she was away from home. This is the worst tornado season here since the early 1950s.

The worst in American history was the "Tri-State" tornado, which ripped through Missouri, Illinois, and Indiana on March 18, 1925. It blasted a path of destruction across the region and took 695 lives.

On the day after it hit, the Chicago Tribune announced: "A tornado tore through southern Illinois late yesterday after lashing Missouri, and then caused considerable damage in Indiana before it died out." Radio and newspapers broadcast the details shortly after the winds died down. "In some places, where the wind struck hardest, whole buildings were moved from their foundation, a grain elevator in De Soto having been carried intact some forty feet." A schoolhouse in the same city collapsed. Only a few pupils in the packed building escaped unharmed.

Midwesterners and Americans across the country were shaken. A swift death by a storm could happen anywhere in the region, and with little warning.

In the era before accurate forecasting and warning systems, Americans had good reason to worry. The deadliest tornadoes in the nation's history could wipe out a community and leave a trail of casualties in their wake. Here's a list of those deadliest, compiled by the National Oceanic and Atmospheric Administration (NOAA) (see the full 25 here):

Rank) Date, Location, Death toll

1) 18 Mar 1925, Tri-State (MO/IL/IN), 695

2) 06 May 1840, Natchez MS, 317

3) 27 May 1896, St. Louis MO, 255

4) 05 Apr 1936, Tupelo MS, 216

5) 06 Apr 1936, Gainesville GA, 203

6) 09 Apr 1947, Woodward OK, 181

7) 24 Apr 1908, Amite LA, Purvis MS, 143

8) 22 May 2011, Joplin MO, 118 (est.)

9) 12 Jun 1899, New Richmond WI, 117

10) 8 Jun 1953, Flint MI, 116

Most of these violent twisters touched down in the heart of tornado country. But historians might ask what factors have made certain storms at certain times and places more deadly than others. What factors have combined to cause the greatest destruction and loss of life? How have poverty and rural isolation factored in?

Technological innovation has lessened the damage and helped prepare civilians for the worst. Over at NOAA Roger Edwards provides some background to that relatively recent scientific research:

The National Severe Storms Laboratory has been the major force in tornado-related research for several decades. NSSL has been a leader in Doppler radar development, research and testing, and has run numerous field programs to study tornadoes and other severe weather since the early 1970s. Others heavily involved with tornado research include UCAR/NCAR, the University of Oklahoma, the Tornado Project, Tornado History Project, and overseas, the European Severe Storms Lab (Germany) and TORRO (UK). Members of the SELS/SPC staff have done research related to forecasting tornadoes for many years. Almost every university with an atmospheric science program, as well as many local National Weather Service offices, have also published some tornado-related studies.

One of the major advances for storm detection and tracking, of course, was Doppler radar, described by NOAA as follows:

Doppler radar can see not only the precipitation in a thunderstorm (through its ability to reflect microwave energy, or reflectivity), but motion of the precipitation along the radar beam. In other words, it can measure how fast rain or hail is moving toward or away from the radar. . . . Doppler radar and severe storms research were joined in the early 1960s when the National Severe Storms Project began in Kansas City, and continue to this day at the National Severe Storms Laboratory (NSSL) in Norman, Oklahoma. The Union City tornado in 1973 began a treasure trove of NSSL research Doppler measurements of supercells and other hazardous storms. In the 1980s, the push to get Doppler radars into warning operations became well-organized as the NEXRAD (NEXt generation weather RADar) program.

We're roughly halfway through tornado season. Let's hope we've seen the worst of it. If not, let's hope early warning systems do their work.

Wednesday, May 25, 2011

Calling All Grad Students

Dan Allosso

Part of the title of my last blog post was “Notes from Grad School.” This is the tentative title of a series of posts by me and other graduate students, about our thoughts and experiences as we prepare to become professional historians. We’re hoping some of these posts will come from grad students who read this blog, whether or not they have posted with us before.

Graduate students in history face a series of challenges and opportunities. Some of these are similar to those faced by all grad students, some of them are shared primarily with grad students in the humanities, and some of them are ours alone. Among the topics we might cover in this series of posts:

Why did we enter a Masters or PhD program in history?

What do we hope to do and what do we expect to do when we complete our programs–especially if those two things differ?

How do we see, and how does society seem to see, the role of historians in our culture–especially if those two things differ?

How will changes in economics, technology, academic standards, and the nature of the historical profession impact our career choices and opportunities?

What is our sense of mission, and do the changes going on all around us signal new opportunities?

These are just a few of the topics on which it might be interesting to see the perspective from graduate school–there are probably many others that will occur to us as contributors come forward, and as the column develops. Contributors need not be experts in order to venture their opinions (as my prior post demonstrates), but in general this is envisioned as a place where criticisms are paired with suggestions for change, and the overall direction is toward solutions.

The perspective of the graduate student, of course, is the perspective of someone who is relatively new to the profession and seeing it with fresh eyes. Naturally, there will be some degree of “reinventing the wheel,” as a new generation of budding historians makes discoveries that may already be well known to older members of the profession. But, since every generation encounters a unique set of circumstances, and since our generation enters the profession during a period of remarkable technological change, we hope these notes will be fresh and interesting for the blog’s general readers.

If you are a grad student interested in sharing your observations about these or other issues, please contact me at dan@danallosso.com. Even if you have just a brief idea or a reaction to something, and you don't want to put together a blog post, drop me a line. I may be able to add your reaction to those of others, and say something more general about what people think as a result. And, if you're not a grad student but have some ideas you really want to get off your chest, fire away! The world is changing all around us--what does it mean for historians?

Tuesday, May 24, 2011

.-- .... .- - / .... .- - .... / --. --- -.. The Telegraph and the Information Revolution

Heather Cox Richardson

On May 24, 1844, Samuel Morse sent his famous telegraph message, “What hath God wrought?” from the U.S. Capitol to his assistant in Baltimore, Maryland. (See the dots and dashes in the title.) Morse had begun his career as a painter, but, the story goes, keenly felt the problems of communication over distance when his wife took ill and died while he was away from home. By the time he got the hand-delivered note warning him that she was sick, she was already buried.
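For the curious, the dots and dashes in the post's title can be checked with a few lines of code. This minimal sketch hard-codes only the letters the title uses (it is not a full Morse table), with letters separated by spaces and words by " / ":

```python
# A minimal Morse decoder covering only the letters in the title:
# ".-- .... .- - / .... .- - .... / --. --- -.."
MORSE = {
    ".--": "W", "....": "H", ".-": "A", "-": "T",
    "--.": "G", "---": "O", "-..": "D",
}

def decode(message: str) -> str:
    # Letters are separated by single spaces, words by " / ".
    return " ".join(
        "".join(MORSE[letter] for letter in word.split())
        for word in message.split(" / ")
    )

print(decode(".-- .... .- - / .... .- - .... / --. --- -.."))  # WHAT HATH GOD
```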

While it had been a personal crisis that inspired Morse to pursue the telegraph, the importance of the new machine reached far beyond families. The telegraph caused a revolution in the spread of information in America. The information revolution, in turn, changed politics.

Notably, historians have credited the telegraph with hastening the coming of the Civil War. Before the time of fast communication, politicians could cater to different voters by making contradictory promises. Antebellum Democrats and Whigs could endorse slavery in the South and attack it in the North. Since news rarely traveled far, their apostasy seldom came back to haunt them.

The telegraph changed all that. It offered voters a new, clear window on politics. Now reporters could follow politicians and send messages to editors back home.

But faster communication did not necessarily mean accuracy. On the contrary, partisan editors tried to position their journalists in critical spots so they could control the spin about what was going on. They happily spun stories that would discomfit politicians they opposed. As the sectional crisis heated up, the telegraph enabled partisan editors to portray far away events in ways that bolstered their own prejudices.

On-the-spot reporting took away politicians’ ability to ignore the gulf between North and South. It forced white Southerners to defend slavery, and made Northerners sensitive to the growing Southern power over the government. The political parties could not remain competitive nationally, partisanship rose, and the country split. The result was bloody.

Information might come faster with the telegraph, but it was not necessarily more accurate. The same could be said about radio and television, which provided more information than ever, but still used a strong editorial filter.

Now a new tool has the potential to deliver the accuracy the telegraph promised. The internet provides even faster and more thorough information, with far less editorial filtering than ever before. This has given us instant fact-checking, in which politicians who vehemently deny saying something often find one of their statements with those very words posted to YouTube. It also gives us immediate commentary by specialists on a subject under discussion, judging the value of a proposed policy.

The internet has also given us a sea of bloggers who follow local developments and produce verifiable information that would never make it onto an editorial desk but that might, in fact, turn out to be part of a larger pattern. Joshua Marshall at www.talkingpointsmemo.com put such information together during President G. W. Bush’s second term to uncover the U.S. Attorney removals, a story the mainstream press initially missed altogether.

The web has the potential to break down editorial partisanship, but this accuracy has an obvious stumbling block. Will readers be willing to investigate politics beyond their initial biases to entertain a range of ideas and reach clear-eyed decisions about policies? Sadly, studies so far indicate the opposite, that people use the internet to segregate themselves along partisan lines and reinforce their prejudices rather than to tear them down.

The telegraph initially promised to break the close relationship of politics and the press by giving people access to events unfiltered by partisan editors. It failed. The telegraph only increased the partisanship of the news Americans read. Now the internet has the potential to break the ties between the press and politics for real. But can it, in the face of entrenched political partisanship?

One hundred sixty-seven years after the telegraph tapped out its famous words, we’re still struggling with the same questions.

Monday, May 23, 2011

John Wilson Praises Historically Speaking in Podcast

Randall Stephens

We're thrilled to hear that over at Books and Culture John Wilson has devoted a podcast to the April 2011 issue of Historically Speaking. Wilson, as many of our readers will know, is the editor of the intellectual feast Books and Culture. Launched in 1995, this literary magazine "contains in-depth reviews of books that merit critical attention, as well as shorter notices of significant new titles." You may notice that some of the authors you see now and then in the pages of HS also appear in B&C: Paul Harvey, Don Yerxa, Peter Coclanis, Tal Howard, Tommy Kidd, Bruce Kuklick.

Wilson highlights the forum on Daniel Rodgers's book, Age of Fracture, and the forum on John Lukacs's The Future of History. He also zeroes in on the lead essay by Jane Kamensky on lessons learned from fiction writing and the interview with Kamensky's co-author, Jill Lepore.

According to Wilson, Historically Speaking is "not at all written only for people in the profession. It's written for anyone who is genuinely interested in the whole range of history." We could not have put it better.

He gives the publication high marks for the degree of diversity in its pages. "It's very rare in a publication of that kind," says Wilson, "to get this kind of [political] diversity, despite the constant lip service paid to the notion of diversity. As a matter of fact, in an awful lot of intellectual discussion you get a boring monotony of a certain range of narrow views."

Friday, May 20, 2011

Notes from Grad School: The Coming College Crisis

Dan Allosso

The two wide spots in the 2010 US “Population Pyramid” reflect college-age Americans and their parents. For years (since I was an undergrad, really) I’ve suspected that when a graph of the annual cost of college crossed the annual income of average Americans, there would be a problem. It recently occurred to me (now that I’m a parent of two high-schoolers) that the real issue is slightly more complicated.

I’m not a statistician, but I suspect that if you were to look at the numbers, you’d see some interesting things. First, the average US family size (2000 census) was 3.14, with an average of 0.90 children per family. But wait! 52% of US households had no children at all. The average number of children, in families that have children, is 1.86; for convenience let’s call that 2. This means there are really two groups of people in America: half of us have kids and the other half don't. We probably have different perspectives on education as a social good.
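The census figures above hang together arithmetically: if roughly 48% of households have children, and those families average 1.86 kids, the overall per-family average works out to about the 0.90 cited. A quick sketch (figures exactly as quoted above):

```python
# Sanity check of the 2000 census figures quoted above.
share_with_children = 0.48       # 52% of households had no children
avg_children_if_any = 1.86       # average among families that have children

overall_average = share_with_children * avg_children_if_any
print(round(overall_average, 2))  # 0.89 -- close to the 0.90 per-family figure
```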

So, in a family with children, the average is two children. They tend to be close in age, which means they tend to get to college age at roughly the same time. So, going back to that population pyramid, on average those college-age Americans in that first wide spot have a college-age sibling. About 60% of American kids go to college, and they’re more likely to go if a sibling also goes. So in those college-oriented families, the parents in the second wide spot, centered on ages 45-49, have on average two kids in college at about the same time.

Add to this the conclusions of studies like the one done by the St. Louis Fed, and the picture becomes even clearer. Their “wealth curve” shows that married couples with children have only half the net worth of married couples without children. And the overwhelming majority of the wealth in America is held by people aged 55 to 75. So what we’re looking at is a big batch of Americans coming to college from families of modest means.

Annual average tuition, fees, room and board (TFRB) at four-year private institutions has grown from $18,312 in 1986 to $30,367. Average TFRB at four-year public institutions has risen from $7,528 to $12,796 over the same period. So if you’re the average family described above (two kids, four years each), you’re looking at over $240,000 to send your two kids to private schools, or over $100,000 for public. And the bills come over a six- or seven-year period.

When I was an undergrad, there was no FAFSA. Parents were not automatically expected to pony up the funds. I saved, took out a small student loan, and worked. That’s not even an option for my kids. But beyond the personal implications, what do these changes mean for the American economy? What are the long-term implications of scooping all the savings, home equity, and retirement nest-eggs of middle-class Americans into this one bucket? Or of this group of Americans taking on huge additional debt? And what will happen to an American higher education industry that has become accustomed to these revenues when people can no longer afford them, when credit dries up, or when the population pyramid shifts again in ten years and the narrower bands of parents and children reach college age?

Thursday, May 19, 2011

The Historical Society Announces a Competitive RFP-Based Research Program

Religion and Innovation in Human Affairs: Exploring the Role of Religion in the Origins of Novelty and the Diffusion of Innovation in the Progress of Civilizations

With generous funding from the John Templeton Foundation, the Historical Society is launching a major interdisciplinary grants program in September 2011. It will provide $2.0 million in research support for empirical, conceptual, and interpretive work exploring the role religion may play as a driving force of innovation in human affairs.

The competitive RFP-based research program will award approximately 15 two-year grants of $100,000 each and a few larger grants of $250,000 to support archaeological fieldwork and traditional social science and historical investigation, as well as conceptually-oriented analysis.

Donald A. Yerxa will serve as the Program Leader for Religion and Innovation in Human Affairs; a distinguished board of advisers will be announced this summer.

A request for proposals will be forthcoming later this summer. For more information, visit the RIHA website (under construction) at www.bu.edu/historic/riha

The John Templeton Foundation (www.templeton.org) serves as a philanthropic catalyst for discoveries relating to the Big Questions of human purpose and ultimate reality. The Foundation supports research on subjects ranging from complexity, evolution, and infinity to creativity, forgiveness, love, and free will. It encourages civil, informed dialogue among scientists, philosophers, and theologians and between such experts and the public at large, for the purposes of definitional clarity and new insights. The Foundation’s vision is derived from the late Sir John Templeton's optimism about the possibility of acquiring “new spiritual information” and from his commitment to rigorous scientific research and related scholarship. Its motto, "How little we know, how eager to learn," exemplifies its support for open-minded inquiry and our hope for advancing human progress through breakthrough discoveries.

Wednesday, May 18, 2011

Music History Roundup

Julie Taboh, "National Jukebox Offers Digital Treasures: Library of Congress unveils website featuring thousands of rare and historic audio recordings," Voice of America, May 13, 2011

. . . . The site, called National Jukebox, is a collaborative project between the Library of Congress and Sony Music Entertainment. It offers online access to a vast selection of music and spoken-word recordings produced in the U.S. between 1901 and 1925.>>>

James Oestreich, "An Exploration of the World That Made Bach and That Bach Remade," NYT, May 17, 2011

Bach, as the culmination of the German Baroque, effectively eclipsed his ancestors, whether actual family members or simply musical forebears.

The family connections are fascinating but still murky. There were so many musical Bachs during the era that the surname became synonymous with “musician” in central Germany, and they continue to trickle to light. A new Harmonia Mundi recording showcases a particularly revealing specimen, the “Trauermusik” of Johann Ludwig Bach, a contemporary cousin of Johann Sebastian based in Meiningen.>>>

Chris Wilson, "The Rosslyn Code: Are the Chapel's Mysterious Stone Symbols a Musical Score?" Slate, May 17, 2011

Rosslyn Chapel was deserted when Tommy Mitchell's son, Stuart, stepped into the small antechamber one frozen afternoon last December. Savage blizzards had blanketed Scotland for the previous two weeks, and much of the chapel's staff had left early to beat the icy conditions. The cold was marshalling its forces against the scattered heat lamps inside as Stuart walked to the back of the room, where 13 stone arches crisscross along the ceiling.

Stuart Mitchell, who is 45, is thin and spry with wavy brown hair and emits a rapid-fire laugh when he finds something amusing, which is often. Stuart first took an interest in the Rosslyn Chapel when his father, a former Royal Air Force cryptographer, invited him along for a visit about 10 years ago. As a professional composer, Stuart was instantly enchanted by Rosslyn's angelic stone musicians. He has come back often in the years since, but he still can't get inside the chapel without buying a ticket, which he does begrudgingly.>>>

John Hacker, "Dance caps off Saturday’s reenactment events," Carthage Press, May 17, 2011

CARTHAGE, Mo. An actual 1800s-era dance wouldn’t have had the music speakers or electric lighting found in the Civil War reenactment dance last night. But that didn’t make it seem any different to Chris Palmer.>>>

Lici Beveridge, "Digging up roots: American music history endures," Hattiesburg American, May 13, 2011

I have been to see the New Harmonies exhibit at the historic Train Depot twice already.

The first time I went to see the exhibit.

The second time was to hear musician George Winston talk about harmonica music and perform some songs on harmonica.

About 75 people attended Winston's performance Thursday. I think I learned more about harmonica music than I ever thought possible.>>>

Tuesday, May 17, 2011

Forward to the Past: Debt and Debtor's Prisons

Randall Stephens

How will historians understand the rise in unemployment and the increase in bankruptcy when they look back on our era? (See the graph here from www.uscourts.gov.) What are the historical and cultural ramifications of the economic downturn? Cycles of recession and depression mark major turning points in American history. The panics of 1837 and 1857 upset family life, toppled businesses, and can be charted through the rise in suicide rates and bankruptcies. The depressions of the 1890s and the 1930s shook the world.

In ages past one way to deal with all those folks who could not pay up was to throw them into the clink. If you gambled away your money and lived a profligate life in the 18th century, there was no safety net to catch you.

An entry in Mitchel P. Roth, ed., Prisons and Prison Systems: A Global Encyclopedia (Greenwood Press, 2006) sheds light on London's famous debtors' prison:

FLEET PRISON. Built in the twelfth century, Fleet Prison became London's most famous debtors' prison and was the first building in London constructed for the specific purpose of being a gaol (jail). It was rebuilt numerous times and by the fourteenth century was holding debtors, individuals convicted in the Court of Star Chamber, and those charged with contempt of the Royal Courts. Among its most distinguished prisoners was the poet John Donne, who spent a stint in Fleet Street in 1601, and later William Penn. It was demolished by the Great Fire of 1666 and then rebuilt. Partly because of its prominence as a jail for debtors and bankrupts, it was burned down once more during the 1780 Gordon Riots.

The Fleet Prison had a well-earned reputation for cruelty and corruption. The office of warden, or keeper, was considered a hereditary position. The position of keeper was a highly lucrative position with opportunities to earn fees for providing prisoners with food, lodging, and even short-term release. In the eighteenth century an individual purchased the office of the Keeper of the Fleet for 5,000 pounds. When he stepped down, he then sold the position to the deputy warden for the same price. During its heyday prisoners of both sexes mingled freely, leading one observer to describe it as the "largest brothel in England." Here women could improve their conditions by selling their bodies. The prison was usually overcrowded. In 1774 it held 243 debtors along with 475 members of their families who had nowhere else to go. The prison was finally closed in 1842. (105-106)

How did debtors fare in the colonies? Peter J. Coleman writes of the state of things in Pennsylvania in his book Debtors and Creditors in America: Insolvency, Imprisonment for Debt, and Bankruptcy, 1607-1900 (Beard Books, 1999):

The early treatment of poor and insolvent debtors was not significantly more liberal or humane than in New York, New Jersey, or in some of the New England colonies. To be sure, the Frame of Government of 1682 embodied enlightened principles-that prisons should be workhouse-reformatories rather than mere places of punishment, and that prisoners should not have to support themselves or pay fees-but the legislature modified these concepts almost immediately (1683 and 1684) by requiring debtors to support themselves and by introducing the system of servitude for debt. Nevertheless, it proved exceedingly difficult to formulate acceptable rules governing debtor-creditor relations. The colonists quarreled among themselves and with the proprietor and his governors, and the Crown disallowed many of the early laws, including the act establishing the support and servitude systems and another of 1700 establishing . . . (141)

Reformers began to challenge the system in force in the 19th century. (Click to enlarge the reformist paper to the right.) The following comes from the fabulously useful Gilder Lehrman Institute of American History. This portion is from "Guided Readings: Pre-Civil War Reform":

Imprisonment for debt also came under attack. As late as 1816, an average of 600 residents of New York City were in prison at any one time for failure to pay debts. More than half owed less than $50. New York's debtor prisons provided no food, furniture, or fuel for their inmates, who would have starved without the assistance of relatives or the charity of humane societies. In a Vermont case, state courts imprisoned a man for a debt of just 54 cents, and in Boston a woman was taken from her three children as a result of a $3 debt.

Increasingly, reformers regarded imprisonment for debt as irrational, since imprisoned debtors were unable to work and pay off their debts. Piecemeal reform led to the abolition of debtor prisons, as states eliminated the practice of jailing people for trifling debts, and then forbade the jailing of women and veterans.

Here's a question to ask history students in the classroom: Could something like a debtors' prison come back in the western world? If so, what social or economic forces could lead to that? Or is it an impossibility? If so, why?

Monday, May 16, 2011

The Other Side of the 1960s or "This Is My Country and I Know that I'm RIGHT"

Randall Stephens

There was another side to the turbulent 1960s and 70s. The new conservatism gained ground steadily after 1964, and membership in Young Americans for Freedom grew with it. The "silent majority" became a slogan for all those Americans who had had enough of tear gas, long hair, and endless protests. Yet in the popular imagination this side seldom registers. Host a 1960s party and see who comes dressed up as a John Bircher. Who will arrive sporting beehive wigs and flattops, carrying "Get US Out of the UN" signs, or wearing "AuH2O" buttons? The popular history of the sixties has probably been told too often in a prescriptive rather than a descriptive fashion. Even historians, perhaps, have sided with the young marchers who shook their fists at "the man."

However, recent trends in scholarship are starting to change these perceptions and the way the story of the 1960s and 70s is told. See, for example, the insightful books that have appeared in recent years on grassroots conservatism, the rise of the new right, and evangelical conservatism:

Darren Dochuk, From Bible Belt to Sunbelt: Plain-Folk Religion, Grassroots Politics, and the Rise of Evangelical Conservatism (W. W. Norton, 2010)

Jefferson R. Cowie, Stayin' Alive: The 1970s and the Last Days of the Working Class (The New Press, 2010)

Daniel Williams, God's Own Party: The Making of the Christian Right (Oxford, 2010)

Lisa McGirr, Suburban Warriors: The Origins of the New American Right (Princeton, 2002)

Kevin M. Kruse, White Flight: Atlanta and the Making of Modern Conservatism (Princeton, 2007)

Matthew D. Lassiter, The Silent Majority: Suburban Politics in the Sunbelt South (Princeton, 2007)

Bruce J. Schulman and Julian E. Zelizer, eds., Rightward Bound: Making America Conservative in the 1970s (Harvard, 2008)

Joseph Crespino, In Search of Another Country: Mississippi and the Conservative Counterrevolution (Princeton, 2009)

Kim Phillips-Fein, Invisible Hands: The Making of the Conservative Movement from the New Deal to Reagan (W. W. Norton, 2009)

David T. Courtwright, No Right Turn: Conservative Politics in a Liberal America (Harvard, 2010)

Donald T. Critchlow, The Conservative Ascendancy: How the GOP Right Made Political History (Harvard, 2007)

I've been tooling around with the idea of making a compilation of conservative country/folk songs, sorta anti-protest numbers, that express the other, fightin' side of the 1960s and 1970s. The criteria have to do not with the quality of the music but with the tone of the message. Do the lyrics call for flag-waving, god-fearing patriotism and support for the local police? Does the singer lament the state of America's cities and shout down the feral hippies running shoeless through the streets clutching their bongs and incense sticks? Long, long before the Tea Party raised its collective complaint and before Lee Greenwood made grammarians sick with lines like "And I'm proud to be an American, where at least I know I'm free," others were charting out a conservative protest genre.

So, here's what I've got so far. (Thanks to Scott Hovey for some wonderful suggestions). This is only a start! There's so much more out there.

Merle Haggard - The Fightin' Side Of Me (1968)

Merle Haggard - Okie from Muskogee (1969)

Jimmy D Bennett - Sapadelic (197?)

Don Hinson - The Protest Singer (197?)

The Goldwaters - Down in Havana (1964)

Up with People - Which Way America? (1966)

Lynyrd Skynyrd - Sweet Home Alabama (1974)

Robin Moore and Staff Sgt. Barry Sadler - The Ballad of The Green Berets (1966)

Friday, May 13, 2011

Eating My Words (while Uncovering My Eyes)

Heather Cox Richardson

A while back, when the Texas curriculum flap was in full swing, I wrote about the Texas legislature’s potential undercutting of the curriculum committee that was pushing Texas history standards so far to the right. The legislature had voted to permit schools to spend money allotted for curriculum materials on non-textbook sources. That might actually help, rather than hurt, American history, I wrote, as schools turned to the free primary sources available on the web rather than being tied to the textbook choices of the radicals on the curriculum committee.

Today I fess up to my naiveté. There was a completely different way to approach this funding change, and it’s now underway. It is a way that raises troubling educational issues, but also, perhaps, even more disturbing political ones.

On Wednesday evening, former Governor of Arkansas Mike Huckabee announced his new educational company, “Learn Our History.” The company’s mission is to “combat major shortcomings in current methods of teaching American history.” It claims it will tell “America’s most influential stories, without the filters or biases that too often infiltrate history and social studies classes across the country today.” The company’s first product is a new video series of cartoons about American history: “The Time Travel Academy Video Series.” The series features time-traveling teenagers who visit important events in the nation’s past.

Governor Huckabee’s promotional materials begin with a full-throated attack on history teachers that echoes the Texas curriculum committee. History today is “force-fed” to students “through dry text books, monotonous lectures and boring lessons,” the website announces. Worse, “our children's classes and learning materials are often filled with misrepresentations, including historical inaccuracies, personal biases and political correctness.” Huckabee’s walk-on website character amplifies: “When it comes to American history, our nation is facing an epidemic. Schools across the country have turned their backs on our children by distorting facts, imposing political biases, and changing the message behind the important lessons of our history. As a result, our children aren’t learning what makes America great.”

The video series will combat this travesty with “unbiased” history, promising “to correct the ‘blame America first’ attitude prevalent in today’s teaching.” It celebrates America’s greatness and recognizes and celebrates “faith, religion and the role of God in America's founding and making our country the greatest place on Earth.” The first video available is “The Reagan Revolution.” The second is about World War II.

It should come as no surprise that the lead scholarly advisor to the series is Larry Schweikart, author of 48 Liberal Lies About American History and A Patriot’s History of the United States, one of Glenn Beck’s favorite history books.

So my sunny optimism that the Texas curriculum funding law would bring primary sources and honest historical inquiry into Texas classrooms was probably naïve.

But now my training as a political historian makes me see something more than an educational culture war in this struggle. Here’s why: The videos are designed partly for the huge Christian homeschooling market, but also for classroom use. Apparently,* they cost $19.95 (plus $3.95 shipping and handling). The website says the company envisions more than seventy-five of them.

If they do become a staple in Texas schools, which make up the nation’s largest curriculum market, Governor Huckabee’s new company looks to do pretty well.

Governor Huckabee, of course, is a current front-runner for the Republican nomination for president in 2012.

While I hasten to say that I can’t imagine that anyone in the Texas legislature foresaw the way this has played out, doesn’t the Texas curriculum law, plus the new video series, plus Governor Huckabee’s candidacy equal a situation in which public school funds can be funneled to a specific political candidate?

Kind of flies in the face of the company’s purported mission to keep public schools from “imposing political biases,” doesn’t it?

* The cost is a little hard to figure out from the website, and calling the company telephone number didn’t yield a person.

Thursday, May 12, 2011

David Barton and Popular, Politicized History

Chris Beneke and Randall Stephens

This cross-post comes from a blog entry that Chris Beneke and I wrote for the Christian Century. I paste here a section of "The Daily Show's limits."

"Open conversation that leads to nothing."

That's how Jon Stewart summed up his interview with popular right-wing historian David Barton. He was right. After 30 minutes of glib back-and-forth with Barton (ten of which made it onto TV), Stewart was flummoxed, worn down, unfunny. (Click on the image to watch.)

As the air left the room, the conversation exposed the gaping ideological divide between Americans--and the challenges we face in bridging it.

Conservatives who go on the Daily Show usually end up looking the fool. But Stewart met his match in Barton, an ideological warrior revered by Glenn Beck and Mike Huckabee. Stewart's razor wit and trademark blue index cards were no match for Barton's prodigious memory and unwavering insistence that America's Christian founding has been erased by secular elites.

The show's staff probably thought Barton could be caricatured as a half-crazed ideologue, unconcerned with larger inconvenient truths. Perhaps they figured that a few well-chosen facts that don't fit his God-and-country narrative would render him speechless, that he would crumble under the relentless ironic jabs. But if it were just a matter of enumerating quotations and dates, members of Congress wouldn't be calling Barton to provide them with the founders' views on deficits, stem cell research and stimulus programs. Barton offers his listeners something much more alluring.

One thing we learned from Stewart's tête-à-tête with Barton is that anecdote-ridden claims can't be countered with more anecdotes. What Stewart never articulated was the essential function of history--using the preponderance of evidence to provide a credible context for understanding the past and the present. Barton presents himself as the high priest of founding texts and the arbiter of honest truth. He's not, of course. But it's going to take patient, gritty work to convince folks otherwise.

Barton's carefully crafted image as a just-the-facts historian is key to his success. He insists again and again that we should read our primary-source documents just as we should read the Bible: unmediated. Too many professional historians, he scoffs, simply cite each other and repeat liberal platitudes. read on>>>

Wednesday, May 11, 2011

Lessons Learned from an Environmental History Course

Dan Allosso

This semester I've been a teaching assistant in an upper-level undergraduate course on American environmental history. At the final class meeting, the professor asked the students to take out a sheet of paper and write down 10 things they’ll take away from this course. I've been reading over their responses, which are an interesting combination of curious facts they picked up in class, and possibly life-changing insights.

The most commonly repeated curious fact was that pigs once roamed the streets and alleys of New York City, which I suppose was a surprise to many of the students. Others wrote they had been surprised that Central Park is completely artificial, that the interstate highway system was sold to the American people as a form of national defense, and that the “whole Earth” photographs taken from Apollo spacecraft were one of the catalysts for environmentalism. And that Mexico City has always been one of the major population centers of the Americas (this is a favorite of mine because it was something that I mentioned in a discussion section).

More interesting, from the perspective of the course’s objectives, were responses about the nature of wilderness, the role of government in environmental change, and the role activists and even extremists have played throughout American history in supporting environmental causes. Most of the students successfully absorbed the course theme that not only wilderness but all of our ideas of nature are in fact ideas, which change over time. Several students also remarked on how extensively the land patterns we see today are the result of government actions in the past, from the Northwest Ordinance through urban redlining. Many also remarked on their surprise that major environmental damage, as well as movements to protect the environment, began much earlier in American history than they had believed.

Although several students said they had learned that the government is responsible for creating (or at least not preventing) a lot of the problems we have today, many also recalled positive results of government influence like the national park system, the Civilian Conservation Corps, etc. They seem to have accepted the role of radicals like Henry David Thoreau and John Muir in pushing society in the right direction; one student even went beyond the required 10 things she was supposed to write, remarking that you can't criticize a society from within, but must separate yourself from it as Thoreau tried to do at Walden. One item I was disappointed I did not see was evidence that the students had understood the complicated process of subtle changes in law and social customs described by Ted Steinberg in Nature Incorporated, which they read early in the semester. I think this is crucial to the story of the American environment, so I’ll be spending some time thinking about how to make it a more central part of the students’ experience in my own syllabus.

Probably the biggest action-oriented takeaway in the students’ responses related to meat--and especially to corn-fed beef. Late in the semester, we watched a couple of clips from the documentary King Corn. As a result, many students said they would no longer take what they eat for granted. Food “came from somewhere and is going somewhere too,” said one student. Several said they would begin to look closely at what they eat and would try to eliminate corn sweeteners from their diets (this is something I've actually begun trying to do myself). Perhaps the most satisfying comment came from a student who concluded her list by observing, “I have learned that we must be the change we wish to see!” That makes the semester seem like time well spent.

Tuesday, May 10, 2011

After the Historical Revolutions: Or, When the Tree Falls in the Historical Paradigm Forest, Does Anyone Listen?

Paul Harvey

The following cross-post comes from my fellow blogmeister at our Religion in American History blog. Paul is a professor of history at the University of Colorado Colorado Springs. He's the author or editor of a number of acclaimed books on American religious history, including Redeeming the South: Religious Cultures and Racial Identities Among Southern Baptists, 1865-1925 (UNC, 1997); Freedom's Coming: Religious Culture and the Shaping of the South from the Civil War through the Civil Rights Era (UNC, 2007); and the forthcoming Jesus in Red, White, and Black, co-authored with Edward J. Blum.

A review of (and an announcement of) some challenging new works in earlier American History, and the history of the American West, got me thinking about the changing of the historical paradigm guard–or whether those guards get changed at all by scholarly revolutions. These are questions which affect the course of American religious history, but bear with me for a short detour before discussing that further.

My thoughts first came from reading Charles Mann’s review of Daniel K. Richter’s new book Before the Revolution: America’s Ancient Past (Harvard University Press) in Saturday’s Wall Street Journal (yes, Virginia, I do read the Wall Street Journal! But I'm not sure if the link will let in non-subscribers; if it doesn't and you want to read it, I'm happy to send it to you).

I know nothing about this book other than this review, but I am a big fan of Richter’s earlier work Facing East from Indian Country: A Native History of Early America, a favorite of mine to use in class for its wonderful illustrations of how changing one’s angle of vision creates an entirely different historical sense of a period. Richter also engages in some deft analyses of early American documents of religious history from the European-Native encounter, including John Eliot’s bizarre but fascinating Indian Dialogues, of course the Jesuit Relations, Indian conversion narratives, and various ceremonial encounters at treaty negotiations.

In his review of Richter’s new book, Mann writes:

Every few decades, historians develop a new way of looking at the past. I am not talking about ‘revisionism’ but unifying conceptual schemes, the sort of mental framework that Frederick Jackson Turner created in his argument for the importance of the frontier to our history or that Bernard Bailyn established in his studies of the American Revolution’s ideological origins. Historians debated Turner for a long time and continue to debate Mr. Bailyn. I wouldn’t be surprised if they were arguing with Mr. Richter a decade from today.

When I become King of the World I will permanently ban the word "revisionism" and its variant "historical revisionism" (as I have already banned the words "bias" and "politically correct" in my classrooms, since they have become barriers to thought and discussion), because both terms have been rendered meaningless for precisely the reason Mann explains there.

But the revolution in understanding is not just in the early America of Richter. My longtime friend Anne Hyde, Professor of History at my sister institution Colorado College, is about to publish her magnum opus on the West of the first half of the nineteenth century, Empires, Nations, Families: A History of the North American West, 1800-1860, part of the University of Nebraska Press’s outstanding History of the American West series. If this means anything to you, as it will to some of you, the immediate predecessor to Anne’s book is Colin Calloway’s monumental One Vast Winter Count: The Native American West Before Lewis and Clark.

(As a brief aside, when Anne and I were in graduate school, she always made it to the library at 8:00 a.m. sharp, while I was lucky to drag myself, half hungover and ears still ringing from some too-loud jazz concert the night before, by 10:00 or 11:00 at best, which explains a lot about the great scholarly discipline it took for Anne to finish this huge work, while I was busy watching the Mavs beat the Lakers).

Anne’s work may suggest a paradigm for understanding the history of the West in that era, much as Richter and Calloway have done for earlier American history. Her work also features chapters fully integrating the Mormon West into the larger picture, and too much else of interest to even begin to summarize here.

All of this is exciting to me as a historian. But it also makes me wonder how, whether, and when any of this affects public consciousness of earlier American history. Lately historians have been riled by the amateur scholar David Barton, and of course the bookshelves at your Barnes & Noble are full of everything Founding Father-related. I understand that, but considering the impact of the works above–great on historians, perhaps little on anyone else–makes me wonder about similar questions for religious history.

The older paradigm of American religious history will be familiar to a few blog readers, and some are familiar with its more recent challengers, summarized in works such as Tweed’s edited volume Retelling U.S. Religious History. . . .

Much of the newer paradigm seems to come from removing religious history from the specific story of the American nation-state, and using categories that engage religious experience at its own level rather than as some proxy for political parties or current day culture wars. We've blogged at Religion in American History extensively about some recent classics that move American religious history/studies well down this path. Entries on Leigh Schmidt, Robert Orsi, Kathryn Lofton, and numerous others come to mind.

Again, however, for religious history as for the studies of earlier American history mentioned above, one wonders whether and how this affects any sort of public consciousness or discussion, and whether it’s the job of religious historians to evangelize for these perspectives that challenge or disrupt how we perceive the American religious past (and present). Or maybe scholars should just do their work, let popularizers who are good at popularizing disseminate it to the general public (as happens in science all the time: neuroscientists do their thing, and then someone like Oliver Sacks explains a little bit of it to us), and trust that over a generation or two it will find its way into the more general understandings. That's a comforting and easy role to take, but it leaves us little excuse for complaining that some pseudo-historians advise presidential candidates while the rest of us get to advise freshmen on how to raise their grade from a D to a C.

As usual, I have no answers, only questions.

Monday, May 9, 2011

Standards of Citation and the Internet

Bland Whitley

Why do we cite sources? I imagine that for most of us, annotating work has become second nature to such a degree that we rarely think about why exactly we’re doing it. I’ll stress two main reasons, though I’m sure others could think of different rationales. The first is a kind of reflexive establishment of scholarly bona fides. As undergrad and grad students, we were taught to base our arguments on the sources and authorities we consulted—you may vaguely recall those dreary high school assignments that required some minimum number of sources. All of this remains, of course, an essential building block in the development of historical understanding. It is through immersion in a variety of sources that we learn to build arguments out of competing claims and to establish a sense of the relative reliability of different texts and evidence. The second reason grows out of scholars’ relationship with one another. Whether collaborating or arguing, scholars require access to the evidence that informs particular arguments. Although these rationales are not mutually exclusive (they often reinforce one another), the second should command greater respect. Leading other scholars to one’s evidence, so that they can reach similar or very different conclusions, is what citation should deliver. Too often, though, we can all find ourselves practicing a strategy of citation for citation’s sake.

I’ve been thinking about these issues because of an interesting debate that has played out on a couple of listservs over the past two weeks (H-SHEAR, geared toward historians of the early republic, and SEDIT-L, which serves scholarly editors). Daniel Feller, senior editor of the Papers of Andrew Jackson, kicked things off with an impassioned critique of lazy citations of material culled from the web. Singling out a few recent works that quote passages from important addresses Jackson made during his presidency, Feller found that they cited either internet sites of dubious scholarly quality, one of which was no longer live, or obscure older works that neither improved on contemporary versions of the text nor took advantage of the contextualizing annotations of modern editions. Why should this be the case, Feller asked. It’s not hard to find print versions of the original sources for Jackson’s addresses. Indeed, it’s never been easier, as all can be found either on Google Books or through the Library of Congress’s American Memory site. Instead of taking a couple of extra minutes to track down better and more useful source material, the authors had stopped searching once they found the desired text on whatever website seemed halfway professional and then cited the link, no matter that such links frequently have the shelf life of a clementine.

The response to Feller’s post has ranged from attaboys from traditionalists who view the internet as little more than a dumping ground/series of tubes for scholarly quacks, to condemnation of yet another attempt by an academic to marginalize “amateurs.” (Why is it that all listserv conversations seem to devolve into a spat between angry researchers impatient with professional norms and defenders of some mythical historical establishment?) One commentator referred to articles that have analyzed the high percentage of historical citations of websites that have become defunct, a phenomenon known as link rot. Another pointed out that citing a website that may soon go dead isn’t really all that different from citing an unpublished conference paper or oral history—in neither case is the source material truly available to anyone else. Feller, of course, wasn’t really criticizing publishing or citing material on the web. He was warning that the proliferation of source material on the web has degraded historians’ citation standards.

There are two issues at work here. First, how do we handle link rot? This is a conundrum with no easy solution. Increasingly, all people interested in history, scholars and aficionados alike, will be getting much of their information from the web. What is our responsibility for ensuring that others can check our source material? If we have a reasonable expectation that a given website might not be around for very long, should we even bother citing it? If source material becomes problematic simply because of the ephemeral nature of the venue in which it is found, however reputable, how do we convey its legitimacy as evidence? The second issue relates to the question of what constitutes an authoritative text. The web has dramatically expanded researchers’ capacity to obtain and analyze primary and secondary sources—public records, newspapers, transcripts or digitized scans of correspondence, and obscure county histories, formerly accessible to only the most dogged and sophisticated researchers, are now readily available to anyone. But the web has done all this at random. The Eye of Google™ gazes upon some works but not others. Outdated and overly restrictive copyright laws prevent the sharing of many works. Researchers looking for specific texts to buttress their arguments encounter (through the workings of the search engine) sources that they otherwise would never have considered consulting. Before, researchers would have learned which specific sources they needed to look up when seeking the text of, say, the electrifying second annual message of Millard Fillmore. Now, enter a few key words, and voilà: http://www.presidency.ucsb.edu/ws/index.php?pid=29492#axzz1LVft8YVF. Maybe you’re more interested in Fillmore’s controversial third annual message and prefer it from a printed work? Boom: http://books.google.com/books?id=muPv6F0gm1kC&pg=PA209&dq=%22millard+fillmore%22+%22annual+message%22&cd=8#v=onepage&q=%22millard%20fillmore%22%20%22annual%20message%22&f=false

Is the above http address a legitimate source for citation? It’s a well-done, university-backed website, and I can only assume (having neither the time nor inclination to verify) that the text is presented accurately. I certainly wouldn’t hesitate to direct students to it. So why not cite it? Well, what if UC-Santa Barbara loses the site’s funding, or otherwise decides to pull it, and the link goes dead? Can we depend on other researchers to retrieve it from some archived site (the Internet Archive’s Wayback Machine)? What about the printed source, a recent reprint of James D. Richardson (something of the court historian for the nineteenth-century presidency)? Perhaps you’re interested in U.S. relations with Cuba and need to discuss the Fillmore administration’s rejection of British and French entreaties to forswear annexation of the island. That’s covered in the edition (p. 212), so you could cite it as a source. But beware: Google offers only a partial view of the book. Although you might be accurate in locating Fillmore’s rejection of the British-French tripartite arrangement, you’d be obscuring the incompleteness of the edition you consulted. Rather than helping other researchers, the citation would simply reflect the ease with which specific texts can be found on the web. In cases where the source is not unique (unlike, say, a manuscript letter, diary, or newspaper), citation, when it’s necessary at all, should go beyond merely indicating where one viewed the text. It should point readers to the scholarly apparatus that makes the particular source useful and authoritative.

There’s that word again—authoritative. Now we enter the realm of scholarly editors, who take a special interest in presenting historical and literary texts that are built for the long haul. I’m going to go out on a limb and guess that part of Feller’s justified pique grew out of a realization that the Jacksonian scholars he reviewed were not only citing somewhat dubious sources but also failing to consult The Papers of Andrew Jackson. I experience the same frustration in my work with the Papers of Thomas Jefferson. An all-too-common pet peeve is coming across recent scholarship that cites not our series but Paul Leicester Ford’s earlier edition The Works of Thomas Jefferson. Now, there’s nothing wrong with Ford. If one is looking to quote TJ, many of his famous writings are covered in that edition. But Ford’s project was very different from the comprehensive, annotated approach undertaken by modern documentary editions. Not only do modern editions present text more accurately, they present it in context. The primary subjects’ words appear alongside the incoming correspondence that might have prompted them. Annotations connect text to other primary sources, as well as to modern scholarship. There is, in short, a wealth of information, both critical and ancillary, that is useful to readers.

So why do so many people continue to rely on Ford? Because his edition has been scanned into Google Books and is therefore convenient for anyone unwilling or unable to search beyond a desktop. Now, I can understand that a lot of researchers out there may not have the institutional support of a major research library and therefore find it a challenge to get to modern documentary editions. The volumes are expensive, and the work of getting them online (although ongoing) may not occur quickly enough to satisfy everyone—nor does it necessarily lower the price. Still, it seems to me that the facility of the web has encouraged a kind of entitled sensibility among many researchers, who become miffed when something is not available online for free. The kind of scholarship that fills documentary editions costs money, though. Editions may or may not be able to publish online with no expectation of remuneration—university presses do, after all, require some return. The internet, however, has severed the connection between the free consumption of information and its labor-intensive production. Too many researchers, accustomed to getting so much of their information for free from the comfort of the coffee shop, seem increasingly unwilling to do the legwork necessary to gain access to superior sources. Instead they settle for the merely adequate. That’s a shame.

I don’t want to imply that there’s anything wrong with citing material from the web. It’s essential and will increasingly account for much of the information that ends up in our works, particularly as online publication becomes more prominent. We do need to be sensitive to the issue of link rot—the Chicago Manual has some useful hints in this regard, and I am hopeful that archivists and librarians, who are far more advanced in these matters, will come up with some viable solutions. More broadly, the bounty of the internet need not fundamentally alter what we choose to cite as evidence. Standards will and should evolve with the times, but we should not displace one set of works with another simply because the new batch is easily and freely obtainable. Any shift should be based on the responsibility we have to our readers to connect them with the best available sources, print or web-based.