Wednesday, March 30, 2005


Help for a friend

Jason Kuznicki and I have a mutual graduate student friend who could use your help. Jason has the details at Positive Liberty.


Spring mix

Since I skipped last Friday's shuffle, I thought I would join Scrivener and Geeky Mom by offering this Spring Mix, in honor of the first morning that I was able to work with my windows open.

1. "Joy Spring," by Clifford Brown, from Clifford Brown and Max Roach
2. "Will O' The Wisp," by Miles Davis, from Sketches of Spain
3. "It Might as Well Be Spring," by Sarah Vaughan, from Sarah Vaughan
4. "Serenade to a Cuckoo," by Roland Kirk, from I Talk with the Spirits
5. "Sweet Honey Bee," by Duke Pearson, from Sweet Honey Bee
6. "Wild Flower," by Wayne Shorter, from Speak No Evil
7. "I'll Remember April," by Clifford Brown, from At Basin Street
8. "Fleurette Africaine," by Duke Ellington, from Money Jungle
9. "Mating Call," by Tadd Dameron, from Tadd Dameron with John Coltrane
10. "Pennies from Heaven," by J. J. Johnson, from The Eminent J. J., Vol. 2
11. "Some Other Spring," by Roy Haynes, from Out of the Afternoon

It was exceptionally easy to put together a spring list from my jazz collection; this mix was shuffled from a playlist of about two dozen songs. Autumn would also be a cinch ("Autumn Leaves," "Autumn in New York," "September Song," etc.), but winter would be tougher. Summertime might be hardest of all, aside from endless renditions of the obvious tune. I guess jazz musicians aren't big on summer: even "Summertime" praises the season ironically. Don't believe it when it says that the "living is easy." I wonder if this has anything to do with the fact that most city jazz clubs in the golden age of jazz had to close for parts of the summer for lack of ventilation and air conditioning. On second thought, nah: I'm not a structuralist when it comes to musicology. (Yes, I've managed to ramble for a paragraph about seasons, but Michael Kammen has written a whole book about them. So there.)

Tags: random10. (These tags provided for the gratification of someone who shall remain unnamed, but whose initials are JM.)

Tuesday, March 29, 2005


On hypocrisy

At various points during the last week, I have felt my sympathies pulled towards every party involved in the Terri Schiavo case, which makes this story a tragedy in the Shakespearean sense: who do you feel most sorry for in Hamlet? Ophelia? Polonius? Gertrude? Hamlet? You can even feel sorry for the bad guy, Claudius, after his "words without thoughts" prayer. Even false piety has pathos, as anyone who has tried to be pious knows.

At various points during the week, I've also felt myself alternately delighted and disillusioned by the blogosphere. One of the entries in my Moleskine was Jeffrey Stout's observation that "the democratic practice of giving and asking for ethical reasons ... is where the life of democracy principally resides." While reading many blog posts on this issue -- too many to catalog exhaustively now -- I have been heartened by this exchange of reasons. Some of the most thoughtful posts I've seen, in no particular order and from a variety of angles, are at Positive Liberty, In Medias Res, Frogs and Ravens, Freespace, Easily Distracted, Gower Street, Michael Bérubé, Cliopatria, and The Weblog. The case has also sparked an interesting debate about "personhood" between Brandon at Siris and Chris at Mixing Memory. I would point to these posts (and others) as examples of the best thing the blogosphere can offer us: a democratic space in which to give and ask for ethical reasons.

But if the life of democracy consists in giving and asking for reasons, the Terri Schiavo case has also highlighted the ways in which this life is jeopardized. There are two principal threats to the life of democratic conversation. The first occurs when interlocutors refuse to give reasons, either by accepting no questions or by begging the questions being asked. The second occurs when interlocutors do not ask for reasons, but instead assume they already know the reasons that their opponents would offer. Whenever either of these wrenches is thrown into the gears of democracy, the ramifications quickly spread and become systemic.

Observe. Person A asks Person B if someone in a persistent vegetative state is alive. Person B responds that life is sacred. Person A is frustrated because this answer does not deal with the original question, but she decides to ask for Person B's reasons for believing that life is sacred. Person B is incensed that Person A would even ask such a question and accuses her of wanting to kill people in persistent vegetative states. Person A has heard it all before and concludes that Person B is a religious wingnut. Meanwhile, the life of democracy ebbs away, as the exchange of questions and answers bleeds into the exchange of epithets and accusations.

Of all the interventions that can fatally interrupt a democratic conversation, one of the most serious is the charge of hypocrisy. I have been thinking about this for the past week, because my first instinct about a week ago was to write a post denouncing the hypocrisy of those who claim to champion the "culture of life." I was outraged by the fact that some people could speak with a straight face about the sanctity of life, while simultaneously supporting preemptive war and capital punishment, or while simultaneously failing to support more progressive health insurance policies and efforts to fight the AIDS epidemic. Fired by righteous indignation, I was prepared to compare the "culture of life" to the proverbial sepulchre: whitewashed on the outside, but inwardly full of bones.

I still feel some of that indignation, but I've also started to realize that such a response, as far as it goes, inhibits the giving and asking of reasons. Instead of opening space for conversation, it declares conversations already closed: positions are ossified and assigned epitaphs. When this happens, the fault can be distributed liberally among various parties. As I mentioned above, if some people ossify their opponents' position by failing to ask them about it, others ossify their own position by refusing to entertain requests for such descriptions. But ultimately, what I have to figure out is how to be a better democratic conversationalist myself, and that means starting with me and assessing blame there. And since my knee-jerk reaction to this conversation was to deliver a pox on the houses of hypocrites, I've been spending some time judging how democratic that reflex was.

Not very, I've concluded. And that concerns me, because I am not alone in resorting almost reflexively to the charge of hypocrisy when I find myself in difficult conversations. I won't provide you with links to prove this, because I'm dealing primarily with my conversational faults. But you probably don't need the links -- probably, you've run across the sweeping suspicions of grandstanding, the unmasking of manufactured sympathy or outrage, the charges and counter-charges of partisan spin. Of course, there may be times when such accusations are part of a democratic conversation: at some point, they may be the reason we have to give when we are asked to defend our belief. So I don't want to suggest that accusations of hypocrisy must be ruled out of democratic discourse; arguably, nothing should be ruled out. But I do want to suggest that charges of hypocrisy must be handled with care, deliberately rather than reflexively. When they are thrown without care into democratic deliberations, they often cause the types of system failure I've outlined above.

Diagnosing a hypocrite depends on knowing motives. Literally, a hypocrite plays a part, feigns an attitude that he does not feel, wears a mask. As William Ian Miller argues in the introduction to his entertaining book Faking It, we usually recognize hypocrisy in ourselves when we feel split in two. In these moments, there is one "me" that is performing and posing, while another "me" seems to look on in bemusement at how fake the other "me" is being. But as Miller also notes, it is sometimes difficult, even on self-examination, to tell whether "I" am different from what I appear to be, or whether what I appear to be is all "I" am. Diagnosing hypocrisy is difficult even when it comes to my self, whose motives are relatively transparent to me (or at least the "me" that observes the "me" that performs). How much more difficult is it to say with confidence that others are faking it, putting on an act, disfiguring their faces in shows of mock piety! At the very least, we ought to recognize that this is always hard to judge.

Even more importantly, if diagnosing hypocrisy requires knowing motives, the diagnosis cannot be made as a blunt generalization. For example, it certainly may be true that many members of Congress have been putting on a performance with regard to the Schiavo case. But it hardly follows from this that every supporter of Schiavo's parents is hypocritical or insincere. The reverse is also true: if some pro-life protester outside Schiavo's hospice can be proven hypocritical, or even if some Congressional leader dissimulates, neither fact counts as evidence that every member of Congress interested in the case was interested for base reasons.

Sometimes ripping masks off of hypocrites is called for, but sometimes I feel like our national conversations run like bad episodes of Scooby Doo. The episode starts with the unmasking of some villain. A few minutes later, our heroes run into someone with a similar-looking mask and cheerily begin trying to rip it off too -- only to discover that this is not a mask, but an actual face. Like Shaggy and Scooby, we react by running away screaming. In some ways, there is nothing more frightening than an antagonist who is both hostile to our position and perfectly sincere. But as usually happens in Scooby Doo episodes, we learn that the proper reaction to this fearsome character was not to run away, as Shaggy and Scooby did, but instead to sit down with the maskless man and find out his story, as Velma, Freddy, and Daphne usually did.

I don't want to be mistaken here: I'm not saying that it harms democratic conversation to point out the inconsistencies in a position. But not all inconsistency is hypocritical; people can be unaware of inconsistencies in their positions, while hypocrites are aware of inconsistency but indifferent to it. We are all at various points unaware of inconsistency in our beliefs, and one goal of democratic conversation should be to lead our interlocutors to greater consistency. But when we blanket a position with the charge of hypocrisy, we simply stop at pointing out the inconsistency and attributing it to false motives. Charges of hypocrisy do not move us towards resolving that inconsistency. For example, suppose someone supports preemptive war but also opposes abortion. It could be that this person is a hypocrite, but it could also be that she does not see any inconsistency in her position. In that case, it behooves us to ask her how she makes those positions cohere. If we sense that those positions do not cohere, we ought to offer our reasons for thinking so. But this conversation will be much more difficult if we simply assume -- before the exchange of reasons -- that our interlocutor is wearing a mask, or deliberately hiding bones in a whitewashed tomb.

What I'm calling for (elliptically) is nothing new. Basically, it's the Socratic method. Socrates could really zing his interlocutors by pointing out inconsistencies in their positions. But the reason his dialogues sound so persuasive is that he took those positions seriously; he asked his interlocutors for reasons and then patiently went about showing that those reasons did not add up. To practice this kind of patient criticism, of course, Socrates sometimes had to suppress the suspicion that his interlocutors were being disingenuous.

I remember when I read Plato's Republic for an undergraduate course, the entire class was enthralled by the way that Socrates thrashed Thrasymachus in Book I. Thrasymachus dared to argue that justice was a weakness and a vice, whereas injustice was a strength and a virtue. It takes some time for Socrates to realize that Thrasymachus really holds this position, and is not just pulling Socrates' chain. Socrates finally says "I certainly ought not to shrink from going through with the argument so long as I have reason to think that you, Thrasymachus, are speaking your real mind; for I do believe that you are now in earnest and are not amusing yourself at our expense." Thrasymachus responds: "I may be in earnest or not, but what is that to you? --to refute the argument is your business." Socrates agrees, and gets right back to asking questions.

Socrates' exchange with Thrasymachus demonstrates, ironically, that hypocrisy itself is not as damaging to democratic conversation as charges of hypocrisy. Even if Thrasymachus is not in earnest, even if he is not speaking his "real mind," the conversation can continue, so long as Thrasymachus is willing to answer Socrates' questions and Socrates is willing to ask them. But if Socrates had thrown up his hands in indignation and refused to "refute the argument" simply because it was not offered in earnest, the conversation would have stopped prematurely. (There are all sorts of similar exchanges about honesty and earnestness in Book I; in every case, the conversation continues despite suspicions on both sides that the other is dissembling.) Perhaps one reason Socrates keeps the dialogue going is that it is ultimately hard to tell whether Thrasymachus is in earnest or not. In such cases, it is best for the health of democratic conversation and argument to assume that he is.

This post may be annoying or off-putting in the same way that meta-blogging sometimes is. After all, I've been defending the exchange of reasons, yet I have not myself given reasons for my beliefs about the Schiavo case. If you want to know why I have devoted the post to metadiscourse, I would give three answers: First, my argument doesn't depend on the actual positions being taken up by interlocutors; whatever those positions are, blanket accusations of hypocrisy from either side are usually conversation stoppers, unless handled with extreme care and precision. Second, I've spent the whole post pointing out the weakness of my instinctive response to this tragic case, which indicates that I need to think some more before I can have a more thoughtful response. In comments at various other blogs, I've had different conversations about various aspects of this issue, some of which I think have been reasonably Socratic, and others in which I've phoned it in instead of keeping the conversation going. But in general, I'm still mulling my reasons for belief -- the two "me's" are still dialoguing with each other before either sallies forth to dialogue with you. Finally, most of the thoughts I've had in relation to the specific aspects of this case have been mentioned elsewhere in other conversations. By contrast, there is a pressing need in polarized conversations to step back and think about conversation itself: what is at stake in making sure it survives, and how we can maintain its health.

Perhaps it seems like a cop out to say that I don't have everything figured out in this case, but that is the least hypocritical answer I can think to give you right now. What I can do is link some more. I can point you to two sources that have been useful for me in trying to sort out the various ethical and legal issues involved in the Schiavo case.

First, an ethics program at the University of Miami has an extensive timeline, complete with scanned images of important legal documents. After scouring these documents, it is impossible not to conclude that the court system did its job very well. If you haven't read the 2003 report of Jay Wolfson, a guardian ad litem, I think it offers an exceptional model of judicial reasoning mixed with humane reflectiveness.

I have also thought a lot about an article I read in the February issue of Harper's before the case began receiving round-the-clock media attention. I cannot recommend highly enough Garret Keizer's "Life Everlasting: The Religious Right and the Right to Die." I recommend this article not because I agree with every conclusion it reaches, but because it still has me pondering whether I do. You can judge for yourself which parts of the article are strongest, and which parts slip unnecessarily into charges of hypocrisy.

Friday, March 25, 2005



Death
By George Herbert (1593-1633)
From The Complete English Poems

Death, thou wast once an uncouth hideous thing,
                            Nothing but bones,
            The sad effect of sadder groans:
Thy mouth was open, but thou couldst not sing.

For we considered thee as at some six
                            Or ten years hence,
            After the loss of life and sense,
Flesh being turned to dust, and bones to sticks.

We looked on this side of thee, shooting short;
                            Where we did find
            The shells of fledge souls left behind,
Dry dust, which sheds no tears, but may extort.

But since our Saviour's death did put some blood
                            Into thy face;
            Thou art grown fair and full of grace,
Much in request, much sought for, as a good.

For we do now behold thee gay and glad,
                            As at doomsday;
            When souls shall wear their new array,
And all thy bones with beauty shall be clad.

Therefore we can go die as sleep, and trust
                            Half that we have
            Unto an honest faithful grave;
Making our pillows either down, or dust.

Tuesday, March 22, 2005


My first Moleskine

Recently, Tony and Lorianne have both paid homage to their Moleskine notebooks, and I've been meaning to join in the adulation.

Although I've always carried small notebooks for jotting, I'm a recent convert to Moleskine. In fact, I just finished my first one a few weeks ago. My first Moleskine dates back to a research trip in Boston last March, and below I've scanned some of the pages from it. Since starting this small Moleskine, I've also started a larger Moleskine journal which functions like a private blog. And in addition to these two Moleskines, I keep another notebook exclusively as a commonplace book for quotations that I like.

Some of the notes on these pages will hopefully end up on this blog in a more developed form. There are notes on Jeffrey Stout's Democracy and Tradition, which I've been wanting to post about for some time -- at least since I took these notes last summer. There are also some notes on David Miller's On Nationality. There are some thoughts on how we think -- or don't think -- about the future since September 11, inspired by my reading of Wendell Berry's essay, "The Loss of the Future." And there are random dissertation notes, book titles, money notes, etc. Welcome to my disjointed world.

Monday, March 21, 2005


Teaching texts

In a thought-provoking recent post, the Little Professor writes that "literary scholars ... study how texts work." Historians, on the other hand, "study how texts exist." These two scholarly endeavors, of course, overlap and complement each other. But as the post goes on to say, "it's very difficult to make the historian cohabit peacefully with the literary critic."

It is especially difficult to make them cohabit in the classroom.

Discussion-based history classes are usually organized around historical texts -- novels, autobiographies, slave narratives, and so on. We assign such texts to students partly as primary sources. Their existence tells historians something about the times in which they were produced. Yet we also want students to approach these texts like literary scholars, to think about how texts work. The difficulty, for teachers and students alike, is to approach texts in both ways at the same time.

Consider a syllabus favorite like Frederick Douglass' Narrative. On the one hand, the text serves well as a window onto the experience of enslavement in antebellum Maryland. On the other hand, the Narrative is clearly not mere reportage. Douglass is reporting events that actually happened, but he is also engaged in particular rhetorical projects, which are shaped by still other events in his life and other texts he has read. For instance, Douglass foregrounds gruesome examples of slave women being whipped partly because he knows that antislavery readers expect such examples as part of the genre. Douglass also addresses the Narrative to particular defenses of slavery being offered in the North, interjecting at several points that if slaves sometimes seem contented, they are only pretending to be so for their own safety. His examples are selected and presented not just as episodes in a memoir, but as evidence for an argument.

Yet many students are more comfortable thinking of a text like the Narrative as a report rather than as a rhetorical project. How, then, does a teacher help students analyze the rhetorical and argumentative structure of the text without undermining its value as a piece of reportage?

Often the surest way of helping students to read a text as rhetoric is to present it to them as fictional or false. If you posit some disconnection between actual events and a text, it is easier for students to address the question of how the text "works."

Suppose, for example, you are teaching another syllabus favorite: Olaudah Equiano's narrative. In an earlier comment thread, Timothy Burke and Jonathan Dresner had a brief exchange about Vincent Carretta's hypothesis that Equiano was not born in Africa, as his autobiography suggests, but in North America. This hypothesis is still hotly debated by scholars (I've been reading Adam Hochschild's Bury the Chains, which includes a thoughtful appendix on the debate). But from a pedagogical standpoint, Carretta's hypothesis is useful because it unsettles students' expectations about how the text came to be and where Equiano came from. Once we are open to the possibility that Equiano was not born in Africa, it becomes easier to think about how his representations of Africa work. What conventions of abolitionist literature do they follow? How do they reflect Equiano's views as a Christian? How do they address particular arguments being circulated in the Atlantic World about the "savagery" of native Africans? Raise a question about how the text came to exist, and students eagerly discuss how the text works.

For similar reasons, it is easier to ask students about how proslavery texts work than it is to ask how antislavery texts work, because students are (hopefully!) constitutionally skeptical about the former but inclined to trust the latter.

For example, Catherine Clinton's new biography of Harriet Tubman quotes from a Philadelphian, John Bell Robinson, who published a fierce attack on Tubman after she brought her parents to the North in 1857. His "invective became even more lethal when he launched into a diatribe about [Tubman's] removal of her aging parents from a slave state. Robinson's reasoning was that of a quintessential proslavery apologist: 'Now there are no old people of any color more caressed and better taken care of than the old worn-out slaves of the South ...'" (p. 143).

Here students are likely to have no problem seeing that Robinson's text is doing certain kinds of rhetorical "work." At the very least, Robinson's claims are unlikely to be taken as simple reportage about the treatment of elderly slaves, especially once students learn that Tubman's parents were already free when Tubman brought them North. So Robinson has his facts wrong in more than one way. But then Clinton goes on to point out that "it suited both proslavery and abolitionist camps to portray Harriet's parents as an elderly enslaved couple. One side claimed their dependence upon some fictive master's goodwill, while the other painted the harsh cruelties of whips and chains if they did not escape" (p. 144). Even though Tubman's father had been manumitted in 1840 and her mother had been free since 1855, abolitionists sometimes folded their story into Tubman's other heroic rescues of enslaved family members.

Clinton's point would probably help students see how abolitionist texts "worked." But the lesson learned may come at a high cost. For it would now be easy for students to wonder: "If Robinson was lying and had his facts wrong ... did abolitionists also have their facts wrong?" The realization that Robinson had a rhetorical argument to make helps students call into question his facts. But once you point out that abolitionists also had a rhetorical argument to make, students might wonder whether their facts were wrong too. That is not necessarily bad, but it becomes so if students conclude from this discussion that abolitionists were "as wrong" as Robinson was -- and wrong in the same ways.

What I'm getting at here are old and familiar problems -- about the relationship between authors and audiences, rhetoric and reality, texts and facts. But I'm encountering these problems for the first time from the perspective of a teacher. And I'm worried about the potential pitfalls in the pedagogical methods I've been describing -- using the Carretta hypothesis, for instance, to discuss the rhetorical structure of Equiano's narrative, or pointing out that abolitionists and proslavery apologists alike overlooked the freedom of Tubman's parents because that fact did not serve their arguments.

My worry is that students will learn to associate the idea of "rhetoric" with dissemblance. The strategies I've outlined might reinforce a preexisting sense that rhetoric can be equated with bias, which has an almost universally negative connotation as antithetical to truthfulness.

I remember facing a similar pedagogical challenge when I worked as a tutor in symbolic logic. Any Introduction to Logic course begins by drawing a basic distinction between the validity of an argument and its soundness. An argument is formally valid if the premises entail the conclusion. But a sound argument is a valid argument whose premises are also true. I often found that students had a difficult time understanding the distinction between validity and soundness. The easiest way to help was to present an argument that was valid but clearly unsound. For example ...

If the moon is made of green cheese, then two plus two equals four.
The moon is made of green cheese.
Therefore, two plus two equals four.

Clearly, if the premises to this argument are true, then the conclusion is also true. But in this case, also clearly, the second premise is false. (It throws students for another loop to inform them that the first premise is true, but that's another issue ...) The argument is valid but unsound. Usually such examples help students distinguish between validity and soundness, but inevitably some students will start to think of valid arguments as always unsound. That is, they will associate validity with moons of green cheese.
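For readers who like to see the distinction mechanically, the truth-table test for validity can be sketched in a few lines of Python. This is a hypothetical illustration of the standard method, not part of any actual tutoring materials; the function names (`implies`, `is_valid`) are my own inventions:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion, n_vars):
    """An argument form is valid if every assignment of truth values
    that makes all the premises true also makes the conclusion true."""
    for values in product([True, False], repeat=n_vars):
        if all(prem(*values) for prem in premises) and not conclusion(*values):
            return False  # found a counterexample: true premises, false conclusion
    return True

# The green-cheese argument, with P = "the moon is made of green cheese"
# and Q = "two plus two equals four":
premises = [lambda p, q: implies(p, q),  # If P then Q
            lambda p, q: p]              # P
conclusion = lambda p, q: q              # Therefore, Q

print(is_valid(premises, conclusion, 2))  # True: the form (modus ponens) is valid
```

Note that the check says nothing about soundness: validity is a property of the form alone, while soundness additionally requires the premises to be true in fact, and here the second premise ("the moon is made of green cheese") is false.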

The analogy isn't exact, but the pedagogical problems with teaching texts are similar. You can show students how Equiano's arguments worked by calling into question whether he was born in Africa. But then you risk encouraging them to associate rhetoric with falsehood. And that would be to fail in your original objective, which was to show how even a report that gets its facts right is structured according to certain rhetorical and narrative conventions.

I talk about this as if it is merely a pedagogical problem, but of course it isn't. There are thorny issues of textual representation and rhetoric here that befuddle all historians and literary scholars. But as a beginning teacher, I'm discovering for the first time how especially thorny these problems can be in the classroom. And although I think one goal of education is to model informed and thoughtful befuddlement, confusion does not always signify an appreciation of complexity. Advice from non-beginning teachers (or swifter beginners) would be very much appreciated.

(Cross-posted at Cliopatria.)

Friday, March 18, 2005


Friday shuffle (or is it?)

Wherein I randomize my MP3 player and share with you the results. Unlike last week's inaugural edition, this week's shuffle is from the "Jazz" folder.

1. "Your Lady," by John Coltrane, from The Classic Quartet: Complete Impulse Studio Recordings, Disc 2.
2. "Central Park West," by John Coltrane, from Coltrane's Sound.
3. "After the Crescent," by John Coltrane, from Classic Quartet, Disc 4.
4. "Mingus Fingers," by Lionel Hampton.
5. "It's Easy to Remember," by John Coltrane, from Classic Quartet, Disc 1.
6. "Someday My Prince Will Come," by Bill Evans, from Portrait in Jazz.
7. "Haunted Heart," by Bill Evans, from Explorations.
8. "Mr. P.C.," by John Coltrane, from Giant Steps.
9. "All of Me," by Louis Armstrong.
10. "Everybody's Somebody's Fool," by Red Garland, from Red Garland Revisited.

Now, I know what you're thinking. "Uh, where's the shuffle?" I should explain that my "Jazz" folder is divided into three alphabetical folders, A-Da (for Davis, Miles), De-L, and M-Z. (You in the back, wake up!) Each folder contains roughly 500 songs, but my Archos Jukebox will only load 999 songs onto a playlist. This means that unless I want to go through and pick out particular albums, I usually just put two of the three folders on a playlist and let it rip. That means, though, that particular artists are often highly overrepresented.

Not that I'm complaining. You might gather from this list that I am a connoisseur of Coltrane, and you would be right. And if you like Coltrane too, you'll actually see that this list is still quite a mix, drawing as it does on albums from his early years as a leader on Atlantic (Giant Steps and Coltrane's Sound), as well as albums from the "Classic Quartet" years at Impulse with McCoy Tyner on piano, Elvin Jones on drums, and Jimmy Garrison on bass. By the way, the Classic Quartet box set is worth every penny that you pay for it. It ranges from the early ballad albums on Disc 1 and 2 to the pinnacles of A Love Supreme and Crescent. After A Love Supreme, of course, Coltrane fans start to divide into those who were disappointed by the dissolution of the Classic Quartet (foreshadowed by aptly named songs like "After the Crescent") and those who stand by their man even though he got a little wiggy with the multi-instrumentalist experiments of his last years.

I certainly favor the Classic Quartet era, but I'm open to the later stuff too. Basically, my feelings about Coltrane are much like Bubba Gump's feelings about shrimp: I'll take it any way you serve it up. Coltrane, Coltrane Jazz, Coltrane's Sound, Coltrane Plays the Blues, Olé Coltrane, Thelonious Monk with John Coltrane, Soultrane, The Gentle Side of John Coltrane, Duke Ellington and John Coltrane, The John Coltrane Quartet Plays ... You get the idea.

By the way, No. 11 on this playlist would have been "The Drum Thing," by John Coltrane, from Crescent, and No. 13 or 14 would have been "Lonnie's Lament" from the same. The latter has perhaps the most soulful piano solo ever recorded. Okay, now I'll stop.

Thursday, March 17, 2005



My first article, "The Fourth and the First: Abolitionist Holidays, Respectability and Race," appears in this month's issue of American Quarterly, the journal of the American Studies Association. If your library has a subscription to Project Muse, you can read the article in HTML or in PDF.

I think the longest I've spent on a blog post is about two or three hours. By contrast, this article has been in the works for about four years now, including press time. Extra time certainly does not entail perfection. (Would that it were so!) But hopefully there is some appreciable difference between the two genres!

In a generic sense, journals are still not as interactive and dialogical as blogs. I think there may be good reasons for that. But this post provides the article with a response thread, and I'd be delighted to hear of comments or criticisms. As I've argued before, I'm aware that scholarship does not end when publication begins. Scholarship is drafts all the way down. (I'll leave you to judge whether that means "drafts" in the sense of corrigible manuscripts, or "drafts" in the sense of sudden feelings that wind has broken loose.)

While you are at American Quarterly, it looks like the issue also contains some great articles on Christian heavy metal, polygamy debates, and transnationalism in American Studies.


More bloggy goodness

First, head on over to the fourth History Carnival, courtesy of Blogenspiel.

Then, ooooh and aaaah at my shiny new "recent comments" feature in the sidebar, courtesy of Blogger Hacks. I'm not sure why the recent comments currently appear only on the main page; I have a request in to Ebenezer Orthodoxy, the genius behind my comment system. Also, I'd really like to know whether my page is loading significantly slower for you as a result of the new feature. Ebenezer warns that speed might suffer because the script has to search all of the posts and sort out which comments are most recent. If the load time is really slow, the hack might not be worth keeping.

Wednesday, March 16, 2005


Atlantic bridges falling down, falling down, falling down ...

The Bush administration has demonstrated once again that it will continue to use its political muscle to call the tune in Europe. As Sepoy reports, Paul Wolfowitz will be the next head of the World Bank. It's hard to read the appointment as anything other than a thumbing of the nose at diplomats abroad and Democrats at home.

I mean that. This probably shouldn't be read as anything other than a very undiplomatic and impolitic move. There are two possible ways for antiwar folks like myself to react to this news. One is to see conspiratorial designs in the appointment and prognosticate all kinds of disaster once Wolfowitz gets his hands on the Bank. The other is to see the appointment as another body blow delivered by Bush to international relations between the United States, Europe, and developing countries.

Already these two reactions are being blurred together, even by clear-eyed people like Steve Clemons. But I think it's important to keep them separate. If you don't like what the World Bank stands for, then any appointment to the post could have been seen as conspiratorial. Let's face it: unless the Los Angeles Times had gotten its way and Bush had appointed Bono, the World Bank was going to go on doing the kinds of development projects it does. If those projects are bad, then they were bad before Wolfowitz got there and will continue to be bad once he is there. So I don't buy, at least at first glance, that his neoconservative ideology will make him steer the Bank in a significantly different direction. From the viewpoint of progressives, after all, the Bank was already about as conservative as it could be in its approach to development aid and fair trade. So it's all fine and good to point out that, from an IR standpoint, Wolfowitz was more qualified for the job than other candidates. As long as the World Bank does what it does, would it have been better for developing countries if the job had gone to someone with an even weaker resume than his?

I do buy, however, that Wolfowitz's appointment is very bad news from a diplomatic standpoint, which is basically what Mark Schmitt said last month when rumors about this first leaked out. The opposition line on this should be: Bush is not a man who means what he says, because he says he believes in diplomacy and working with allies, but he refuses to act that way. That's the main consequence of this announcement: whatever bridges we might have been rebuilding with Europe, whether in negotiating a Syrian pull-out from Lebanon or in working to disarm Iran, are immediately weakened if not collapsed. Apart from Wolfowitz's particular beliefs and qualifications, then, the nomination is unwise and, not to put too fine a point on it, mean. Plus, to the extent that Wolfowitz lacks credibility with the international community, he is not a good choice for a job that requires diplomacy.

It's important to hold this line and to keep it separate from the anti-World Bank line. Fox News and the Bush spin doctors would like nothing more than to portray criticism of Wolfowitz as a witch hunt run by anti-globalization "nuts." We can criticize the World Bank and the ideologues behind it, and we can criticize Wolfowitz's appointment to the World Bank, but if we conflate those criticisms, both critiques will suffer for it.

Of course, I also think that the conduct of the war in Iraq should not be rewarded by promoting one of its chief architects to the head position of international organizations. But that move, again, is primarily a sign of the Bush administration's utter disregard for the war's critics. Making that point should be kept separate from the argument that Wolfowitz's neoconservative ideology will spill over into his World Bank management in unique ways.

[I've revised this post at 9:42 p.m. I've been fussing with it because I'm not sure I believe it. Feedback and discussion welcome.]

UPDATES: Others seem to be trying to separate the question of what Wolfowitz would actually do at the World Bank from the question of what his appointment to it does to diplomatic and bipartisan relations. See, for instance, Matthew Yglesias and Daniel Drezner. The New York Times, however, runs an analysis with the headline "Wolfowitz Nod Follows Spread of Conservative Philosophy" -- but without making a very strong case that Wolfowitz's philosophy will be significantly different from that of his predecessor, who was appointed during the Clinton administration. I'm still open to being convinced, though, that Wolfowitz's neoconservatism matters here.

Drezner tries to make the case that Wolfowitz's appointment was not a strongarm move by the Bush administration. That I'm less convinced by. Even if it were true that Bush is not trying to thumb his nose at his critics, that's clearly the perception his critics have, and he had to have known that it would be. In diplomacy, perception matters, which is something that the Bush administration in general and Wolfowitz in particular have been signally unaware of.

Monday, March 14, 2005


Minor crisis

For the past couple of months I've let my normally well-kept coiffure grow to great lengths. So today I called to schedule a haircut.

Narrowly-spread panic ensued when I was told that Jason, my regular barber, was gone. "Gone?" I asked, a frog in my throat. "Yes," came the ominous reply of the receptionist, "he left us about four weeks ago." (And yes, it has been more than four weeks since my last trim.) Where had he gone, I demanded to know. The receptionist professed ignorance, perhaps sensing that I was a customer on the verge of being a former customer. "He just up and left us," she said, vaguely hinting at rumors that Jason had gone to work for his father. No specifics were forthcoming.

This has sparked a minor crisis in my chimney corner. I used to be one of those hapless haircut customers who simply threw my ten or twenty bucks on the counter of the neighborhood cuttery, which more often than not was the kind of place that offered two haircuts for the price of one and threw in a frequent-cut card to boot. But because my former practice was so haphazard, getting my hair cut was always too close to an adventure. Every snip of the scissors brought a wince and a prayer. For about three and a half years now, I've at least known that Jason would be cutting my hair. I could know what I was getting.

But it's not my hair that will suffer most now that Jason is apparently gone. Mine is certainly not a particularly difficult wig to work with. What I'm worried about is going through the complicated process of finding another barber who understands the fine balance between awkward silences and too much conversation. Most people think of the dentist's chair as the most uncomfortable place for small talk, but I've never been good at small talk in the barber's chair either. In my days of wilderness wandering from one neighborhood cuttery to the next, I could expect about once a month to cycle through the usual questions, beginning with "What do you do?" and leading inevitably to "So what are you planning to do with that?"

I suppose the ideal hair salon would be something like its Parisian namesake. But in my personal experience, conversation in a salon rarely rises above the level of introductions to regions of the mind -- unless you already know your fellow salonistes. With Jason, conversation usually did rise above small talk, because our introductions were taken care of long ago. Frankly, I feel sort of like I've abruptly lost touch with a friend I knew fairly well. Of course I wish him the best wherever he has gone, but tomorrow at 1:30, when I drag my hirsute self into Gotti's Classic Cut and mumble about my dissertation, I can't say I won't miss the days when I could get a haircut without having to break the ice.

I'm tempted to close by quoting the first line of the Cheers theme, but I'll restrain myself.

UPDATE: I've had it confirmed that Jason has left to work for his father, who apparently sells medical supplies. Jason's replacement is a guy who just moved to the United States a few weeks ago and used to fly helicopters for the Russian military. My appointment, though, was with the owner of the shop. I only mention the Russian because the owner spent the first ten minutes of my haircut trying to talk to the new guy about when he should show up. The questions I was asked during the haircut were, in order: "What's your first name?" and "What do you do?" I volunteered a few things about a new coffee shop and restaurant that are opening in the area, which sparked a little conversation, but not much. At the end of the appointment, I discovered for the first time that Jason had been charging me four dollars less than the usual haircut price. The sticker shock must have shown on my face. I think I'll look somewhere else.


Some thoughts on animal rights

Not too long ago, Harbinger posted a famous quote concerning animal rights from Robert Nozick's Anarchy, State, and Utopia. A longer excerpt of the quote and its context can be found here, at the Animal Rights Library.

Nozick asks, in essence, whether animals can be sacrificed for the sake of human pleasure. If snapping your fingers to your favorite jazz tune maximized your happiness, but you also knew that the snap of your fingers instantly killed 10,000 animals, would it be morally wrong to snap? Or suppose that you gained pleasure from swinging a baseball bat, but it so happened that a cow's head was in the path of your bat. Would it be morally wrong to swing?

Some people have trouble with the plausibility of both examples, but Nozick points out that arguments for hunting as pleasurable exercise are not much different from the bat-swinging example. And we could also modify the example so that it put pressure on the moral intuitions of people who do not hunt but who do wear leather or eat meat: Suppose "the animal is killed to get a bone out of which to make the best sort of bat to use; bats made out of other material don't give quite the same pleasure. Is it all right to kill the animal to obtain the extra pleasure that using a bat made out of its bone would bring? Would it be morally permissible if you could hire someone to do the killing for you?"

The direction in which these examples point should be clear: If eating meat is not necessary for health, and only gives "gustatory" pleasure, then are we justified in killing animals in order to maximize that pleasure? And is there any significant moral difference between killing a cow to eat it (for pleasure) and killing a cow to swing a baseball bat (for pleasure)?

I am not a vegetarian. But there was a time in my life when I would have viewed Nozick's arguments more dismissively than I do now.

For one thing, I once thought that these examples were needlessly sensationalistic and maudlin. I basically held, at least subconsciously, that the processes that put meat on my plate were nothing like swinging a baseball bat at a cow's head. That view has changed over the years, especially with my reading of books like Fast Food Nation, a modern-day Jungle which reveals that the treatment of animals in modern slaughterhouses is often not far from the bat-swinging example. In terms of sheer emotional pull, the idea of cattle being force-fed grain and excrement, pumped full of antibiotics, and then killed without being properly anesthetized is as horrific to me as the idea of braining cattle with baseball bats.

So I no longer view Nozick's examples as misleadingly emotional. As for whether eating meat is necessary for health, I also think the evidence supports Nozick's claim that for me, at least, it is possible to obtain all the nutritive value of meat from non-meat sources. (Whether this is true of all people everywhere is another question, which has been recently discussed at Locus Solus.)

Perhaps these changes in my views should have pushed me completely to vegetarianism. I suppose I'm still open to being pushed in that direction. As for now, though, I've adopted what might be called (to borrow from liberation theology) a "preferential option" for plants. I avoid eating meat of uncertain origin, and when buying meat for cooking at home, my wife and I now go to lengths (I won't call them "great lengths") to select organic products that have been produced with attention to animal care. In general, we simply eat less meat than we used to. I suppose that I am the kind of person whom thoroughgoing vegetarians cannot stand, because I perpetuate "the system" while agreeing in principle with many of their claims. I beg their patience and forgiveness, at least for the remainder of this post.

Nozick's discussion of animal rights comes in the context of a larger argument about individual rights. Although the passages I've mentioned above are often quoted as proof that Nozick holds animals to have the same moral status as persons, this is not in fact his intent in Chapter 3 of ASU. There is evidence between the lines of the chapter that Nozick would not adopt such a stringent position (see p. 38 if you have a copy handy). Instead, Nozick's reason for raising questions about animal rights is to put pressure on utilitarian or end-goal arguments for the liberal state.

Before the passage on animals, Nozick has been arguing that utilitarianism -- roughly, the idea that one ought to do what is productive of the greatest possible good, even if that means violating individual rights in the short run -- is hopelessly confused. And he is equally opposed to a kind of "rights utilitarianism," which holds that the goal of states is to minimize the number of rights violations, even if that means violating some rights. Whether one takes the goal of the state to be the maximization of happiness or the minimization of rights-violations, both of these views allow the violation of individual rights for the sake of some end goal. As such, both views are contrary to Kant's claim that individuals are ends in themselves, and can never be used as means to even the most worthwhile of ends.

Nozick agrees here with Kant: "Individuals are inviolable." Consequently, Nozick insists that we not view individual rights as parts of some end goal, where their maximization in the end excuses their violation in the present. Rather, we should view rights in their classic negative sense as "side-constraints" on action. My rights constrain your action, says Nozick. Rights, viewed as "side-constraints," say "Don't use people in specified ways." On the other hand, viewing rights as parts of an end goal entails the imperative to "Minimize the use in specified ways of persons as means" (p. 32). So in the passages leading up to his discussion of animals, Nozick has been arguing in favor of the first injunction over the second. He adopts a "side-constraint" view of rights on the grounds that individuals are inviolable, and he imagines an "ultraminimalist" state whose job would be to enforce those side-constraints.

Nozick offers his examples about animals to put further pressure on "end-goal" or utilitarian arguments. His point is that, if we don't accept the idea that animals can be used in any imaginable way in order to maximize our pleasure, how much more should we reject the idea that individuals can be used to obtain end-goals! By and large, Nozick says, most people accept a position that might be characterized as "utilitarianism for animals, Kantianism for humans" -- animals can be used as means to ends, but human persons cannot. Through a variety of intricate arguments, Nozick even calls this position into question, showing that it rests on a questionable hierarchy of beings, in which animals lack certain characteristics that qualify them for the full rights of humans. Nozick asks what would happen if human beings came into contact with higher life forms somewhere in the universe who argued that because they were more developed than we, they were justified in having us for lunch. We would not accept their reasoning in our case, presumably, so why do our moral views accept a similar argument about animals?

Because his primary concerns are elsewhere, Nozick does not pursue this argument to the conclusion that animals, like human individuals, are inviolable. But he has at least raised the question of whether arguments against that claim can be based on "hierarchical" distinctions between animals and humans.

It strikes me, however, that there is a potentially deep inconsistency in the views of even the most comprehensive and Nozickian animal rights activists. For throughout the chapter, Nozick deals only with the question of how human beings treat animals. He never deals with the implications of his arguments for the treatment of animals by animals. He does deal briefly with the argument made by hierarchical thinkers that side-constraints apply only to the treatment of beings within one's own category, but he raises this argument for the sake of rejecting it.

My question, though, is this: If someone follows Nozick's arguments and examples to the conclusion that Kantianism should apply equally to human beings and animals, would one then need to accept that Nozick's ultraminimalist state should protect animals from animals? If a pack of wolves kills a cow, aren't they using that cow as a means? And, on the putative view that "utilitarianism for animals" is wrong, wouldn't that mean that the wolves are violating the cow's rights?

Perhaps that seems like a laughable question, but it is worth raising because it asks animal rights activists to push their arguments to their logical conclusion. If animals have rights which should act as side-constraints on humans, why should those rights not also act as side-constraints on animals?

I can imagine, off the cuff, two responses. First, one could argue that carnivorous animals cannot get the nutritive value of meat from other sources. But if that argument does not relax the side-constraints for human carnivores, why should it for animals?

Second, perhaps one could argue that animals do not possess the sentience or intelligence necessary to understand or accept ethical side-constraints on their action. It seems to me, though, that this (natural) move merely resurrects hierarchical distinctions between animals and human beings, which would undermine Nozick's original arguments against "utilitarianism for animals, Kantianism for humans."

Are there animal rights theorists who have dealt with the question of side-constraints on animal action, as opposed to merely side-constraints on human action? And if not, why not?

(Incidentally, I find Nozick's argumentative use of animals -- is that using them as a means?! -- to probe our moral intuitions about rights to be a brilliant move, and I think it could be equally illuminating in probing the strengths and weaknesses of other political and moral philosophies. Take Rawls' theory of the original position, for instance. Do moral agents behind the veil of ignorance know that they are human? Do they know their species in the original position? If not, why not? On p. 441ff in A Theory of Justice, Rawls essentially argues that moral personhood is a prerequisite for equal treatment, which presumably is the tack that a Nozickian animal rights theorist would also take. As I've said before, though, why does this distinction between animals and humans on the basis of their capacity for moral reasoning not fall prey to Nozick's critique of hierarchical exceptions? Remember the aliens: if they told us that we were not as capable of moral reasoning as they were, would our moral views require us to bare our necks voluntarily for their gustatory delight?)

Saturday, March 12, 2005


Burke on Wolfowitz and war

Timothy Burke has a moving essay up at Easily Distracted, which is appropriately sober in light of the newest revelations of prison abuse by soldiers in Afghanistan. Burke ends his essay with these impassioned paragraphs:
If there is anyone who ought to be deeply, gravely concerned about unwarranted shootings at checkpoints, accidental deaths of civilians, torture in US prisons, killings of surrendered prisoners, it’s the advocates of the war, at least the ones who believe in the Wolfowitz vision as it is represented by Brooks, Hitchens and others. ...

Wolfowitz and his defenders want to convince us that humanity is united by its universal thirst for liberal democratic freedoms, well then, how can they possibly fail to react to injustice or error in Iraq with anything less than the grave and persistent concern they might exhibit in a domestic US context? Where’s the genuine regret, the mourning, the persistent and authentic sympathy? I don’t mean some bullshit one-liner you toss off before moving on to slam Michael Moore again for three or four paragraphs, I mean the kind of consistent attention and depth of compassion that signals that you take the humanity and more signally the rights of Iraqis as seriously as you take the humanity of your neighbors. Only when you’ve got that concern credibly in place, as a fundamental part of your political and moral vision, do you get to mournfully accept that some innocents must die in the struggle to achieve freedom.

The Wolfowitzian defenders of the war want to skip Go and collect $200.00 on this one, go straight to the day two centuries hence when the innocent dead recede safely into the bloody haze of anonymous tragedy. Sorry, but this is not on offer, least of all for them. If they can’t find the time, emotion and intellectual rigor to be as consumed by the case of a blameless mother and father turned into gore and sprayed on their children as they are by what Sean Penn had to say about the war last week, then their entire argument about the war is nothing more than the high-minded veneer of a more bestial and reasonless fury. If Brooks or anyone else wants to rise to toast Paul Wolfowitz, then they’ll have to live up to the vision they attribute to him, and meet the real problems and failures of that vision honestly and seriously.
It seems to me, though, that Burke's challenge can be levelled equally at all defenders of war, and not just neoconservative fans of Wolfowitz. Every decision to wage war is made on the basis of a universally questionable choice to pass over the death of innocents. There is, I agree, a way to pass over these deaths with more pause, with more sobriety, with more honesty and mourning. But why should we be convinced that these expressions of humanity would make the fury of war less bestial and reasonless? If Wolfowitz showed appropriate grief; if Cheney wore the dark coat instead of the parka; if Bush showed up at more funerals; if Hitchens criticized the checkpoint shootings; if Rumsfeld lamented prisoner abuse as the sign of structural problems ... would this signify anything more than stopping for a moment of silence at Go before jumping ahead to the "two centuries hence," when the innocent dead will still have died? Does making their deaths less anonymous make them more acceptable?

In a limited sense, yes. I don't want to underestimate the significance of the "emotion and intellectual rigor" that Burke is calling for, or to dismiss the sincerity of those who both defend the justice of war and genuinely regret its horrors. At the same time, to simply call for more emotional and intellectual engagement with the tragedies of war is ultimately another way of distracting ourselves from the surfeit of suffering that is always caused by war, even when it is conducted as sanely and softly as possible. I favor calling for Wolfowitz's defenders to be saner and to speak in softer tones, but what prevents us from going farther, and calling for the truly sane silence of a world without war?

Perhaps Burke would respond that such a cavalier call for world peace exemplifies the same kind of intellectual sloppiness that characterizes cavalier attitudes towards war. But how can one "rigorously" contemplate the dissolution of bodies into gore without coming to the conclusion that this should never happen, no matter how humanely we lament it when it does?

Not for the first time on this blog, I find myself voicing rather utopian views about war and peace. But if such views strike you as too utopian to take seriously, let me submit this more limited conclusion: Burke's essay suggests that if anyone should be concerned about the injustices of well-intentioned war, it should be Wolfowitz and his defenders. I would suggest, instead, that wars fought by liberal democracies are almost always undertaken for reasons that their defenders see as well-intentioned. The gauntlet that Burke throws down for Wolfowitz, then, lies at the feet of anyone who believes that wars are justified.

Friday, March 11, 2005


Friday shuffle

Can't ... resist ... peer ... pressure. Must ... post ... random playlist on Fridays. Two reasons: I like music. You probably do, too. Maybe there's music you like that I would like, and vice versa. (Okay, that's more like three or four reasons, if you count technically.)

My MP3 player has two main folders: "Jazz" and "Not Jazz." (Is there anything else?) The latter is the only one my wife will listen to. (What can I say? Opposites attract.) So whether you get a jazz list or a non-jazz list will largely depend on whether she's home. (Of course for that reason I prefer the latter).

I'm adding my own emergent feature to this meme. At the bottom of the playlist, I might include some "Stream O' Consciousness" associations inspired by these songs. I associate music very intimately with periods of my life. Last week, I was reading through some of Ralph Waldo Emerson's journals and ran across this, from Journal O (1846): "I should say of the memorable moments of my life that I was in them & not they in me. I found myself by happy fortune in an illuminated portion or meteorous zone, & passed out of it again." I often feel that way about the songs that I have passed through. So, without further ado ...

1. "Sinai to Canaan - Part I," by Chris Thile, from Not All Who Wander Are Lost
2. "There," by The Innocence Mission, from Glow
3. "Poor Places," by Wilco, from Yankee Hotel Foxtrot
4. "Story of My Life," by Loretta Lynn, from Van Lear Rose
5. "Tangent," by Beth Orton, from Trailer Park
6. "He's Gone," by Leona Naess, from Leona Naess
7. "I Am Trying to Break Your Heart," by Wilco, from Yankee Hotel Foxtrot
8. "Don't Panic," by Coldplay, from Parachutes
9. "Song We Used to Sing," by Edie Brickell, from Volcano
10. "Dirt Road Blues," by Bob Dylan, from Time Out of Mind

Stream O' Consciousness: Riding in the car through Montana at night. Driving to Houston about once a week during a summer in which my air-conditioning was broken; only one reason why I would do that. Hearing "Ballerina" on Late Night with Conan O'Brien and buying the CD the next day. Tutoring logic students at Coffee Station; coming home in the rain to the house on Haines.

Thursday, March 10, 2005


And I'm back

Sorry that Spring Break for Mode came a little early this week. It wasn't intentional, but I got snowed by stuff to do. Looking around the neighborhood, I see I'm not alone.


Antislavery scripts: Part III

About six weeks has passed since I posted Antislavery scripts: Part I, a notice of Marilynne Robinson's review of Adam Hochschild's new book on British abolitionism, Bury the Chains. I can hardly believe that about four weeks has passed since I posted Part II. One of the reasons I have been dragging my virtual feet on this series is that I have purchased Hochschild's book, and before concluding these posts I want to have read it.

Since it's obviously been a while, let me review what I've said (for my benefit as much as for yours). In the first post, I noticed that the moral authority of abolitionism is now frequently invoked by contemporary political movements. Hochschild has explicitly said, for instance, that he was drawn to British abolitionists because they seemed to fit the kind of story that he wanted to tell, which was How a Small Group of Embattled Activists Can Change the World. For reviewers like Robinson, this forthright presentism raises the specter of Whiggish history. It seems to shoehorn the history of the antislavery movement into a teleological, hagiographical, and compelling script.

Robinson attempts to complicate this Whiggish history in two different ways. On the one hand, her review of Hochschild's book impeaches the motives of abolitionists by questioning why they were not as active in alleviating the distress of British workers. On the other hand, her review points out the irony that after emancipation, antislavery ideology underwrote British imperialism in Africa, forging or ignoring chains there in the name of knocking off chains elsewhere.

Each of these more complex scripts of British abolitionism represents a different trajectory within academic scholarship on the subject. In Part II, I traced Robinson's first alternate script back to Eric Williams, the brilliant and influential Caribbean scholar who argued in 1944 that the primary causes for British abolitionism were economic, rather than humanitarian. The vestiges of Williams' thesis can be glimpsed in Robinson's suggestion that "one might, without cynicism, look to the economic considerations in play."

But in this post, Part III in the series, I will begin sketching what I see as a major shift in emphasis in antislavery historiography, the influence of which can also be seen in Robinson's somewhat schizophrenic review. This major shift has occurred because most historians have abandoned the major premises of Williams' argument, and have therefore had to find slightly different scripts with which to organize narratives about the rise and fall of British slavery. Finally, in a future post (or posts -- who am I kidding), I think I want to make a point about the present. Hochschild's presentism is not necessarily a problem, I think I will suggest, but the implications of antislavery history for the present are best framed in tales that are both cautionary and complimentary. (I offer these provisionally because, as I said, I have yet to read the book.)

Now that I'm more or less where I started, here's the main point I want to make in this post: Having abandoned Williams' emphasis on economic ideology as the key to understanding British abolition, recent scholars have emphasized nationalist ideology as a motive force behind both the peculiar development of British slavery and its precipitous decline and death in the 1830s.

While scholars have not abandoned entirely Williams' interest in the relationship between capitalism and slavery (and rightly so), they have shifted their interests more to what we might call the moral geography of slavery and abolition. They have become less interested in the economic relationships between industrial capitalism and abolition, and more interested in the political and cultural relationships between the British metropolis and its colonies that made slavery both possible and ultimately objectionable.

Williams argued that slavery was abolished because British industry was weaned off of its profits; capitalists used and then abused the colonial slave economy. Subsequent historians, however, have challenged this thesis, not least of all on economic grounds. Whereas Williams contended that the slave economy was tanking on the eve of abolition, Seymour Drescher and others have argued that it was thriving, and that the gross cost of abolition for the British economy was incredibly high. Many now see British abolition as an act of "econocide," not as an act of economic amputation.

If Williams' critics are right, though, then historians need some new way of explaining how an economically profitable institution came to be seen as problematic -- problematic enough to spark an ultimately successful popular movement against it. In other words, historians have had to view British slavery and its abolition not only through a material lens, as Williams had urged, but also as a cultural, social, and political "problem." This post-Williams direction in antislavery studies owes much to the work of David Brion Davis, who has argued that by the early modern era "slavery had always been more than an economic institution; in Western culture it had long represented the ultimate limit of dehumanization, of treating and regarding a man as a thing" (10). Davis, who is this generation's leading scholar on slavery and antislavery, has successfully encouraged historians to think more about what slavery "represented," instead of just about its economic role. Thanks to his efforts, British slavery is now studied both as a symbol and as a social system.

From 1660 to 1860, slavery functioned as a powerful foil for the collective identity of Britons, whose sense of themselves as a nation was under construction during the same centuries in which British slavery was first raised and then razed. But its functions as a foil changed. From roughly 1660 to 1760, colonial slavery could be seen by most Britons as a powerful foil for the liberties of the metropolis. Slavery was seen as a peculiarity of colonies at the periphery of the empire, and rarely as a characteristic of the empire itself -- it was something "over there" as opposed to "over here." By the end of the 1700s, however, this juxtaposition was difficult to maintain, especially as colonial politics and slavery intruded more directly on life in the metropolis. Changes in British culture and society, beginning in the second half of the eighteenth century, made it seem impossible for slavery and freedom to coexist within the empire, a sense culminating in acts of abolition in the early nineteenth century.

But the destruction of slavery did not destroy its usefulness as a foil. Slavery's expurgation from the colonies transformed it into a foil of Britain entire. No longer was slavery merely the opposite of the metropolis; it was now the opposite of Britain itself. Abolition thus became a potent instrument for British exceptionalism and expansion in the late nineteenth century. Thus, as a broad script for studying British history from 1660 to 1860, we can now follow the dramatic shift from slavery as a foil within the empire to slavery as a foil for the empire.

In the sixteenth and seventeenth centuries, at the same time that African slavery was becoming entrenched in British America, it was becoming virtually extinct within England itself. Villeinage, serfdom and slavery had been waning in Europe since medieval times and were winding down just as colonial slavery took off. Indentured servitude and nascent forms of wage work were slowly deposing labor relationships that bore a closer resemblance to chattel slavery. Meanwhile, as part of a connected cultural trend towards the sacralization of individual action, many political theorists were systematically defending the inviolability of individual liberty. Historians like David Eltis have recently laid a great deal of stress on this ironic problem: Britain was becoming more dependent on slavery in the colonies even as it was professing a stronger commitment to "freedom" in the metropolis.

Clearly, justifications for African slavery could coexist in an English mind with ideas about individual freedom. One reason was because the end of slavery in Europe was driven by opposition to enslaving Europeans, not by insurmountable qualms about coercive labor. As Eltis has argued, New World slavery was rationalized, first of all, by racial conventions which distinguished whites and Christians from non-whites and non-Christians, whose enslavement was thought to be justified. Liberal thinkers could praise autonomy and still endorse human property. As Eltis notes, "early modern Europeans shifted property rights in labor toward the individual," but "this trend was consistent with either free or slave labor. With respect to Europeans it led eventually to the former. As applied to non-Europeans ... it led to the latter" (23).

John Locke's thinking about slavery is a good example of the period's general intellectual ambivalence. As a leading advocate for the sanctity of liberty and ownership, Locke was nonetheless able to condemn the enslavement of Britons without discouraging African slavery. Indeed, he personally profited from investments in colonial slavery. In the first line of Two Treatises of Government, Locke did say that "slavery is so vile and miserable an Estate of Man, and so directly opposite to the generous Temper and Courage of our Nation; that 'tis hardly to be conceived, that an Englishman, much less a Gentleman, should plead for't." But the target of Locke's attack here was absolute monarchism, not plantation slavery in the British West Indies. And what bothered him in the Treatises was not that slavery was always antithetical to freedom, but that it seemed "directly opposite" to the "Temper and Courage of our Nation." It was not indefensible by virtue of one's being human, but indefensible by virtue of one's being an English Gentleman.

Many Britons agreed that slavery was not properly a part of their nation. It was a distant feature of the colonies, out of "Temper" for a freedom-loving country like England. For much of the seventeenth century, the Caribbean colonies were imagined as a place where ordinary rules did not apply. It was a world "beyond the line," a kind of early modern Wild West. Seymour Drescher has argued that "slavery remained far more a geographically than racially conceived system" (16). Even as Britons were repeatedly reminded that slavery was "repugnant to their constitution," they were constantly aware that it existed in a world apart, removed from their constitution's reach by an actual and ethical ocean. According to Drescher, therefore, "a 'Braudelian' sense about the difficulty of overcoming spatial distance is as necessary to understanding the smooth functioning of the slave system as is its economic viability" (23).

The distance between core and periphery was as much imaginary as real. Britons conceived of colonial slavery as they conceived of most colonial excess - it was a social excrescence that had little to do with England or life behind "the line." As James Walvin puts it, the "physical divergence between imperial (West Indian) life and domestic British Society" reflected a sense of cultural difference. The colonies were "as different - as unfamiliar and hostile - as any other alien culture could be" (23-24).

Drescher and Walvin may overstate the extent to which colonial slavery was out of sight and out of mind in the metropolis. But the imagined and physical distances between England's core and periphery help us think fruitfully about what Eltis sees as colonial slavery's "insider/outsider" divide. His dichotomy, after all, implies a more basic one between an "inside" and an "outside." The boundaries that separated racial "insiders" from "outsiders" were generic on one level, but Drescher and Walvin remind us that they were also geographic. Metropolitan Britons never (never, never) would be slaves not only because they were European "insiders," but also because they were "inside" England.

Even the metaphors with which Britons talked about their venerated freedoms - the free "air" or the free "soil" of England - referred to a concrete and particular place. Thus, slavery was not just a fate that could only befall "outsiders." It was also a system that could only fall "outside" of England. In the same century that Britons were working out a division between slave and free labor, they were conceptually drawing a parallel line dividing their colonies and the metropolis. Slavery was becoming a foil for metropolitan England as much as a foil for freedom.

But if Drescher is right that slavery was a "geographically" conceived system, then in order to challenge its continuation, abolitionists had to overcome the presumption that slavery could be left out of mind because it was out of sight. In other words, abolitionists had to reimagine slavery not as "beyond the line," but as a blot on England itself -- even when it was in the colonies. To borrow the title of David Brion Davis' most recent book, abolition was not just about challenging slavery; it was often about "challenging the boundaries of slavery." In the next post, I'll sketch some of the ways in which the boundaries of British slavery began to be challenged at the close of the eighteenth century.

[Disclosure: Some parts of this post were excerpted from an unpublished paper I had previously written for my graduate program.]

Thursday, March 03, 2005



These short posts are fun! Don't worry. Mode for Caleb will soon return to its regularly scheduled tortuous verbosity. (Two adjectives and an adverb for one stilted noun! See, I've still got it!)


Weeping and gnashing

Last week, Jason at Gower Street offered some help to anyone who needs something to weep and gnash teeth about. This doesn't come near the threshold of pain set by Jason's post, but this comment by Edward Liu at Slacktivist set my teeth on edge.
When Ray [Charles] passed, one of the message boards I post to that tends to skew younger had a number of posts that expressed condolences that the blind guy on the Pepsi commercials had died.
Weep and mourn, my friends, for we are indeed living in the last days.


You know you're a blogger when ...

Bloggers you've never met start appearing in your dreams. Last night I dreamed that I was visiting Adam Kotsko, who was still living at home with his mother. I've never met or seen Kotsko before, but in my dream he looked sort of like Alfred E. Neuman, but with a longer face.

I remember asking him what we were going to do over the weekend. He said, "Read. Read books all day." Thanks to the weird transmigration of souls somehow made possible by rapid eye movement, by the end of my dream Kotsko had changed into Eminem. Don't ask me how or why this happened; I'm a fan of Kotsko's blog but not a fan of Eminem's. And I don't know why Eminem and I were hanging out in his (or Kotsko's?) backyard, with him tossing me a tennis ball as I dove to catch it over a swimming pool.

I hereby release Adam Kotsko from any responsibility to notice this post or comment on it, on the grounds that he is probably really freaked out.

Wednesday, March 02, 2005


Caleb the donkey

Okay, so I'm reading one of my favorite new blogs, Query Letters I Love, which is a running tally of "actual, honest to god query letters I've received in Hollywood." I run across this pitch for an animated movie about the "undersized talking donkey" that Jesus gets to ride on Palm Sunday.

Help me out here: Nowhere in the pitch is the donkey's name mentioned, but in the comments, everyone's referring to the donkey as Caleb. What gives?

If you haven't clicked through yet, here's a teaser that will make you: "However, this interferes with the donkey's plans to prove he is a manly donkey and marry one of the master's girlfriend's donkeys. The donkey's adopted father, the Rooster Red, tells the donkey that his master is just using the donkey for his own selfish ends."



Where else but in the blogosphere do I get to masquerade as an early modernist? Someone (thanks, whoever you are) apparently nominated a post from Mode for Caleb for the Carnivalesque on early modern history, which allowed me to discover what looks like a great carnival! How had I missed this before?


Emma Dunham Kelly-Hawkins

Scott McLemee's newest column at Inside Higher Ed discusses the recent discovery by a Brandeis graduate student (go, grad students, go!) that Emma Dunham Kelly-Hawkins, long presumed to be a turn-of-the-century African American novelist, may have been white. Other good posts on the subject can be found at Crooked Timber, Easily Distracted, and The Reading Experience. Ralph Luker also asks whether there is a history scandal here somewhere.

Most of what I would say about this news has been covered in these places, but there are a few tangents that I haven't seen raised elsewhere.

First (and maybe I'm sticking my neck out for saying this), there seems to have been an enormous rush to judgment that the evidence provided by Holly Jackson, the Brandeis graduate student aforementioned, is straightforward proof about Kelly-Hawkins' race. [P.S. The preceding was a sloppy sentence. See comments for clarification.] But it's interesting to me that almost all of the evidence for her "whiteness" hinges on nineteenth-century census records.

This is not the only evidence, of course: there are documentary records at the end of Kelly-Hawkins' life, along with family memories that identify the family as white. But this later evidence could be explained, conceivably, by the family's conscious adoption of "whiteness," a not uncommon attempt in American history to erase African American ancestry from the family tree. Jackson considers this possibility -- that the family was "passing" -- but rejects it on the slender hypothesis that they could not have fooled the census-takers in a small Massachusetts town, where Kelly-Hawkins' family had lived for more than one generation when she was born.

Before accepting this hypothesis [P.S. Another sloppy phrase; imagine reading the sentence without it. See comments for clarification.], I'd like to know more about the way the census was taken in Massachusetts at the time. In Maryland, for instance, my understanding is that the census was usually recorded in the antebellum period by hired census-takers, who went (more or less) from door to door, asking for names and ages. Presumably, they sometimes also asked for "race," since there was a column on the census for recording this, usually "W" for white, "B" for black, and "M" for "mulatto." But the column was usually labelled "color," not "race," and it's highly probable that white census-takers often simply identified a person's "color" with their own eyes. That is, if a person looked white, the census taker could mark down his "W" and move on, regardless of the person's own identification of himself or herself.

Again, I don't know whether this was the way the census was taken in Kelly-Hawkins' case, but it's a question worth raising. [P.S. The preceding sentence gets closer to the main point I wanted to make.] I also don't know whether census takers necessarily knew the locals, as Jackson seems to assume. But my main point here is to question how easily many people still seem to take the census as a final arbiter of vital statistics or racial identities, when the census itself was a document shot through with ambiguity and power relationships. (Martha Hodes at New York University had an excellent article on these subjects two years ago in the American Historical Review, "The Mercurial Nature and Abiding Power of Race." The link will only work if you have an individual or institutional subscription to the History Cooperative.)

Here I would echo Burke's post that "race" itself is a cultural construction, a marker of identity rather than a signifier of determinate reality. Maybe I'm going out on a limb again by saying that there may have been some value in the scholarship that has tried to deal with Kelly-Hawkins' literature on the assumption that she was a black woman. Most of that scholarship has wrestled with figuring out the "aggressive whiteness" (to use Jackson's phrase) of Kelly-Hawkins' characters. This is the kind of scholarship that has helped us shake free of the idea that "whiteness" is a simple fact that inheres essentially in a person. Rather, "whiteness" could potentially be "aggressive," which means that it is a fluid and shifting concept that can be adopted, to various degrees, by individuals and even groups. Race is, to use Hodes' word, "mercurial." It would be a shame if somehow these new documents about Kelly-Hawkins undid that valuable insight. It would be a step backwards, rather than a step forwards, to reify race again, and make Kelly-Hawkins' (or anyone's) race a simple matter of reading letters in a column.

Second, I agree with McLemee that this new evidence does not mean (as Henry Louis Gates seems to have suggested) that Kelly-Hawkins' mediocre novels are no longer worth studying. McLemee points out that these novels are still worth studying as embodiments of the "banality of evil" and racism at the end of the nineteenth century. I would add, however, that we should not simply assume that because an author was white, he or she had no influence on contemporary African American writers. That seems to me to be a larger problem with the project of constructing a vacuum-sealed canon of African American literature: it assumes that the only tradition that matters for understanding black writers is the tradition built by black writers. Jackson writes, for instance, that Gates originally introduced the series in which Kelly-Hawkins appeared by saying that he had found, in these obscure black writers, "the literary ancestors of Zora Neale Hurston, Alice Walker, and Toni Morrison."

That way of tracing literary ancestry seems to me misguided. Is there any evidence that Alice Walker read Kelly-Hawkins? Probably not. It seems at least more plausible that Kelly-Hawkins' black contemporaries, like Frances E. W. Harper, might have read her work. That would at least be a good question for research. To find out that an author was white does not immediately prove her work's irrelevance to the African American canon, because black authors read white authors, and vice versa. Dickson Bruce has argued, for instance, that the real question for scholars of African American literature is to "investigate the historical conditions for an African American literary enterprise," and that means figuring out the "process in which black and white writers collaborated in the creation of what I call an 'African American literary presence.'" Even if Harper had not read Kelly-Hawkins, her own novels probably had to deal in complex ways with the portrayals of African Americans found in books like Kelly-Hawkins'. More evidence would be needed to show that Kelly-Hawkins had any influence, whether direct in Gates' sense or indirect in Bruce's sense, on the development of the African American canon. To show that she was white does not automatically prove she had no influence at all.

Tuesday, March 01, 2005


Lebanese protests

Mass protests in Lebanon have led to the resignation of Prime Minister Omar Karami, a Syrian loyalist. Let the credit-claiming begin. There are already ripples in the blogosphere (even Belle Waring!) that will likely build into a wave of praise for the Bush administration's foreign policy in Iraq. Democratization in Lebanon will be chalked up to the example of Iraq's elections, just as Qaddafi's dramatic shifts on nuclear disarmament a few years ago were credited to the example of Saddam Hussein's defeat.

My suspicion is that -- just as Qaddafi's disarmament was the product of longer processes of historical change, rather than a spontaneous reaction to a nearby war -- a showdown over Lebanon's relationship to Syria has been brewing for some time. The lines of causation here are complex. The funny thing about suspicions of complexity, though, is that they are usually felt either by very ill-informed or very well-informed people, and I know I fall into the former category when it comes to Lebanon. But Juan Cole falls into the latter, and he argues (in a very lucid and informative post) that the roots to the current events in Lebanon run much deeper than a month or two. I'm very inclined to believe him.

I'm less inclined to believe David Brooks (via Daniel Drezner), who unsurprisingly attributes Lebanon's democratization to the example of Iraqi elections. He quotes the Lebanese Walid Jumblatt: "It's strange for me to say it, but this process of change has started because of the American invasion of Iraq. I was cynical about Iraq. But when I saw the Iraqi people voting three weeks ago, eight million of them, it was the start of a new Arab world." Cole sets Jumblatt in a much longer context, and suggests that this quote is a strategic move to flatter the Bush administration, much like Chalabi did with great success in Iraq. As Cole says, "Jumblatt has a long history of anti-Israeli and anti-American sentiment that makes his sudden conversion to neoconism likely a mirage. He has wanted the Syrians back out since 1976, so it is not plausible that anything changed for him in 2003." Moreover, if Brooks looked more closely at Jumblatt's statement, he might notice that the putative example of Iraq's election points Jumblatt to the "start of a new Arab world," which might not necessarily be meant to signify a newly democratic and pluralistic Middle East.

I am not cynical about Lebanon; in fact, the news reports strike me as very hopeful. But I am cynical about attempts by Americans to take credit immediately for their hope. I have no problem believing that Iraqis might have inspired some of the Lebanese who have taken to the streets, but I have a problem with coding the Iraqi example immediately as an American example. And I also have a problem with ideological attempts to screen out other potentially instructive examples that have little to do with the war in Iraq. The New York Times reports, for example, that "opposition leaders say they have consciously imitated the popular uprising in Ukraine." And I have a problem ignoring those parts of the Lebanese case that demonstrably depart from the history of Iraq in the last two years. The Times also reports, for instance, that Lebanese protesters handed flowers to soldiers. I think I remember that not happening in Iraq, for the obvious reason that many Iraqis see American troops the way that these Lebanese protesters see Syrian troops.

But even the Times goes on to say that the flower distribution provided "scenes reminiscent of protests in the United States in the 1960's." That historical analogy may well hold (I'm ill-informed on this subject, remember), but I wonder. Why must every step forward in the Middle East now be seen as proof that "freedom is on the march" in American boots? Why must we view these complex events through a backwards telescope, so that everything reduces to American example? Brooks writes that the "tendency to imagine new worlds" is America's unique gift to the world. Balderdash. Brooks writes that America is "destined" to provoke the question "Why not here?" He doesn't say whether that destiny is manifest, but I suspect he thinks it is. Hooey. I use these strong words because it is offensive and misguided for Americans to imagine that we are the only people capable of imagining change and hoping for better. That is truly the cynical position, a holdover from the eras of European imperialism and Cold War politics when "Arab" democracies could only be seen as American or European ones.

There is a deep contradiction in the manifest destiny of Brooks and the Bush administration. On the one hand, they say that freedom springs eternal in every human breast. It is a gift from God. It will rise, it will conquer, it will assure the triumph of democracy. On the other hand, when people in distant places do talk of democracy, it must be because they heard it from us, not because they listened to their hearts. Brooks gives this contradiction its most acute expression, but without realizing it's a contradiction:
For the final thing that we've learned from the papers this week is how thoroughly the Bush agenda is dominating the globe. When Bush meets with Putin, democratization is the center of discussion. When politicians gather in Ramallah, democratization is a central theme. When there's an atrocity in Beirut, the possibility of freedom leaps to people's minds.
I thought the possibility of freedom is always leaping to people's minds, put there by God almighty. That's Bush's position, and when he puts it this way he's closer to being right than wrong. But when Bush/Brooks goes on to claim that the spread of freedom is evidence of "how thoroughly the Bush agenda is dominating the globe," he takes back with one hand what he has just given with the other. And he turns from talking about the universal gift of freedom to the particular domination of American "soft" power.

Be glad that things are going so well and so peacefully in Lebanon. Applaud the people there and root for their success. You can even show me evidence that their success has been helped by American foreign policy, so long as you don't isolate that evidence from longer chains of causation. Because using these events as opportunities simply to applaud the Americans is actually a way of turning one's back on the Lebanese, and failing to see them for who they are and what they want.

UPDATE: I highly recommend Paul Musgrave's rejoinder to this post at In the Agora.
