Reverse detail from Kakelbont MS 1, a fifteenth-century French Psalter. This image is in the public domain. Daniel Paul O'Donnell


Customized pronouns: A good idea that makes no sense (Globe and Mail)

Posted: Oct 24, 2016 13:10;
Last Modified: Oct 24, 2016 13:10


---

Originally published as O’Donnell, Daniel Paul. 2016. “Customized Pronouns: A Good Idea That Makes No Sense.” The Globe and Mail, October 15. http://www.theglobeandmail.com/opinion/customized-pronouns-a-good-idea-that-makes-no-sense/article32373933/.


The latest thing on campus is to introduce yourself by name and “preferred pronoun.” “Hello, my name is Dan and I prefer he/him.” Or, short enough for Twitter or the name-sticker at a campus mixer, “Jay (they/them).”

The choice is not limited to “he,” “she,” or “they.” “Ze” or “zie” is common in some communities as an alternative to traditional gender pronouns, as are “ey” and “xe.” Some people even create their own. The performer Justin Vivian Bond prefers “v,” and a student at the University of Michigan chose “his majesty” to protest the fact he was being asked to specify a pronoun at all.

The movement to change pronouns belongs to a history of language engineering that became particularly prominent in the 1970s with the promotion of “Ms.” as an alternative to “Mrs.” or “Miss,” and the development of non-sex-specific terminology such as “firefighter” for “fireman,” and “letter carrier” for “mailman.”

Pronouns, however, are different in that the laws of grammar make them much harder to modify. They do change and they can be engineered (the Swedes recently created a new pronoun, “hen,” to cover “not specifically male or female”). But they cannot be customized.

This is because pronouns are, by their very nature, anti-individual. They are the parts of speech that are used to strip away the semantic qualities that make things, people and ideas unique. Asking people to use a custom pronoun is like asking them to use a custom tense other than past, present, or future. The request simply doesn’t make sense within the rules that govern our language.

This focus on pronouns is odd, moreover, because English already has an egalitarian pronoun for talking to people: “you.” Modern Standard English, unlike many other languages, makes no distinction in direct speech on the basis of age, rank, gender or even whether you are an individual or group. In English, you use “you” to address your employee or your boss, your boyfriend or your girlfriend, an individual or a baseball team. It is the same whether the person you are speaking to is male or female, gender queer, or a group holding a variety of opinions on the question.

A “preferred pronoun,” in contrast, is intended to be used when you are talking about somebody to a third party. But if there is one thing you can’t control, it is how others speak about you. Asking somebody to use a custom pronoun in this context is like asking them to remember that you are a nice person, or getting them to use a preferred nickname. In the end, it’s impossible to control.

The fact that you can’t customize pronouns doesn’t mean that they don’t change. “You,” for example, was not always as universal as it now is. We used to use different forms of “you” to speak to different types of people. “Ye/you” was for when we were talking to a group or wanted to be polite. “Thou/thee” was for individuals when they were our equals or inferiors. Over time, “thou” began to seem a little rude no matter who we were talking to, and we started to use “you” for everybody.

The opposite happened with “he,” “she” and “they.” Originally, all three pronouns began with “h”: “he” for “he,” “heo” for “she,” and “hie” for “they.” Over time, these began to sound alike, until in some dialects you simply couldn’t tell them apart. “He” could mean “he,” “she” or “they.” In this case, we began to reintroduce the lost distinctions of gender and number. We borrowed “they/them” from Norse, and developed a new form, “she,” to distinguish masculine from feminine.

We also began, more than 600 years ago, to use “they” in the singular for situations in which we didn’t know or didn’t want to say whether the person we were talking about was masculine or feminine. Chaucer used “they” this way, as did Shakespeare and Austen. Even The Washington Post began to accept the singular “they” last year.

What this shows is that we don’t really need to engineer pronouns at all. We already have a very neutral one for when we are talking to somebody. And “they” exists to cover cases where the traditional gender binary doesn’t apply.

But even if you feel we do need new forms, it is important to realize that pronouns simply cannot be customized. We might be able to create a few new forms for general use, but they cannot, by definition, be tailored to the individual. Campus regulations that pretend otherwise are simply setting everybody up for disappointment.

----  

Could we design comparative metrics that would favour the humanities?

Posted: Mar 29, 2015 13:03;
Last Modified: Mar 29, 2015 17:03


---

A quick, and still partially undigested, posting on metrics that might favour the humanities over the sciences in “open” competitions. I’m working this out in response to a discussion I had recently with a senior administrator who argued that the University’s tendency to channel resources disproportionately to the Natural Sciences was simply the result of their comparative excellence as measured in “open” competitions.


For a supposed “Liberal Arts” University, the University of Lethbridge is exceptionally bad at supporting the Humanities

As I’ve pointed out before, for a supposed Liberal Arts University, the University of Lethbridge is exceptionally poor in its support for the Humanities. While the Humanities suffer from a lack of resources and attention in comparison to the Social and especially Natural Sciences at all Universities, the University of Lethbridge is a national outlier in the way it has starved its researchers in this area over the last quarter century.

Thus, for example, while our HSS (Humanities and Social Sciences) researchers score at about the 50th percentile on a field-normalised basis in terms of their research impact, we come in fourth-last in terms of our funding success compared to other Humanities and Social Science researchers at Canadian Universities (our natural scientists, in contrast, come in at the top of the bottom third in Canada in terms of both impact and funding success).

Poor performance can be attributed in part to administrative monocultures

There are probably a number of reasons for this mismanagement. But one of them is almost certainly the fact that the University has over the same period been managed almost entirely without participation from Humanists. In the last quarter-century, only two people with a background in the Humanities have been members of our senior administration—and one of these has been a Historian who has been managing our Faculty of Health Sciences. Two years ago, we appointed a classicist as Dean of Arts and Science. This is the first time in 25 years that a Humanist has been in a position to control a budget that actually affects Humanities research.

My argument has been that this lack of disciplinary breadth in our senior administration is largely responsible for our poor support for the Humanities (there have been more administrators from the Social Sciences and, not surprisingly, I would argue, they have tended to do better than the Humanities in terms of gaining resources). It is a natural impulse to find the things you understand more important than the things you do not and an equally natural impulse to unconsciously favour those who share your background and training. Just as our (almost exclusively) male senior administration has tended to find other men to be the most suitable people for vacancies as they have come up, so too an administration that consists (almost exclusively) of natural scientists has tended to think that those are the areas that could make the best use of resources like Canada Research Chairs and Board of Governor Research Chairs (until two years ago, the University of Lethbridge—almost uniquely in Canada—had never appointed a Canada Research Chair in the Humanities and only one in the Social Sciences; it has never appointed a Humanist to a Board of Governors Research Chair).

Or could it be that our Humanists are simply worse than our scientists?

Recently a member of the Senior Administration suggested to me that my analysis of the problem at the U of L was wrong because Research Chairs and similar resources are now being awarded competitively on the basis of open, University-wide, competitions (they used to be simply assigned by the Vice President Academic). If natural scientists are winning these resources, this person's argument went, then it was presumably because they were simply better.

Moreover, the committees that make these awards are interdisciplinary. So it is no longer the case that these resources are being assigned solely by scientist-administrators who know nothing about the domain. While we may not have that many Humanists in our administration, the scientists we do have are being careful to overcome their bias by allowing the different disciplines to compete against each other.

There is no such thing as a truly “open” cross-disciplinary competition

But is there such a thing as a truly “open” competition across disciplines? The skills and activities that make you a good English professor, for example, may not be the same as those that make you a good Biologist. And within our different disciplines, we reward people for different kinds of activities (for an excellent discussion of this, see How Professors Think: Inside the Curious World of Academic Judgment by Michèle Lamont). Given this, it is an open question to what extent the outcome of these competitions is being shaped by the criteria that are being used to adjudicate them.

And, in fact, the criteria we usually use in these cases tend to favour the sciences: publication and citation counts, impact factors and h-indices are all measurements that are better suited to measuring activity in a field that moves quickly and deals in largely incremental and linear development. While there are problems with the use of such metrics even within the Sciences, there is no indication that they represent an adequate method for identifying excellence in other domains.

Using the wrong criteria can reward sub-optimal behaviour and hide excellence

Indeed, it is even possible that they might hide excellence or reward sub-optimal behaviour in some domains, even as they recognise and reward excellence in others. Many Humanities disciplines, for example, treat “the book” and/or lengthy articles as a measure of scholarly maturity. Publication counts—which reward scholars for avoiding synthesis by dividing work into minimum publishable units—are going to be a very poor measure of success in such fields. In English, for example, we tend to see books as being evidence of excellence; somebody who wants to beat a scientist in an open competition in terms of publication counts, however, would almost certainly be better off concentrating on Notes, one of our more minor forms of publication.

Could we turn the tables and create a structural bias in favour of the things that make Humanities research excellent?

All this got me thinking: what would it take to turn the tables in these “open” competitions? That is, what metrics could I come up with that, while seeming neutral, might actually provide a structural advantage to Humanists over Natural Scientists in head-to-head competitions? In the spirit of “notes for further research,” here are a couple of guesses:

  1. Average length of contribution (the L-index). Anybody who has ever sat on a cross-disciplinary promotion committee knows that page count means different things in different disciplines. In many Humanities disciplines, the best work tends to be synthetic: i.e. things that gather together various views and opinions and construct a larger synthesis. This is opposed to many sciences, where short reports of actual results are privileged. Our current use of publication counts privileges fields in which it is possible to think in terms of “minimal publishable units.” But what if we came up with a measure that privileged synthesis? A person who has published a few long works (i.e. has a high average length per publication) is probably a poor scientist; but they are also probably a stronger humanist. I’d be interested to see how we’d do if we started counting the length of publications alongside their number.
  2. Length of citation record. I published my edition of Caedmon’s Hymn 10 years ago. The edition it replaced was published 70 years before that. Both works are still being cited, and indeed my edition has recently been the subject of a major review article. This is not the result of any special excellence on my part or the part of the predecessor edition: it is in fact not uncommon in the Humanities to see references to a “recent” study that is ten to fifteen years old. What this suggests, then, is that length of citation record is probably an important measure of Humanities research success. Once again, it is probably a poor measure of scientific research success—except perhaps in the case of a few ground-breaking examples—where research development is more incremental and linear. But this is also why the h-index (which in practice is a measure of speed of citation rather than longevity of citability) favours scientists over Humanists.
  3. Diachronic citation trend. My edition of Caedmon’s Hymn is also being cited more now than it was when it was first published. In fact, in work I am planning to present this summer, I will show that it takes about 15-20 years for an edition to become “standard” (i.e. cited by everybody). This is probably also true of our greatest and most important works of literary theory, history, and philosophy: it takes a while for syntheses to catch on and influence thinking. Once again, this is opposed to much of the sciences. While some work (again, largely field-changing, fundamental work) probably does have a long and upwardly rising citation trend, I suspect most science publications (including much of the very best work) have a citation half-life—that is to say, their citations fall off with time as the field moves on. While this is probably common in the Humanities too, it is not a good sign there: the best Humanities work gets cited with increasing frequency over time.
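As a thought experiment, the three discriminators above could be sketched in code, with the h-index added for contrast. None of these proposed metrics has a standard definition, so the function names, formulas, and sample data below are all invented for illustration:

```python
# Thought-experiment sketch of the three proposed discriminators, with the
# h-index included for contrast. All names and sample data are invented.

def h_index(citations_per_paper):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations_per_paper, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

def l_index(page_counts):
    """Proposed 'L-index': average length (in pages) per publication."""
    return sum(page_counts) / len(page_counts)

def citation_record_length(citation_years):
    """Years between a work's first and most recent citation."""
    return max(citation_years) - min(citation_years)

def diachronic_trend(citations_by_year):
    """Crude trend: mean citations per year in the second half of the record
    minus the mean in the first half. Positive means citations are rising."""
    years = sorted(citations_by_year)
    mid = len(years) // 2
    first = [citations_by_year[y] for y in years[:mid]]
    second = [citations_by_year[y] for y in years[mid:]]
    return sum(second) / len(second) - sum(first) / len(first)

# An invented 'humanist' profile: a few long works, slowly gathering citations.
pages = [350, 85, 60]
citations_by_year = {2005: 1, 2008: 3, 2011: 4, 2014: 7}

print(l_index(pages))                                   # 165.0 pages per publication
print(citation_record_length(list(citations_by_year)))  # 9 years and counting
print(diachronic_trend(citations_by_year))              # 3.5: rising over time
print(h_index([7, 4, 3]))                               # 3
```

On this invented record, the length and longevity measures look strong while the h-index stays small simply because there are so few publications, which is the asymmetry the list above is gesturing at.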

Maybe the solution is to compare apples to apples

This is all a thought experiment and is for the most part guesswork rather than research-based. But it is fun to wonder what would happen if the U of L redid some of its recent “open” competitions using criteria like the above as the discriminators. Since, I suspect, these criteria are as unfair to scientists as the ones we currently use are to Humanists, I would guess the results would be very different.

Of course the better approach is to avoid “open” competitions altogether and instead proceed on a discipline-normalised basis.

----  

Prayer as a management tool

Posted: Feb 28, 2015 13:02;
Last Modified: Feb 28, 2015 13:02


---

On my way home now from a fascinating and fun two-day visit with Kay Walters to Brigham Young University. I’m going to write more in a little while about some of the great ideas I saw there having to do with research and the Digital Humanities. But I also want to comment on something more systemic that I saw there.

BYU, for those who don’t know, is a Mormon University (in Southern Alberta, which also has a lot of Mormons, we tend to prefer saying LDS over “Mormon”; in Utah, “Mormon” was by far the preferred term, as far as I could see). It is a church-owned, private university with a religious as well as an academic mission (this is, of course, not unusual: Western Universities largely began in the same way, except as Catholic universities, and there are still many universities around the world that have strong ties to various religions).

The connection to the church is visible everywhere on campus. There is a strong dress and conduct code, and one occasionally sees signs reminding people about it. There are religious paintings and iconography throughout the campus. The university has a standing devotional time each week (Tuesdays at 11). There is no coffee or alcohol on campus (both are forbidden by the church).

During our visit, however, the biggest evidence of the church connection (other, perhaps, than the lack of coffee) was the practice of beginning meetings with a prayer.

I was raised as a Catholic (and indeed, attended Catholic high schools and a Catholic college at the University of Toronto) and so to my eye, these were what I would have described as Protestant-style prayers: free-form and (a few set phrases aside) ad hoc rather than regular. At each meeting, the chair would call on a participant in the meeting to stand and offer a brief prayer and the person called upon would then mention some of the blessings the assembled hoped to gain from the meeting: increased knowledge, an opportunity to share views, to teach and learn, to enjoy some food together, and so on.

Our hosts had “warned” us about this practice, since it is certainly not what you tend to find in secular universities. But while I am an atheist, I certainly did not find the practice disconcerting. In fact, if anything, I found it invigorating: not because I felt that it brought blessings upon us, but because I found it to be a wonderful way of both reminding ourselves why we were meeting and what at least one person in the room hoped to get out of our gathering.

Academics are an idealistic bunch on the whole, but they are also purists by inclination and critics by training. One of the reasons academic politics can be so rough is that a significant percentage of us have strong opinions and sharp tongues to go with them. We are often naive, but we are rarely spontaneous or completely unconsidered: we make our livings poking holes in others’ arguments and defending our own; this results in a kind of controlled discourse that is rarely “heart to heart” and can easily fall into a kind of rhetorical cynicism or performance.

What was really nice about this opening prayer was that it provided a brief opportunity at the beginning of a meeting to cut through that. By calling on somebody to offer a spontaneous request for a blessing, and to have a tradition of phrasing that request in terms of what one hoped the group would gain from the gathering, the BYU opening prayer created a moment in which one person in the room explained, in relatively heart-felt and idealistic terms, what we were all doing there.

This got me thinking how wonderful it would be if we could do something similar in a secular context: begin a meeting by having somebody remind us of the big picture in a way that allowed us to remind ourselves of the idealism of our collective endeavour (because, despite the politics, academia is in the end a very idealistic endeavour).

Of course, in this context, our inability to call on God as an “excuse” for this idealistic expression of goals and purpose is a terrific handicap. God provides the context that allows academics in a religious context to break out of their quotidian concerns and practices, even if the result is to establish a tone and frame for the non-religious activity that is about to follow.

But still, I suspect it is something that could be done (and maybe this is what lies behind those famous Japanese group exercises and the like on the factory floor). It might make a really interesting exercise to decide, for a certain period of time, that each meeting would begin with an invocation of some kind: a statement from somebody in the room—preferably somebody other than the chair—as to what they hoped the collective would gain from the gathering.

Probably not something we could ever actually do in as decentralised and non-hierarchical a place as a University. But it would be kind of cool if there was a secular equivalent of the opening prayer: a kind of idealism moment to remind us what we are all about in the end.

----  

More on Aauthors and Aalphabetical placement

Posted: Jul 26, 2014 16:07;
Last Modified: Jul 26, 2014 16:07


---

In an earlier post today, I discussed some of the economic implications of having a last name beginning early in the alphabet in disciplines that traditionally order the authors on multi-author papers alphabetically.

I’ve since looked up the original paper (Einav, Liran, and Leeat Yariv. 2006. “What’s in a Surname? The Effects of Surname Initials on Academic Success.” The Journal of Economic Perspectives 20 (1): 175–88). This is more startling than I thought.

First of all, from the authors’ own description:

In this paper, we focus on the effects of surname initials on professional outcomes in the academic labor market for economists.

We begin our analysis with data on faculty in all top 35 U.S. economics departments. Faculty with earlier surname initials are significantly more likely to receive tenure at top ten economics departments, are significantly more likely to become fellows of the Econometric Society, and, to a lesser extent, are more likely to receive the Clark Medal and the Nobel Prize. These statistically significant differences remain the same even after we control for country of origin, ethnicity, religion or departmental fixed effects. All these effects gradually fade as we increase the sample to include our entire set of top 35 departments.

We suspect the “alphabetical discrimination” reported in this paper is linked to the norm in the economics profession prescribing alphabetical ordering of credits on coauthored publications. As a test, we replicate our analysis for faculty in the top 35 U.S. psychology departments, for which coauthorships are not normatively ordered alphabetically. We find no relationship between alphabetical placement and tenure status in psychology.

We then discuss the extent to which the effects of alphabetical placement are internalized by potential authors in their choices of the number of coauthors as well as in their willingness to follow the alphabetical ordering norm. We find that the distribution of authors’ surnames in single-authored, double-authored and triple-authored papers does not differ significantly. Nonetheless, authors with surname initials that are placed later in the alphabet are significantly less likely to participate in four- and five-author projects. Furthermore, such authors are also more likely to deviate from the accepted norm, and to write papers in which credits do not follow the alphabetical ordering.

Here are the core figures from the paper, comparing top Economics departments (alphabetical ordering) and top Psychology departments (non-alphabetical ordering):

Figure showing distribution of last initials in Top Economic departments.

Figure showing distribution of last initials in Top Psychology departments.

As Figure 1 shows, Economics departments have a definite tendency towards having tenured members with early last names: about 50% of the tenured faculty in the top 5 departments have names beginning with the letters A-G, and by the time you get to O you’ve accounted for about 70% of the faculty. (Tip for economists planning on having affairs at economics conferences: use “John/Jane Doe” rather than “John/Jane Smith”—I’m guessing you’re likely to pull partners from better schools.)

In Psychology, on the other hand, you’re probably at “K” before you get to 50% of the tenured faculty. (The 50% mark in both sets of departments for untenured faculty comes in at about L, suggesting there isn’t such a bias in the case of pre-tenure hires—perhaps because they publish less before they are hired?)
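For what it’s worth, the cumulative-percentage reasoning behind both figures is easy to reproduce: given any list of surnames, you can compute what fraction fall at or before each initial. A minimal sketch, using an invented faculty list (the function name and data are mine, not the paper’s):

```python
# Hypothetical reconstruction of the cumulative-share calculation behind
# figures like these: the fraction of faculty whose surnames begin at or
# before each letter of the alphabet. The faculty list is invented.

from string import ascii_uppercase

def cumulative_share_by_initial(surnames):
    """Map each initial A-Z to the fraction of surnames up to that initial."""
    total = len(surnames)
    initials = [s[0].upper() for s in surnames]
    return {
        letter: sum(1 for i in initials if i <= letter) / total
        for letter in ascii_uppercase
    }

faculty = ["Adams", "Baker", "Chen", "Doe", "Einav", "Garcia",
           "Krugman", "Lamont", "Singh", "Yariv"]
shares = cumulative_share_by_initial(faculty)

print(shares["G"])  # 0.6 (6 of the 10 invented surnames fall in A-G)
print(shares["K"])  # 0.7
```

A department with the Economics-style skew would cross 0.5 by G; one with the Psychology-style distribution would not cross it until around K.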

So all in all, I guess this just means that Paul S. Krugman’s career is even more impressive than it looks: a job at Princeton and a Nobel prize and a last name beginning with “K”? What are the odds? Makes you wonder what’s wrong with Ben S. Bernanke: a last name beginning with “B” and he only makes chairman of the Fed?

Maybe somebody needs to do a study of the influence on academic careers of S. as a middle initial.

----  

A is for Aardvark and author. The economic implications of having a last name with an early letter in the alphabet

Posted: Jul 26, 2014 12:07;
Last Modified: Jul 26, 2014 18:07


---

In many disciplines, when more than one researcher contributes to a paper, the authors are listed in terms of the relative contribution: the first author is assumed to have done the most work, the second the second most, and so on until the last position, which is often as prestigious as first.

In other disciplines, however, the tradition is to order author names alphabetically.

This can be unfair to authors whose names come later in the alphabet, because citation conventions for multiple author contributions usually spell out the names of only the first two or three authors.

But it turns out it can also have career and financial implications. As Marusic, Bosnjak, et al. (see?) report:

Economists calculated that with each letter closer to the front of the alphabet there was an increase in the probability to be tenured at top economy departments and receive professional recognition, as well as a significant increase of 0.41% in estimated salary return for an additional article with alphabetical authorship and a 3.3% chance that 1% lower ranked alphabet letter would increase total and annual publication output in mainstream economics journals (from Marusic, Ana, Lana Bosnjak, and Ana Jeroncic. 2011. “A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship across Scholarly Disciplines.” PLoS ONE 6 (9). doi:10.1371/journal.pone.0023477).

On a positive note, it does give us new insults and methods of praise: “he’s so dumb he couldn’t get hired at Princeton even if his name began with double A!” Or “Rarer than a Zwierski in the Ivy league.”

See my followup: More on Aauthors and Aalphabetical placement

----  

The Lethbridge Journal Incubator: A new business model for Open Access journal publication (Elsevier Labs Online Lectures February 18, 2014)

Posted: Feb 19, 2014 16:02;
Last Modified: Feb 19, 2014 16:02


---

The Lethbridge Journal Incubator: A new business model for Open Access journal publication by Daniel Paul O’Donnell with contributions from Gillian Ayers, Kelaine Devine, Heather Hobma, Jessica Ruzack, Sandra Cowen, Leona Jacobs, Wendy Merkeley, Rhys Stevens, Marinus Swanepoel, and Maxine Tedesco. Elsevier Labs Online Lectures February 18, 2014.

----  

Teaching prescriptive grammar hurts student writing

Posted: Jan 22, 2014 14:01;
Last Modified: Mar 04, 2015 05:03


---

Update: Actually, the chart I was really thinking of can be found here.

The other day in my grammar class, I mentioned an article that reviewed years’ worth of controlled studies into methods of composition instruction. The article I was thinking about was George Hillocks, Jr., “What Works in Teaching Composition: A Meta-Analysis of Experimental Treatment Studies,” American Journal of Education 93.1 (1984): 133–170.


The table I was thinking of in class is from page 157:



I’d overstated this conclusion a little: while teaching grammar was indeed the only thing people did that made student writing worse, I was wrong when I said it had a greater effect in absolute terms than any other method.


On the more general question of whether teaching grammar is effective, here is Hillock’s conclusion:


Grammar: The study of traditional school grammar (i.e., the definition of parts of speech, the parsing of sentences, etc.) has no effect on raising the quality of student writing. Every other focus of instruction examined in this review is stronger. Taught in certain ways, grammar and mechanics instruction has a deleterious effect on student writing. In some studies a heavy emphasis on mechanics and usage (e.g., marking every error) results in significant losses in overall quality. School boards, administrators, and teachers who impose the systematic study of traditional school grammar on their students over lengthy periods of time in the name of teaching writing do them a gross disservice that should not be tolerated by anyone concerned with the effective teaching of good writing. Teachers concerned with teaching standard usage and typographical conventions should teach them in the context of real writing problems (160).


Although you need to be careful, because the results are not always independent, this conclusion has been reached time and time again in different contexts over at least the last forty years. One relatively recent study from an English context is: Dominic Wyse, “Grammar. For Writing? A Critical Review of Empirical Evidence,” British Journal of Educational Studies 49.4 (2001): 411–427.

----  

Schuman is right that we need to get rid of the "College Paper." But wrong when she blames her students for not being able to write them. The "College Paper" has always been an exercise in futility

Posted: Dec 19, 2013 09:12;
Last Modified: Aug 20, 2015 13:08


---

Rebecca Schuman has recently argued in Slate that we should get rid of the College Paper. “Everybody hates college papers,” she writes. “Students hate writing them so much that they buy, borrow, or steal them.”

Instructors, she suggests, hate them even more:

Students of the world: You think it wastes 45 minutes of your sexting time to pluck out three quotes from The Sun Also Rises, summarize the same four plot points 50 times until you hit Page 5, and then crap out a two-sentence conclusion? It wastes 15 hours of my time to mark up my students’ flaccid theses and non sequitur textual “evidence,” not to mention abuse of the comma that should be punishable by some sort of law—all so that you can take a cursory glance at the grade and then chuck the paper forever.

If anything Schuman understates the case. The “college paper” as it is taught in North American universities really is a waste of everybody’s time. It is absolutely time that we got rid of it. Where she is wrong, however, is when she suggests that the problem with the essay is that her students are simply no longer up to writing them. The “college paper” has always been a problem, and students have never been good at them. It is an artificial form that experts and students have been trying to abolish almost from the moment it was first introduced into our universities.

A History of the American “college paper”

Before we get rid of the college paper, we should be clear on what it is we are talking about.

The kind of paper Schuman is describing—the kind of paper that students go through the motions of writing so that instructors can go through the motions of reading—is a relatively recent and very American development. It is a highly formal, template-based approach to argumentation that has its origins in “the daily theme,” a nineteenth-century writing exercise, introduced most influentially at Harvard in 1884, which led ultimately to the development of the famous “five paragraph essay.”

In fact, this form of writing was originally thought of as an introduction to essay writing rather than a form of essay writing in its own right. In his influential 1853 textbook, A Practical Introduction to English Composition, Robert Armstrong drew a distinction between the “Essay,” “wherein the writer is at liberty to follow his own inclination as to the arrangement of ideas,” and the “Theme,” “an exercise in which the subject is treated according to a Set of Heads methodically arranged.” The point of asking students to begin by writing the more restrictive theme, Armstrong suggested, was to prepare them for the greater work of actual essays: “It is desirable… that the pupil, before he attempts the writing of Essays, should be trained to the habits of consecutive thinking; and to this end the Theme, as experience has shown, is admirably adapted.”

Ultimately, however, the theme began to be seen as an end in and of itself. The format became increasingly rigid and the goal more and more about forcing students to think and express themselves in a highly standardized, tightly structured way.

The clearest expression of this new attitude towards composition instruction is found in an astounding 1959 paper by Victor Pudlowski in the English Journal. Identified by Michelle Tremmel as one of the first explicit descriptions of the “five paragraph essay,” Pudlowski’s article explicitly privileges form over all other aspects of his students’ writing. He begins by discussing the “chaotic” attempts typical students produce when asked to write on common composition topics like their pets or their summer vacations. He then explains how he and his colleagues have developed an almost algorithmic template (“we call it a formula”) to force students into writing “well-organized compositions”:

1. We want the student to make one generalization (main point) concerning his composition topic.

2. Then he must reinforce his generalization with three supporting statements. All of this is done in the first (introductory) paragraph.

3. In the next three paragraphs (body), each supporting statement becomes a topic sentence. The first supporting statement is the topic sentence of the second paragraph. The second statement becomes the topic sentence of paragraph three. Supporting statement three is used as the topic sentence of the fourth paragraph.

4. Finally, topic sentences 1-4, when strung together, constitute the concluding paragraph. The generalization from the first paragraph becomes the topic sentence. The other topic sentences are used to support the generalization.

The point, as Pudlowski indicates, is to require students to organise their writing in a specific way. His method has nothing to do with improving their sense of what is interesting or important, or with making them better writers in anything but an organizational sense. It does not “pretend to make unimaginative personalities interesting on pieces of paper.” And, as he freely admits, it is both “rather restrictive” and results in work that “tends to be repetitive.”
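Pudlowski was not speaking loosely when he called the template a formula: the four steps really are algorithmic. A minimal sketch in Python (the function and example sentences are mine, purely illustrative) makes the mechanical character of the recipe plain:

```python
def five_paragraph_theme(generalization, supports):
    """Assemble the skeleton of a Pudlowski-style theme.

    generalization: the single main point of the composition.
    supports: exactly three statements reinforcing it.
    """
    if len(supports) != 3:
        raise ValueError("the formula demands exactly three supporting statements")

    # Steps 1-2: the introduction is the generalization plus its three supports.
    introduction = " ".join([generalization] + supports)

    # Step 3: each supporting statement becomes the topic sentence of a body paragraph.
    body = [support + " [evidence follows]" for support in supports]

    # Step 4: the conclusion strings the topic sentences together again,
    # with the generalization leading.
    conclusion = " ".join([generalization] + supports)

    return [introduction] + body + [conclusion]


paragraphs = five_paragraph_theme(
    "My summer vacation taught me responsibility.",
    ["I held a part-time job.",
     "I looked after my younger brother.",
     "I kept to a schedule."],
)
print(len(paragraphs))  # 5
```

Note that the introduction and the conclusion come out identical, which is exactly the repetitiveness Pudlowski concedes his formula produces.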

With his characteristic honesty, moreover, Pudlowski notes that his students, much like those Schuman is reporting on a half-century later, actually hate being assigned these kinds of papers.

The comments students make about the outline are revealing. Some complain: (1) ‘It ties me down too much’. (2) ‘It takes my style away’. (3) ‘I don’t like putting links in’. (4) ‘I can’t express myself’. (5) ‘I want to write my way’.

But this, he suggests, is also partly the point:

There is, perhaps, a degree of truth to some of these remarks. Students are undoubtedly tied down when they must compose a composition consisting of a specific number of paragraphs. No longer are they able to blunder their way through an assignment by becoming verbose. They can’t shroud themselves in verbiage. This, among other things, is what a student means when he complains that his style is being destroyed. Young people simply do not like being forced to practice the techniques of the craft of writing.

We have always hated them

Almost all North American college students have been taught to write using some version of Pudlowski’s method (the same method is taught outside North America, but far less universally). Not every student is instructed that an essay must have five paragraphs (though experience in the classroom suggests that most students have been told this at some point) and very few seem to be taught his particularly rigid approach to linking the introduction, main body, and conclusion together. All, however, have been drilled on its core elements: an introduction that makes “one generalization (main point),” a main body that is focussed on providing supporting statements for this generalization, and a conclusion that repeats the generalization and supporting statements from the introduction.

The emphasis on this model in composition classes is so strong, indeed, that many students come to see it as being synonymous with writing itself. I have been asked by reporters for the campus newspaper where the thesis has to go in their news articles. And a common question among higher-level students who’ve been asked to write longer papers is how many paragraphs of evidence they should include to fill up the extra pages. What was once seen as a training exercise for the more expressive genre of the essay is now usually taught as an end in itself.

Student complaints about the tediousness and irrelevance of theme-based essay writing could perhaps be dismissed if there were any evidence that the method improved their writing. But instructors too have been aware of the problems with this approach to essay writing for almost as long as they have been assigning such papers. In a lecture on “The Teaching of English Composition: Its Present Status,” read before the English Department of the Maine Teachers Association in October 1916, William Hawley Davis of Bowdoin College mentions some examples of similar negative student comment about theme-based instruction before going on to examine “the very moderate effectiveness of what we are doing in English composition”:

Few branches of educative work are attacked more frequently or more bitterly on the score of inefficiency. As a teacher of secondary school English I was always painfully conscious of its shortcomings; I fancy that all conscientious secondary-school teachers are quite as conscious of them as I was. I have observed that my associates in college teaching discharge quite effectively from year to year whatever responsibility rests upon them to be your mentors in this connection. I also observe, however, that the results of English composition work in colleges are likewise far from satisfactory. Only the totally uninitiated any longer suppose that the conversation and the personal or business correspondence of college students is either precise or elegant. At Harvard College, where, I believe, courses in English composition were first instituted and where able teachers have all along been organizing and conducting work in composition, a committee of the faculty is operating under a vote of the Board of Overseers, a vote which states that students “fail to write correct, coherent, and idiomatic English” and demands the formulation of some plan “for bettering the written and spoken English of Harvard students.” If other institutions in Maine and elsewhere are not confessing to themselves a similar condition, who shall say that it is not because they are in that respect far in the rear of Harvard?

Indeed, the list of problems Davis saw in the writing of the college students of his day closely reflects those seen by Schuman in ours. Reading an advertisement for a composition textbook that was supported by endorsements from professors and students, Davis remarked with sadness on the obviously poor quality of the students’ contributions:

What pierced me through as a co-worker of the instructors of these enthusiastic youths was the crudity of their expression, their appalling awkwardness in using this tool of expression which no student can avoid practice in using and which each has such strong reasons for wanting to use well. Presumably these students had every intention of writing their best in these letters to the editors; presumably each had consciously or unconsciously used the paper as a model-and the paper itself was above reproach. But, you say, they were only students. How expressive that comment! We cannot take pride or even comfort in our position as teachers of a subject recognized everywhere as indispensable until our students write “correct, coherent, and idiomatic English.”

Fixing the college essay

The students Davis is writing about in 1916 belonged to the generation that fought World War I. Few if any are alive today. When modern professors complain about declining educational standards, the golden age they are comparing the current situation to, presumably, is the one Davis and his students inhabited.

But if his students were also making a hash of things, then the problem may lie not with the students but with the exercise. Indeed, since the 1960s, it has been difficult to find any professional educational researchers who are prepared to defend the “college theme” as an adequate approach to teaching composition. In a 2011 review of research since 1907 across thirty academic journals, Michelle Tremmel found three times as many articles arguing against the value of the theme as an exercise as those making the case for its utility.

Indeed, as Pudlowski’s students tried to tell him when he first codified the five paragraph format back in 1959, this formulaic approach to teaching writing is completely counterproductive. The point of writing essays is to research, discover, and communicate to others the things that interest us most about culture, history, and philosophy. Focussing on how students write rather than on what and why they are writing is a terrible way of training students to engage in the kind of “thoughtful analysis” Schuman wishes she saw more of. By breaking their style and forcing them to fit their ideas into a standardized template, what we are really doing is asking students to produce exactly the kind of turgid muck Schuman laments in her essay: “‘arguments’ that are at best tangentially related to the coursework… flaccid theses and non sequitur textual ‘evidence,’ not to mention abuse of the comma that should be punishable by some sort of law.”

The real solution, as Davis argued almost a century ago, is to ask students to write in a context that provides both audience and motivation.

I believe that we do not adopt means ready to hand for making the study seem valuable and important to students. In ordinary composition work, theme writing, I am convinced that the cart is before the horse, or rather the cart is supposed to propel itself. “Write,” we tell the student, and we succeed but poorly in placing any impelling motive behind the effort. If we could first make sure of his desire to communicate something, and should then get him to communicate it appropriately in written composition, much of the labor and drudgery of composing would be unconsciously drawn by the tractor-desire to communicate. The preparation of assembly programs, news as to school and current events, school and community festivals and celebrations, especially those which are historically significant, social events, school notices and advertisements-all of these, skilfully made use of by teachers of English composition, would provide natural, concentrated, and highly effective training in writing. Debating has for some time made use of a powerful artificial incentive to industry and care in composition work; but the field of debating is narrow and by no means free from the perils attaching to intensive cultivation. A very efficient course in composition in Dartmouth College, I am informed, makes use of the journalistic motive throughout its work: the members of the course produce and offer for sale a monthly magazine. On a smaller scale but with, I believe, a satisfactory degree of success I am making use of a similar device at Bowdoin. Now the variety of opportunities for practical motivation of composition work is not so great in college as it is in the secondary school. Surely opportunities are available there in abundance for supplanting the vague motive of learning to avoid errors and to write effectively, with the very real motive of instructing or entertaining others on a given occasion. We cannot wisely neglect these opportunities.

The “college paper,” as it is traditionally taught in North American universities, simply does not provide this type of motivation or opportunity. Students write for a single, private reader (their instructor), and they do so for a reward (grades) that they fear but do not respect. Far better to ask students to write for real audiences on things they actually care about, and to work on correcting and improving their real writing and argumentation rather than forcing them to adopt an artificial style they will only ever use in the college classroom.

Davis, who seems to have been something of an Edwardian nerd, thought that technology might provide opportunities for promoting a suitable kind of publicly engaged writing. In his essay, he asks why physicists get better equipped classrooms than English professors and suggests that composition instructors should not be afraid to ask for such cutting-edge equipment as “a practical duplicator, with supplies; stereopticon equipment, with projectoscope attachment for throwing on the screen a page of theme writing, etc.; a Victrola and educational records… and a printing outfit.” The duplicating machine in particular he considered essential to the task of letting students see what it is like to write for real audiences:

The uses of a duplicating machine are mainly two: first, to bring typical errors and defects-drawn, not from some strange and remote list in a textbook where they are necessarily mingled with much that is not typical, but from the pulsating product of a known fellow-student-to bring these errors or defects vividly before the entire class; and, secondly, to place good composition work where it will secure the only real reward ever given to good composition work-that of being read.

Davis’s technology may be a little dated, but his approach is as contemporary as the “flipped” classroom: in his proposal, students learn to write meaningful work by sharing with and reacting to each other’s contributions. With modern technology, we are in a much better position than he ever was to address the kind of deficiencies he (and Schuman) point to in the traditional “paper.” Learning Management Systems allow us to create virtual textual communities in which students can publish and share their analysis and insight—learning, in the process, about why one writes as much as how. Blogging and microblogging applications allow students to participate in global conversations about the subjects they are studying, and, potentially, to influence debate far more than was possible with even Davis’s largest and most elaborate printing press.

Not all students will be equally interested in (or capable of) participating in such larger discussions in every class they take. Preliminary results from research we have been conducting on the use of blogging and other student-centred approaches to teaching composition, however, suggest that a majority do find composition instruction helpful when it begins by focusing on the students’ own interests and ideas and then helps them develop their explication in reaction to the responses of real audiences.

So in the end, Schuman is right about getting rid of the college “paper.” It really is hated by everybody. And students, on the whole, really do tend to produce some godawful examples.

But she is wrong when she argues that this is the students’ fault or that the solution is to give up on asking them to discuss their thinking in written form. The problem with the “paper” is not the students but the exercise itself. The traditional college paper is an artificial project whose flaws have been apparent from almost the moment it was first introduced into American higher education. We have long known what we need to do to get rid of it. The question is how long it will take before we all “finally admit defeat” and start replacing it with more meaningful assignments.

----  

"First thing we do..." Editorial edition

Posted: Sep 27, 2013 12:09;
Last Modified: Jan 25, 2014 14:01

Tags: , , , ,

---

Seymour Hersh in Salon today about the problem with “report the debate” journalism:

“Our job [as journalists] is to find out ourselves; our job is not just to say ‘here’s a debate’; our job is to go beyond the debate and find out who’s right and who’s wrong about issues,” Hersh said. “That doesn’t happen enough. It costs money, it costs time, it jeopardizes, it raises risks….”

And his solution:

“I’ll tell you the solution: get rid of 90% of the editors that now exist and start promoting editors that you can’t control,” he says. “I saw it in the New York Times. I see people who get promoted are the ones on the desk who are more amenable to the publisher and what the senior editors want, and the troublemakers don’t get promoted. Start promoting better people who look you in the eye and say ‘I don’t care what you say’.”

Mutatis mutandis, I suspect much the same problem and much the same solution is true of Universities as well.

The cause of this is obvious enough: we all believe that the people who believe the same things as us or who agree with us are really quite clever. And when it comes to hiring, we tend to want to hire the clever ones.

What I like about Hersh’s solution, however, is that it proposes an algorithmic solution to our human desire to reward sycophancy: hire somebody who makes you uncomfortable.

This might not be a good way of running an assembly line, hiring an administrative assistant or organising a dentist’s office. And it might not be a good idea to hire nobody but people who make you uncomfortable. But I suspect occasionally taking a flutter on somebody you can’t control is good for the organisational DNA in knowledge enterprises like Newspapers and Universities.

----  

When everyone’s super… On gaming the system

Posted: May 23, 2012 20:05;
Last Modified: May 23, 2012 21:05

Tags: , , , ,

---

note: first published on the dpod blog

Syndrome: Oh, I’m real. Real enough to defeat you! And I did it without your precious gifts, your oh-so-special powers. I’ll give them heroics. I’ll give them the most spectacular heroics the world has ever seen! And when I’m old and I’ve had my fun, I’ll sell my inventions so that everyone can have powers. Everyone can be super! And when everyone’s super… [chuckles evilly] no one will be.

The Incredibles

Here’s a funny little story about how a highly specialised journal gamed journal impact measurements:

The Swiss journal Folia Phoniatrica et Logopaedica has a good reputation among voice researchers but, with an impact factor of 0.655 in 2007, publication in it was unlikely to bring honour or grant money to the authors’ institutions.

Now two investigators, one Dutch and one Czech, have taken on the system and fought back. They published a paper called ‘Reaction of Folia Phoniatrica et Logopaedica on the current trend of impact factor measures’ (H. K. Schutte and J. G. Švec Folia Phoniatr. Logo.59, 281–285; 2007). This cited all the papers published in the journal in the previous two years. As ‘impact factor’ is defined as the number of citations to articles in a journal in the past two years, divided by the total number of papers published in that journal over the same period, their strategy dramatically increased Folia‘s impact factor this year to 1.439.

In the ‘rehabilitation’ category, shared with 26 other journals, Folia jumped from position 22 to position 13.

—Tomáš Opatrný, “Playing the system to give low-impact journal more clout.” Nature 455, 167 (11 September 2008).
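The arithmetic behind the trick follows directly from the definition quoted above: one article that cites every paper the journal published in the two-year window adds one citation per paper to the numerator while leaving the denominator untouched, so the factor rises by exactly 1.0. A sketch (the paper and citation counts are invented for illustration; the published figures depend on which years’ citations are counted):

```python
def impact_factor(citations, papers):
    """Citations this year to articles from the previous two years,
    divided by the number of articles published in those two years."""
    return citations / papers

# Invented figures: 66 papers in the window drawing 43 ordinary citations,
# chosen to give roughly Folia's 2007 starting factor of about 0.65.
papers, citations = 66, 43
before = impact_factor(citations, papers)

# A single article citing all 66 window papers adds 66 citations
# to the numerator without changing the denominator.
after = impact_factor(citations + papers, papers)

print(round(after - before, 3))  # 1.0
```

This is why a single self-citing article could move a small journal so far up the rankings: for a journal with few papers and few citations, a bump of 1.0 more than doubles the score.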

Assessing (and hence demonstrating) impact is a difficult but important problem in contemporary academia.

For most of the last century, university researchers have been evaluated on their ability to “write something and get it into print… ‘publish or perish’” (as Logan Wilson put it as early as 1942 in The Academic Man: A Study in the Sociology of a Profession, one of the first print citations of the term).

As you might expect, the development of a reward system built on publication led to a general increase in the number of publications. Studies of science publication suggest a growth rate in the number of scientific articles and journals of between 2 and 5% per year since 1907 (a rate that, at its upper end, leads to doubling roughly every 15 years). There is also evidence for a particularly marked rise in numbers after the 1950s.
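The doubling claim is simple compound growth: a quantity growing at annual rate r doubles in ln(2)/ln(1+r) years, so the 2–5% range implies doubling times from about 35 down to about 14 years. A back-of-the-envelope check (the code is mine, not drawn from the studies themselves):

```python
import math

def doubling_time(annual_rate):
    """Years for a quantity growing at a fixed annual rate to double."""
    return math.log(2) / math.log(1 + annual_rate)

for rate in (0.02, 0.05):
    print(f"{rate:.0%} growth doubles in {doubling_time(rate):.1f} years")
```

At 2% the literature doubles roughly every 35 years; at 5%, roughly every 14.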

This kind of growth vitiates the original point of the metric. If everybody publishes all the time, then the simple fact of publication is no longer sufficient as a proxy for excellence. You could count the sheer number of publications—a measure that is in fact widely used in popular contexts to imply productivity—were it not so obviously open to abuse: unless you institute some kind of control over the type and quality of publication, a system that simply counts publications will lead inevitably to an increase in number, and a corresponding decrease in quality, originality, and length.

It is perhaps for this reason that modern peer review systems began to be institutionalised in the course of the second half of the last century. In fact, while peer review is commonly understood to be the sine qua non of university research, and while it is possible to trace sporadic examples of activity resembling peer review back into the classical period, peer review in its modern form really only began to take shape in the period from the 1940s to the 1970s. Major scientific journals, including Science and The Journal of the American Medical Association, for example, began to make systematic use of external reviewers only in the 1940s, partially as an apparent response to the growing number and specialisation of submissions.

As you might expect, the peer review/reward system has itself been gamed. In the same way a reward system built on counting publications leads inevitably to an increase in the number of publications, a reward system built on counting peer-reviewed publications leads, inevitably, to an increase in the number of peer-reviewed publications… and the size and number of the journals that publish them.

Journal impact measurements are a controversial response to the not-surprising fact that peer review has also become an insufficient proxy for excellence. It is still relatively early days in this area (less so in the natural sciences) and there is as yet no complete consensus as to how impact should be quantified. As a result, the measures can still take many forms, from lists of ranked journals, to citation counts, to circulation and aggregation statistics, to, in the case of on-line journals, even more difficult-to-interpret statistics such as bounce and exit rates.

Regardless of how the impact factor debate settles out, however, it is only a matter of time until it too is gamed. Indeed, as the example of Folia Phoniatrica et Logopaedica suggests, it may not even be a matter of time. If you count citations, researchers will start ensuring they get cited. If you rank journals, they will ensure their journals fit your ranking criteria. If you privilege aggregation, the aggregators will be flooded with candidates for aggregation. And it is not clear that commercial understandings of good web analytics are really appropriate for scholarly and scientific publishing.

But the Folia Phoniatrica et Logopaedica example is also interesting because I’m not sure it is a bad thing. I can’t independently assess Opatrný’s claim that the journal is well respected though faring badly in impact measurements, but it wouldn’t surprise me if he was right. And the fact that a single researcher in a single article was able to more than double his journal’s impact score simply by citing every paper published in the journal in the previous two years leaves me… quite happy for him. I doubt there are many people who would consider the article cited by Opatrný to be in some way fraudulent. Instead, I suspect most of us consider it evidence (at best) that there are still some bugs in the system and (at worst) of a successful reductio ad absurdum–similar in a certain sense to Alan Sokal’s submission to Social Text.

None of this means that impact metrics are an intrinsically bad thing. Or that peer review isn’t good. Or that researchers shouldn’t be expected to publish. In fact, in many ways, the introduction of these various metrics, and the emphasis they receive in academia, is very good. Peer review has become almost fully institutionalised in the humanities in the course of my career. When I was a graduate student in the early 1990s, most journals I submitted to did not have a formal explanation of their review policies and many were probably not, strictly speaking, peer reviewed. But it was difficult to tell, and nobody I knew even attempted to distinguish publications on their CVs on the basis of whether or not they were peer reviewed. We were taught to distinguish publications (and the primary metric was still number of publications) on the basis of genre: you separated reviews from encyclopedia entries from notes from lengthy articles. A review didn’t count for much, even if we could have shown it was peer reviewed, and a lengthy article in what “everybody knew” to be a top journal counted for a lot, whether it was peer reviewed or not.

By the time I was department chair, 10 years later, faculty members were presenting me with CVs that distinguished output on the basis of peer review status. In these cases, genre was less important than peer review status. Reviews that were peer-reviewed were listed above articles that weren’t, and journals began being quite explicit about their reviewing policies. The journal I helped found, Digital Medievalist, began from its first issue with what we described as “ostentatious peer review”: we named the referees who recommended acceptance on every article, partially as a way of borrowing their prestige for what we thought was, at the time, a fairly daring experiment in open access publication.

But we did this also because we thought (and think) that peer review is a good thing. My peer reviewed articles are, in almost every case, without a doubt better written and especially better and more carefully argued than my non-peer-reviewed articles. I’ve had stupid comments from referees (though none as stupid as seems to be the norm on grant applications), but there is only one case I can think of where I really couldn’t see how satisfying what the referee wanted would improve things.

And the same is true for publication frequency. On the whole, my experience is that people who publish more (within a given discipline) also tend to publish better. I don’t publish too badly for somebody in my discipline. But most of the people who publish more than me in that same discipline are people I’d like to emulate. It is possible to game publication frequency; but on the whole, even the people who (I think) game it are among our most productive and most interesting scholars anyway: they’d still be interesting and productive even if they weren’t good at spinning material for one article into three.

So what does it mean that Schutte and Švec were able to game the impact measure of their journal with such apparent ease? And what should we say in response to the great uproar (much of it in my view well-founded) about the introduction of journal ranking lists by the ESF and Australian governments in recent years? Obviously some journals simply are better than others–more prestigious, better edited, more influential, containing more important papers. And it is difficult to see how frequency of citation is a bad thing, even if its absence is not necessarily evidence something is not good or not important. I would still rather have a heavily cited article in the PMLA than an article nobody read in a journal nobody has ever heard of.

Perhaps the most important thing is that it suggests, as Barbossa says to Miss Turner in Pirates of the Caribbean concerning the “Pirates’ Code,” that these kinds of metrics should really be considered “more what you’d call ‘guidelines’ than actual rules.” Journals (and articles) that have a high impact factor, lots of citations, and are heavily read are probably to be celebrated. But impact, citations, and subscription are not in themselves sufficient proxies for quality: we should expect to find equally good articles, journals, and scholars with lower numbers in all these areas. And more importantly, we should expect to find that any quantifiable criteria we do establish will almost immediately be gamed by researchers in the field: most people with PhD-level research positions got where they are, after all, because they were pretty good at producing what examiners wanted to hear.

The real issue, then, is that metrics like “impact” or “peer review” or even “quantity” are attempts to use quantitative values as substitutes for qualitative assessment. The only real way of assessing quality is through qualitative assessment: that is to say, by assessing a work on its own merits in relation to the goals it sets itself in terms of audience, impact, and subject matter, including the reasonableness of these goals. An article by an author who is not famous, in an obscure field, in an on-line journal that has no subscribers, and that is not frequently cited may or may not represent poor quality work–in much the same way as might a frequently cited article in a popular field, by a famous academic, in the journal of the main scholarly society in a discipline. What is (or should be) important to the assessor is how reasonably each author has defined his or her goals and how well the resulting work has done in relation to those goals.

And this is where academics’ ability to game any other system becomes a virtue. Since there is no single metric we can create that researchers as a group will not figure out how to exploit (and in short order), we should accept that we will simply never be able to propose a quantitative measurement for assessing intrinsic quality. What we can rely on, however, is that researchers will, on the whole, try to present their work in its best light. By asking researchers to explain how their work can best be assessed, and by being willing to evaluate both that explanation and the degree to which the work meets the proposed criteria, we can find a way of comparatively evaluating excellence. Journals, articles, and researchers that define, then meet or exceed, reasonable targets for their disciplines and types of work are excellent. Those that don’t, aren’t.

And in the meantime, we’ll develop far more innovative measurements of quality.

----  

Byte me: Technological Education and the Humanities

Posted: Dec 20, 2008 14:12;
Last Modified: May 23, 2012 19:05

Tags: , , , , ,

---

Note: Published in Heroic Age 12: http://www.heroicage.org/issues/12/em.php

I recently had a discussion with the head of a humanities organisation who wanted to move a website. The website was written using Coldfusion, a proprietary suite of server-based software that is used by developers for writing and publishing interactive web sites (Adobe nd). After some discussion of the pros and cons of moving the site, we turned to the question of the software.

Head of Humanities Organisation: We'd also like to change the software.
Me: I'm not sure that is wise unless you really have to: it will mean hiring somebody to port everything and you are likely to introduce new problems.
Head of Humanities Organisation: But I don't have Coldfusion on my computer.
Me: Coldfusion is software that runs on a server. You don't need it on your computer. You just need it on the server. Your techies handle that.
Head of Humanities Organisation: Yes, but I use a Mac.

I might be exaggerating here—I can’t remember if the person really said they used a Mac. But the underlying confusion we faced in the conversation was very real: the person I was talking to did not seem at all to understand the distinction between a personal computer and a network server—the basic technology by which web pages are published and read.

This is not an isolated problem. In the last few years, I have been involved with a humanities organisation that distributes e-mail by cc:-list to its thirty-odd participants because some members believe their email system can’t access listservs. I have had discussions with a scholar working on a very time-consuming web-based research project who was intent on inventing a custom method for indicating accents because they thought Unicode was too esoteric. I have helped another scholar who wrote an entire edition in a proprietary word-processor format and needed to recover the significance of the various coloured fonts and type faces he had used. And I have attended presentations by more than one project that intended to do all their development and archiving in layout-oriented HTML.

These examples all involve basic technological misunderstandings by people actively interested in pursuing digital projects of one kind or another. When you move outside this relatively small subgroup of humanities scholars, the level of technological awareness gets correspondingly lower. We all have colleagues who do not understand the difference between a blog and a mailing list, who don't know how web pages are composed or published, who can't insert foreign characters into a word-processor document, and who are unable to back up or take other basic precautions concerning the security of their data.

Until very recently, this technological illiteracy has been excusable: humanities researchers and students, quite properly, concerned themselves primarily with their disciplinary work. The early Humanities Computing experts were working on topics, such as statistical analysis, the production of concordances, and the building of back-ends for dictionaries, that were of no real interest to those who intended simply to access the final results of this work. Even after the personal computer replaced the typewriter, there was no real need for humanities scholars to understand technical details beyond such basics as turning a computer on and off and starting up their word-processor. The principal format for the exchange and storage of scholarly information remained paper, and in the few areas where paper was superseded—such as in the use of email to replace the memo—the technology involved was so widely used, so robust, and above all so useful and so well supported that there was no need to learn anything about it. If your email and word-processor weren't set up at the store when you bought a computer, you could expect this work to be done for you by the technicians at your place of employment or over the phone by the Help Desk at your Internet Service Provider. Nothing about humanities scholars' use of the technology required special treatment or distinguished them from the University President, a lawyer in a one-person law office... or their grandparents.

In the last half-decade, this situation has changed dramatically. The principal exchange format for humanities research is no longer paper but the digital byte—albeit as represented in PDF and word-processor formats (which are intended ultimately for printing or for uses similar to those for which we print documents). State agencies are beginning to require open digital access to publicly funded research. At humanities conferences, an increasing number of sessions focus on digital project reports and the application of new technologies. And as Peter Robinson has recently argued, it is rare to discover a major new humanities project that does not include a significant digital component as part of its plans (Robinson 2005). Indeed, some of the most interesting and exciting work in many fields is taking advantage of technology such as GPS, digital imaging, gaming, social networking, and multimedia digital libraries that was unheard of or still very experimental less than a decade ago.

That humanists are heavily engaged with technology should come, of course, as no real surprise. Humanities computing as a discipline can trace its origins back to the relatively early days of the computer, and a surprising number of the developments that led to the revolution in digital communication over the last decade were led by people with backgrounds in humanities research. The XML specification (XML is the computer language that underlies all sophisticated web-based applications, from your bank statement to Facebook) was edited under the supervision of C. Michael Sperberg-McQueen, who has a PhD in Comparative Literature from Stanford and was a lead editor of the Text Encoding Initiative (TEI) Guidelines, the long-standing standard for textual markup in the humanities, before he moved to the W3C (Sperberg-McQueen 2007). Michael Everson, the current registrar and a co-author of the Unicode standard for the representation of characters for use with computers, has an M.A. from UCLA in Indo-European linguistics and was a Fulbright Scholar in Irish at the University of Dublin (Evertype 2003-2006). David Megginson, who has also led committees at the W3C and was the principal developer of SAX, a very widely used interface for parsing XML, has a PhD in Old English from the University of Toronto and was employed at the Dictionary of Old English and the University of Ottawa before moving to the private sector (Wikipedia Contributors 2008).

Just as importantly, the second generation of interactive web technology (the so-called "Web 2.0") is causing the general public to engage with exactly the type of questions we research. The Wikipedia has turned the writing of dusty old encyclopedias into a hobby much like ham radio. The social networking site Second Life has seen the construction of virtual representations of museums and libraries. Placing images of a manuscript library's or museum's holdings on the web is a sure way of increasing in-person traffic at the institution. The newest field for the study of such phenomena, Information Studies, is also one of the oldest: almost without exception, departments of Information Studies are housed in and are extensions of traditional library science programmes.

The result of this technological revolution is that very few active humanists can now truthfully say that they have absolutely no reason to understand the technology underlying their work. Whether we are board members of an academic society, working on a research project that is considering the pros and cons of on-line publication, instructors who need to publish lecture notes to the web, researchers who are searching JSTOR for secondary literature in our discipline, or the head of a humanities organisation that wants to move its website, we are all increasingly involved in circumstances that require us to make basic technological decisions. Is this software better than that? What are the long-term archival implications for storing digital information in format x vs. format y? Will users be able to make appropriate use of our digitally-published data? How do we ensure the quality of crowd-sourced contributions? Are we sure that the technology we are using will not become obsolete in an unacceptably short period of time? Will on-line publication destroy our journal's subscriber base?

The problem is that these are not always questions that we can "leave to the techies." It is true that many universities have excellent technical support and that there are many high-quality private contractors available who can help with basic technological implementation. And while the computer skills of our students are often overrated, it is possible to train them to carry out many day-to-day technological tasks. But such assistance is only as good as the scholar who requests it. If the scholar who hires a student or asks for advice from their university's technical services does not know in broad terms what they want or what the minimum technological standards of their discipline are, they are likely to receive advice and help that is at best substandard and perhaps even counter-productive. Humanities researchers work on a time-scale and with archival standards far beyond those of the average client needing assistance with the average website or multimedia presentation. We all know of important print research in our disciplines that is still cited decades after the date of original publication. Not a few scholarly debates in the historical sciences have hinged on questions of whether a presentation of material adequately represents the "original" medium, function, or intention. Unless he or she has special training, a technician asked by a scholar to "build a website" for an editorial project may very well not understand the extent to which such questions require different approaches to the composition, storage, and publication of data than those required to design and publish the athletic department's fall football schedule.

Even if your technical assistant is able to come up with a responsible solution for your request without direction from somebody who knows the current standards for Digital Humanities research in your discipline, the problem remains that such advice almost certainly would be reactive: the technician would be responding to your (perhaps naive) request for assistance, not thinking of new disciplinary questions that you might be able to ask if you knew more about the existing options. Might you be able to ask different questions by employing new or novel technology like GPS, serious gaming, or social networking? Can technology help you (or your users) see your results in a different way? Are there ways that your project could be integrated with other projects looking at similar types of material or using different technologies? Would your work benefit from distribution in some of the new publication styles like blogs or wikis? These are questions that require a strong grounding in the original humanistic discipline and a more-than-passing knowledge of current technology and digital genres. Many of us have students who know more than we do about on-line search engines; while we might hire such students to assist us in the compilation of our bibliographies, we would not let them set our research agendas or determine the contours of the projects we hire them to work on. Handing the technological design of a major humanities research project over to a non-specialist university IT department or a student whose only claim to expertise is that they are better than you at instant messaging is no more responsible.

Fortunately, our home humanistic disciplines have had to deal with this kind of problem before. Many graduate, and even some undergraduate, departments require students to take courses in research methods, bibliography, or theory as part of their regular degree programmes. The goal of such courses is not necessarily to turn such students into librarians, textual scholars, or theorists—though I suppose we wouldn't complain if some of them discovered a previously unknown interest. Rather, it is to ensure that students have a background in such fundamental areas sufficient to allow them to conduct their own research without making basic mistakes or suffering unnecessary delays while they discover by trial-and-error things that might far more efficiently be taught to them up front in the lecture hall.

In the case of technology, I believe we have now reached the stage where we need to be giving our students a similar grounding. We do not need to produce IT specialists—though it is true that a well-trained and knowledgeable Digital Humanities graduate has a combination of technological skills and experience with real-world problems and concepts that is easily transferable to the private sector. But we do need to produce graduates who understand the technological world in which we now live—and, more importantly, how this technology can help them do better work in their home discipline.

The precise details of such an understanding will vary from discipline to discipline. Working as an Anglo-Saxonist and a textual critic in an English department, I will no doubt consider different skills and knowledge to be essential than I would if I were a church historian or theologian. But in its basic outlines such an orientation to the Digital Humanities probably need not vary too much from humanities department to humanities department. We simply should no longer be graduating students who do not know the basic history and nature of web technologies, what a database is and how it is designed and used, the importance of keeping content and processing distinct from each other, and the archival and maintenance issues involved in the development of robust digital standards like Unicode and the TEI Guidelines. Such students should be able to discuss the practical differences (and similarities) of print vs. web publication; they should be able to assess intelligently from a variety of different angles the pros and cons of different approaches to basic problems involving the digitisation of text, two- and three-dimensional imaging, animation, and archival storage and cataloguing; and they should be acquainted with basic digital pedagogical tools (course management and testing software; essay management and plagiarism detection software) and the new digital genres and rhetorics (wikis, blogs, social networking sites, comment boards) that they are likely to be asked to consider in their future research and teaching.
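The principle of keeping content and processing distinct is easy to demonstrate concretely. In this minimal Python sketch (the verse fragment and function names are invented for illustration), the text is recorded once, in a structure that describes *what* things are, while separate functions decide *how* it is displayed:

```python
# Content: stored once, in a neutral structure that records what the
# pieces of the text *are* (a title, a sequence of verse lines).
poem = {
    "title": "A Sample Lyric",
    "lines": ["First line of verse", "Second line of verse"],
}

# Processing: independent renderings of the same stored content.
def as_html(doc):
    """Render the document for the web."""
    body = "".join(f"<li>{line}</li>" for line in doc["lines"])
    return f"<h1>{doc['title']}</h1><ol>{body}</ol>"

def as_plain_text(doc):
    """Render the same document for print or plain-text distribution."""
    return doc["title"] + "\n" + "\n".join(doc["lines"])

print(as_plain_text(poem))
```

Because the content never encodes any particular presentation, a new output format (a PDF, an e-book, a citation index) can be added later without re-editing the text itself—which is exactly why layout-oriented HTML makes such a poor archival master format.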

Not all humanists need to become Digital Humanists. Indeed, in attending conferences in the last few years and observing the increasingly diverging interests and research questions pursued by those who identify themselves as "Digital Humanists" and those who define themselves primarily as traditional domain specialists, I am beginning to wonder if we are not seeing the beginnings of a split between "experimentalists" and "theorists" similar to that which exists today in some of the natural sciences. But just as theoretical and experimental scientists need to maintain some awareness of what each branch of their common larger discipline is doing if the field as a whole is to progress, so too must there remain an interaction between the traditional humanistic and digital humanistic domains if our larger fields are also going to continue to make the best use of the new tools and technologies available to us. As humanists, we are, unavoidably, making increasing use of digital media in our research and dissemination. If this work is to take the best advantage of these new tools and rhetorics—and not inadvertently harm our work by naively adopting techniques that are already known to represent poor practice—we need to start treating a basic knowledge of relevant digital technology and rhetorics as a core research skill in much the same way we currently treat bibliography and research methods.

Works Cited

Adobe. n.d. "Adobe Coldfusion 8." http://www.adobe.com/products/coldfusion/

Evertype. 2003-2006. "Evertype: About Michael Everson." http://www.evertype.com/misc/bio.html

Robinson, Peter. 2005. "Current issues in making digital editions of medieval texts—or, do electronic scholarly editions have a future?" Digital Medievalist 1.1. http://www.digitalmedievalist.org/journal/1.1/robinson/

Sperberg-McQueen, C. M. 2007. "C.M. Sperberg-McQueen Home Page." http://www.w3.org/People/cmsmcq/

Wikipedia contributors. 2008. "David Megginson." Wikipedia. http://en.wikipedia.org/w/index.php?title=David_Megginson&oldid=257685665

----  

If I were “You”: How Academics Can Stop Worrying and Learn to Love “the Encyclopedia that Anyone Can Edit”

Posted: Feb 02, 2007 22:02;
Last Modified: May 23, 2012 19:05

Tags: , , , , , ,

---

Original Publication Information: Forthcoming in Heroic Age (2007). http://www.heroicage.org/.

Time Magazine and the Participatory Web

So now it is official: Time magazine thinks the Wikipedia is here to stay.

In its December 2006 issue, Time named “You” as its “Person of the Year” (Grossman 2006). But it didn’t really mean “you”—either the pronoun or the person reading this article. It meant “us”—members of the participatory web, the “Web 2.0,” the community behind YouTube, FaceBook, MySpace, WordPress,… and of course the Wikipedia.

In its citation, Time praised its person of the year “for seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game.” It suggested that the new web represented

an opportunity to build a new kind of international understanding, not politician to politician, great man to great man, but citizen to citizen, person to person.

Actually, as this suggests, Time didn’t really mean “us” either. At least not if by “us” we mean the professional scholars, journalists, authors, and television producers (that is to say, the “pros”) who used to have more-or-less sole responsibility for producing the content “you” (that is to say, students, readers, and audiences) consumed. In fact, as the citation makes clear, Time actually sees the new web as being really a case of “you” against “us”—a rebellion of the amateurs that has come at the expense of the traditional experts:

It’s a story about community and collaboration on a scale never seen before. It’s about the cosmic compendium of knowledge Wikipedia and the million-channel people’s network YouTube and the online metropolis MySpace. It’s about the many wresting power from the few and helping one another for nothing and how that will not only change the world, but also change the way the world changes.

Academic Resistance

This sense that the participatory web represents a storming of the informational Bastille is shared by many scholars in our dealings with the representative that most closely touches on our professional lives—the Wikipedia, “the encyclopedia that anyone can edit”. University instructors (and even whole departments) commonly forbid students from citing the Wikipedia in their work (Fung 2007). Praising it on an academic listserv is still a reliable way of provoking a fight. Wikipedia founder Jimmy Wales’s suggestion that college students should not cite encyclopaedias, including his own, as a source in their work is gleefully misrepresented in academic trade magazines and blogs (e.g. Wired Campus 2006).

And none of this is having any effect. Eighteen months ago, I had yet to see a citation from the Wikipedia in a student’s essay. This past term, it was rare to find a paper that did not cite it, and several of my students asked for permission to research and write new entries for the Wikipedia instead of submitting traditional papers. Other elements of the participatory web mentioned by Time are proving equally successful: politicians, car companies, and Hollywood types now regularly publish material on YouTube or MySpace alongside or in preference to traditional media channels. This past summer, the story of LonelyGirl15 and her doomed relationship with DanielBeast on YouTube became what might be described as the first “hit series” to emerge from the new medium: it attracted millions of viewers on-line, was discussed in major newspapers, and, after it was revealed to be a “hoax” (it was scripted and produced using professional writers, actors, and technicians), its “star” made the requisite appearance on Jay Leno’s Tonight Show (see LonelyGirl15).

Why the Participatory Web Works

The participatory web is growing so quickly in popularity because it is proving to be a remarkably robust model. Experiments with the Wikipedia have shown that deliberately planted false information can be corrected within hours (Read 2006). A widely cited comparison of select articles in the Wikipedia and the Encyclopaedia Britannica by the journal Nature showed that the Wikipedia was far more accurate than many had suspected: in the forty-two articles surveyed, the Wikipedia was found to have an average of four mistakes per article to Britannica’s three (Giles 2006). In fact, even just Googling web pages can produce surprisingly useful research results—a recent study showed that diagnoses of difficult illnesses made by entering information about the symptoms into the search engine Google were accurate 60% of the time (Tang and Hwee Kwoon Ng 2006). In some circumstances, the participatory web may actually prove to be more useful than older methods of professional content creation and dissemination: an article in the Washington Post recently discussed how the United States intelligence community is attempting to use blogs and wikis to improve the speed and quality of information reported to analysts, agents, and decision-makers (Ahrens 2006).

Why Don’t We Like It?

Given this popularity and evidence of effectiveness both as a channel of distribution and a source of reasonably accurate and self-correcting information, the academic community’s opposition to the Wikipedia may come at first as something of a surprise. What is it that makes “us” so dislike “you”?

One answer is that the Wikipedia and other manifestations of the participatory web do not fit very well with contemporary academic models for quality control and professional advancement. Professional academics today expect quality scholarship to be peer-reviewed and contain a clear account of intellectual responsibility. Authorship attributions are commonly found with forms of intellectual labour, such as book reviews and encyclopaedia entries, that were published without attribution as little as fifty years ago. Some scholarly journals are naming referees who recommend acceptance; readers for journals that have traditionally used anonymous reviews are frequently asking for their names to be revealed.

This emphasis on review and responsibility has obvious advantages. While peer-review is far from a perfect system—there have been enough hoaxes and frauds across the disciplines in the last twenty years to demonstrate its fallibility—it is surely better than self-publication: I imagine most scholars benefit most of the time from the comments of their readers. In my experience, the attention of a good acquisitions editor and copy-editor invariably improves the quality of a final draft.

Moreover, peer-review and clear attribution have an important role in the academic economy: they are the main (and usually only) currency with which researchers are paid by the presses and journals that publish them. In the professional academy, our worth as scholars depends very much on where our work appears. A long article in a top journal or a monograph published at a major University press is evidence that our research is regarded highly. Articles in lesser journals, or lesser forms of dissemination such as book reviews, conference papers, and encyclopaedia entries published under our names are less important but can still be used as evidence of on-going professional activity (see, for example, Department of English, University of Colorado [2007]). While it is not quite money in the bank, this transference of prestige and recognition is an important element in most universities’ systems for determining rank and pay.

An article in the Wikipedia is not going to get anybody tenure. Because they are written collectively and published anonymously, Wikipedia articles do not highlight the specific intellectual contributions of individual contributors—although, in contrast to medical and scientific journals with their perennial problem of “co-authors” who lend names to articles without actually contributing any research (for a discussion of one example, see Bails 2006), it is possible to trace specific intellectual responsibility for all contributions to any entry in the Wikipedia using the history and compare features. And while the Wikipedia does have a formal certification process—articles can be submitted for “peer-review” and selected for “feature” status—this process is optional and not very selective: authors or readers nominate articles for peer-review and certification after they have already been published to the web and the reviewing body consists of simply those interested users who happen to notice that an article has been put forward for review and are willing to comment on the relevant discussion page (see Wikipedia: Peer Review). While this body might include respected experts in the field, it also certainly includes amateurs whose main interest is the Wikipedia itself. It also, almost equally certainly, includes people whose knowledge of the topic in question is ill-informed or belongs to the lunatic fringe.

Why We Can’t Do Anything About It

Given these objections, it is not surprising that some of us in the professional academic community are trying to come up with some alternatives—sites that combine desirable aspects of the Wikipedia model (such as its openness to amateur participation) with other elements (such as expert review and editorial control) taken from the world of the professional academy. One new project that attempts to do this is the Citizendium, a project which, beginning as a fork (i.e. branch) of the original Wikipedia, intends to bring it under more strict editorial control: in this project, “Editors”—contributors with advanced degrees—are to be recruited to serve as area experts and help resolve disputes among contributors, while “Constables”—“a set of persons of mature judgment”—will be “specially empowered to enforce rules,… up to and including the ejection of participants from the project” (Citizendium 2006). Other, though far more specialised, attempts to merge the openness of wiki-based software with more strict editorial control and peer-review are also increasingly being proposed by scholarly projects and commercial scholarly publishers.

Few if any of these projects are likely to succeed all that well. While the addition of formal editorial control and an expert-based certification system brings their organisation more closely into line with traditional academic expectations, the economics remain suspect. On the one hand, such projects will find it difficult to generate enough prestige from their peer-review process to compete for the best efforts of professional scholars with more traditional, invitation-only encyclopaedias such as the Britannica or collections published by the prestigious academic presses. On the other hand, they are also unlikely to be able to match the speed and breadth of content-development found at more free-wheeling, community-developed projects of the participatory web.

In fact, the Wikipedia itself is the successful offshoot of a failed project of exactly this sort. The ancestor of the Wikipedia was the Nupedia, an open-source (though non-wiki) project whose goal was to develop an on-line, peer-reviewed, and professionally written encyclopaedia (see History of Wikipedia, Nupedia, Wikipedia, and Sanger 2005). The editorial board was subject to strict review and most participants were expected to have a Ph.D. or equivalent. The review process involved seven steps: five analogous to those found in traditional academic publishing (assigning to an editor, finding a reader, submitting for review, copy-editing, and final pre-publication approval) and two borrowed from the world of open source software (a public call for reviews, and a public round of copy-editing). Begun in March 2000, the project ultimately collapsed in September 2003 due to a lack of participation, slow time-to-publication, and conflicts between professional contributors and editors and members of the public in the open review and copy-editing parts of the review process. In its relatively brief existence, the project managed to approve only twenty-four peer-reviewed articles for publication. At its suspension, seventy-four were still in various stages of review. After the project as a whole was suspended, the successful articles were rolled into the Wikipedia. Relatively few can be found in their original form today.

The Wikipedia was originally established as a support structure for the Nupedia’s open processes—as a place where participants in the larger project could collaborate in the creation of material for the “official” project and contribute to their review and copy-editing. The wiki-based project was proposed on the Nupedia’s mailing list on January 2, 2001 and rejected almost immediately by participants for much the same reasons it is frowned upon by professional academics today. It was reestablished as a separate project with its own domain name by January 10. Almost immediately, it began to best its “mother” project: within a year the Wikipedia had published 20,000 articles and existed in eighteen different languages; by the Nupedia’s suspension in the fall of 2003, the Wikipedia had published 152,000 articles in English and was found in twenty-six different languages (Multilingual Statistics). By October 30th, 2006, there were over 1.4 million articles in English alone.

The contrasting fates of the Nupedia and the Wikipedia illustrate the central problem that faces any attempt to impose traditional academic structures on projects designed for the participatory web: the strengths and weaknesses of wiki-based and traditional academic models are almost directly out of phase. The Wikipedia has been successful in its quest to develop a free, on-line encyclopaedia of breadth and accuracy comparable to that of its print-based competitors because the barrier to participation is so low. Because anybody can edit the Wikipedia, millions do. And it is their collective contribution of small amounts of effort that enables the growth and success of the overall project.

The Nupedia, on the other hand, failed because its use of traditional academic vetting procedures raised the bar to mass participation by amateurs but did not make the project significantly more attractive to professionals. Academics who need prestige and authorial credit for their professional lives are still going to find it difficult to use participation in the Nupedia (or, now, the Citizendium) on their CVs. Even in fields where collaboration is the norm, scholars need to be able to demonstrate intellectual leadership rather than mere participation. A listing as first author is far more valuable than second or third. And second or third author in a traditional venue is infinitely preferable to professional academics to membership in a relatively undifferentiated list of contributors to an on-line encyclopaedia to which amateurs contribute. The most prestigious journals, presses, and encyclopaedias all enforce far higher standards of selectivity than the mere evidence of an earned Ph.D. suggested by the Nupedia or the “eligibility” for “a tenure track job” preferred by the Citizendium. No project that hopes to remain open to free collaboration by even a select group of well-informed amateurs or marginally qualified professionals is going to be able to compete directly with already existing, traditional publications for the best original work of professional scholarly researchers, no matter how stringent the review process. But by raising the bar against relatively casual participation by large numbers of amateurs, such projects also risk vitiating the “many hands make light work” principle that underlies the explosive success of the Wikipedia and similar participatory projects.

A New Model of Scholarship: The Wikipedia as Community Service

If I am correct in thinking that attempts to create alternatives to the Wikipedia by combining aspects of traditional academic selectivity and review with a wiki-based open collaboration model are doomed to failure, then the question becomes what “we” (the professional University teachers and researchers who are so suspicious of the original Wikipedia) are to do with what “you” (the amateurs who contribute most of the Wikipedia’s content) produce.

It is clear that we can’t ignore it: no matter what we say in our syllabi, students will continue to use the Wikipedia in their essays and projects—citing it if we allow them to do so, and plagiarising from it if we do not. Just as importantly, the Wikipedia is rapidly becoming the public’s main portal to the subjects we teach and research: popular journalists now regularly cite the Wikipedia in their work and the encyclopaedia commonly shows up on the first page of Google searches. While it may not be in any specific scholar’s individual professional interest to take time away from his or her refereed research in order to contribute to a project that provides so little prestige, it is clearly in our collective interest as a profession to make sure that our disciplines are well represented in the first source to which our students and the broader public turn when they want to find out something about the topics we actually research.

But perhaps this shows us the way forward. Perhaps what we need is to see the Wikipedia and similar participatory sites less as a threat to our way of doing things than as a way of making what we do more visible to the general public. The fictional romance between LonelyGirl15 and DanielBeast on YouTube did not threaten the makers of commercial television. But it did give prominence to a medium that makers of commercial television now use regularly to attract audiences to their professional content in the traditional media. In our case, the Wikipedia is less an alternative to traditional scholarship (except perhaps as this is represented in print encyclopaedias) than it is a complement—something that can be used to explain, show off, and broaden the appeal of the work we do in our professional lives.

In fact, the important thing about the Wikipedia is that it has been built almost entirely through the efforts of amateurs—that is to say people who are not paid to conduct research in our disciplines but do so anyway because it is their hobby. While it can certainly be disheartening to see the occasional elementary mistake or outlandish theory in a Wikipedia entry, we should not ignore the fact that the entry itself exists because people were interested enough in what we do to try and imitate it in their spare time. Given the traditional lack of respect shown scholarly research by governments and funding agencies for much of the last century, we should be rejoicing in this demonstration of interest—in much the same way scientists judging a science fair are able to see past the many relatively trivial experiments on display and recognise the event’s importance as a representation of popular interest in what they do.

This recognition of the extent to which the Wikipedia has engaged the imagination of the general public and turned it to the amateur practice of scholarship suggests what I think may prove to be the best way of incorporating it into the lives of professional academics: since the Wikipedia appears unable to serve as a route to professional advancement for intrinsic reasons, perhaps we should begin to see contributions to it by professional scholars as a different type of activity altogether—as a form of community service to be performed by academics in much the same way lawyers are often expected to give back to the public through their pro bono work. A glance at almost any discussion page on the Wikipedia will show that the Wikipedians themselves are aware of the dangers posed to the enterprise by the inclusion of fringe theories, poor research, and contributions by people with insufficient disciplinary expertise. As certified experts who work daily with the secondary and primary research required to construct good Wikipedia entries, we are in a position to contribute to the construction of individual articles in a uniquely positive way by taking the time to help clean up and provide balance to entries in our professional areas of interest. In doing so, we can both materially improve the quality of the Wikipedia and demonstrate the importance of professional scholars to a public whose hobby touches very closely on the work we are paid to do—and whose taxes, by and large, support us.

And who knows, maybe “we” could even join “you” in accepting Time Magazine’s nomination for person of the year.

Works Cited

Ahrens, Frank. 2006. “A Wikipedia Of Secrets.” Washington Post, November 5: F07. http://www.washingtonpost.com/wp_dyn/content/article/2006/11/03/AR2006110302015.html

Bails, Jennifer. 2006. “Schatten’s hand in bogus paper detailed.” Pittsburgh Tribune-Review, January 11. http://www.pittsburghlive.com/x/tribune-review/trib/regional/s_412326.html

Bergstein, Brian. 2007. “Microsoft Offers Cash for Wikipedia Edit.” Washington Post, January 23. http://www.washingtonpost.com/wp-dyn/content/article/2007/01/23/AR2007012301025.html

Citizendium. 2006. “Citizendium’s Fundamental Policies.” Citizendium (citation from version 1.4, October 11). http://www.citizendium.org/fundamentals.html

Department of English, Colorado State University. [2007]. “Department of English guidelines for promotion.” Department Handbook. http://www.colostate.edu/Depts/English/handbook/guidepro.htm

Fung, Brian. 2007. “Wikipedia distresses History Department.” middleburycampus.com. http://media.www.middleburycampus.com/media/storage/paper446/news/2007/01/24/News/Wikipedia.Distresses.History.Department-2670081.shtml

Giles, Jim. 2005. “Internet encyclopaedias go head to head.” news@nature.com. http://www.nature.com/news/2005/051212/full/438900a.html

Grossman, Lev. 2006. “Time’s Person of the Year: You.” Time, December 13. http://www.time.com/time/magazine/article/0%2C9171%2C1569514%2C00.html

History of Wikipedia. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=History_of_Wikipedia&oldid=104389205 (accessed January 31, 2007).

Lonelygirl15. Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=104136723 (accessed January 31, 2007).

Multilingual Statistics. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=Wikipedia:Multilingual_statistics&oldid=97805501 (accessed February 2, 2007).

Nupedia. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=Nupedia&oldid=103617050 (accessed January 31, 2007).

Read, Brock. 2006. “Can Wikipedia Ever Make the Grade?” The Chronicle of Higher Education, October 27. http://chronicle.com/temp/reprint.php?%20id=z6xht2rj60kqmsl8tlq5ltqcshc5y93y

Sanger, Larry J. 2005. “The Early History of Nupedia and Wikipedia: A Memoir.” Part I: http://features.slashdot.org/article.pl?sid=05/04/18/164213&tid=95&tid=149&tid=9 Part II: http://features.slashdot.org/article.pl?sid=05/04/19/1746205&tid=95

Tang, Hangwi and Jennifer Hwee Kwoon Ng. 2006. “Googling for a diagnosis—use of Google as a diagnostic aid: internet based study.” BMJ 333(7570): 1143-1145. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1676146

Wikipedia: Peer Review. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=Wikipedia:Peer_review&oldid=104637689 (accessed January 31, 2007).

Wikipedia. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=Wikipedia&oldid=104645649 (accessed January 31, 2007).

Wired Campus. 2006. “Wikipedia Founder Discourages Academic Use of His Creation.” Chronicle of Higher Education, June 12. http://chronicle.com/wiredcampus/article/1328/wikipedia-founder-discourages-academic-use-of-his-creation

----  

Why should I write for your Wiki? Towards an economics of collaborative scholarship.

Posted: Dec 15, 2006 17:12;
Last Modified: Jan 04, 2017 16:01

Tags: , , , , , , ,

---

Originally presented at the conference of the Renaissance Society of America. San Francisco, CA. March, 2006.

I’d like to begin today by telling you the story of how I came to write this paper. Ever since I was in high school, I have used a process called “constructive procrastination” to get things done. This system involves collecting a bunch of projects due at various times and then avoiding work on the one that is due right now by finishing something else instead. Or as my wife, who actually teaches this system says: “if you want me to get your project done today, give me something more important to avoid working on.”

In this particular case, the important thing I wanted to avoid doing was this lecture. And the thing I did instead in order to avoid it was work on an article for the Wikipedia. Or rather—and to be honest, worse—work on revising an article I put up on the Wikipedia almost a year ago when I was trying to avoid working on an article on Fonts for the Digital Medievalist.

The goal of my procrastination this time was to get my entry recognised as a “Featured article”. A “Featured article” at the Wikipedia is one considered suitable for displaying on the site’s front page. Such articles are supposed to represent the very best of the encyclopaedia, and an unofficial policy, frequently cited by reviewers, restricts them to approximately 0.1% of the total database.

Getting an article recognised as a “Feature” turns out to be a remarkably difficult process. You nominate your work for consideration, at which point it is opened for review by the community at large. And they basically tell you to take it back and make it better. Very few articles seem to sail right through. The ones I saw on their way to featured status had all gone through the process at least once before.

In my case the reviewers didn’t like my referencing style, thought the writing was aimed at too specialised an audience, and generally wanted much more background detail. After two weeks of hard work and about 100 edits, the article is now beginning to get good rather than lukewarm-to-negative reviews and seems on its way to recognition as a “feature”. I’m debating resubmitting next time I have something else to avoid doing1.

In addition to being surprisingly conscientious, the comments I received on my piece were also remarkably astute. Unbeknownst to the reviewers, indeed, they accurately reflected the article’s history. I first added the entry—which is on Cædmon, the Anglo-Saxon poet and subject of my recent book from Boydell and Brewer—last year, when, trying to avoid researching an entry on fonts for the Digital Medievalist, I decided to see how the Wikipedia handled something I knew something about. The original entries on Cædmon and his Hymn were quite inaccurate, and relied on very old sources; the one on Cædmon’s Hymn also made an odd claim about hidden paganism in the poem. In the interests of procrastination, I decided to correct the entry on Cædmon’s Hymn, and replace the account of the poet’s life with an entry I had just written for a specialist print encyclopaedia, The Sources of Anglo-Saxon Literary Culture. With my print editor’s permission, I quickly revised the entry I had submitted to him, cutting out unnecessarily detailed information and adding some background material, and pasted the results into the Wikipedia. There were a couple of minor problems—I forgot to remove some books I was no longer citing from the works cited list, and some of the italics and character encoding were messed up—but on the whole the article received good comments on its discussion page, and was left alone for the most part by other users. This is generally a good sign in the Wikipedia, and in fact a criterion for recognition as a featured article.

My entry for Cædmon’s Hymn didn’t fare as well: the author of the original piece kept reversing my edits until others recommended that the piece be merged with the larger Cædmon article. I never did finish my work for the wiki entry on Fonts that I was supposed to be researching for the Digital Medievalist… though I do have an article due at the beginning of May that I’d like to avoid.

I’ve told you the story of how I came to write this paper—or rather avoid doing so—because I think it illustrates several important things about the possibilities and challenges involved in creating an information commons.

Information commons are a relatively hot topic right now. These are collaborative environments in which content is developed and published interactively—by the users of the community for which it is intended. Such communities can take various forms, but the most common are probably blog farms, RSS servers, and wikis, along with other types of collaborative tools such as version control systems, annotation engines, and the more familiar chat rooms and email lists.

More importantly, such environments are beginning to become more popular in the scholarly world as well. A number of projects, such as STOA, the Digital Medievalist and Digital Classicist Projects, the Virtual Humanities Lab at Brown, and the Text Encoding Initiative are beginning to use tools like Wikis as a central part of their environment, or experiment with more sophisticated types of collaborative tools such as annotation and editing engines.

What my experience with the Wikipedia shows is that these commons can indeed end up with—if I say so myself—detailed and scholarly work of a relatively high standard. I don’t work for the Wikipedia after all, and I have—for whatever twisted psychological reasons—devoted a reasonable amount of time and expertise contributing a thoroughly researched and referenced entry on my subject.

Moreover, my experience also shows that such communities can be collaborative in the best sense: my article is now much better suited for its intended audience—and better written, I think—as a result of the criticism I received from the Wikipedia reviewers after I nominated it for feature status.

And, a final positive point: it shows that such communities can be self-policing. The person who thought Cædmon was really praising Pagan gods in the original entry (a very non-standard view) was eventually outvoted and reined in by a consensus among other users. And to his credit he accepted this consensus and moved on.

But my experience also shows some of the difficulties involved in running a community of this sort:

First of all, my best and most careful work appeared only with the prospect of a reward. The reward is not great—neither my dean nor my colleagues are going to care if my article is selected as a “Feature.” But it was only once I decided to go for Feature status that I did the kind of detailed slogging that I normally do in my day-to-day research, and indeed had done in the print entry from which I revised my Wikipedia article.

Secondly, while I did contribute up-to-date scholarship to the Wikipedia, I didn’t do any research for the Wikipedia: I contributed my Cædmon article because I had something suitable lying around which I had already researched and written for a different purpose. Nobody—even the hobbyists who contribute most of the Wikipedia’s material—would put the kind of research I did into a piece written exclusively for it. If they did, it is highly doubtful that they would devote the kind of time to checking citation style and the like that print editors demand from professional scholars.

And finally, although the community is self-policing, it is not always safe to walk the streets at night: the person whose work I corrected did, after all, come back and undo my revisions. Even though he ultimately gave in to the consensus opinion of the users—and what if the consensus had been wrong?—his inaccuracies nevertheless did replace my corrections for a significant amount of time.

I am not the first person to notice these positive and negative aspects of the commons: having used wikis on a number of projects for a couple of years, I can tell you that the problem of quality control is the second thing most academics comment on when they are introduced to wiki software, after first expressing their admiration for the concept of a user-edited environment. But because these environments are becoming more popular in a scholarly context, it is worthwhile revisiting what are in my view the two most important organisational issues facing scholarly intellectual commons:

  1. How do you get knowledgeable people to contribute their best work?
  2. How do you prevent abuse, vandalism, and nonsense from the well-meaning but incompetent?

For the rest of this paper, I’m going to address these problems in a fairly speculative way. There are some practical steps we can take right now to find solutions to them, but it is worthwhile also thinking about how they might be solved given enough time and technical expertise. Indeed, in some ways, my goal is to contribute to a debate in much the same way one contributes to the Wikipedia: throw something out there and hope that somebody can improve on it.

Although these are crucial problems for intellectual commons, they are by no means unique to them. The question of how you get good quality work in and keep the bad out is also central to the operation of peer-reviewed journals or, indeed, any kind of organised communication.

These are crucial problems for an intellectual commons, however, because, in its purest state, a commons has no gatekeeper: the Wikipedia is the encyclopaedia that “_anybody_ can edit” (emphasis added). That is what makes it so exciting but also causes all the problems. Traditionally, scholarly journals and academic presses (organisations that rarely pay their authors) have addressed this problem with a combination of carrots and sticks: they encourage people to contribute by providing enough prestige to make it worth their while to submit well researched articles, and they keep the bad stuff out by getting disciplinary referees to review the submitted articles before they are printed.

A true intellectual commons lacks both a system for providing rewards and one for preventing folly. Perhaps for this reason, most academic commons rely on some kind of gatekeeper: you need to be approved by a moderator if you want to join a mailing list; you need to submit a CV if you want to be able to annotate an online edition; you need to have your login approved by a Sysop if you want to contribute to a scholarly wiki. Even beyond this, such projects also usually engage in editorial control: spammers are cut off, trolls and flamers are banned, and wiki or annotation contributions are reviewed for quality by some central person or group.

These approaches are effective on the whole at preventing or mitigating abuse by unqualified or out-of-control people. They do, however, suffer from two main problems:

  1. They scale very badly: while a gatekeeper or moderator can vet or edit contributions from a small number of people, this gets progressively more difficult as the community expands.
  2. They represent a compromise on the thing that makes commons different and exciting in the first place: the possibility for unnegotiated collaboration and exchange.

Scaling is probably not an issue for most academic projects. Digital Medievalist is a relatively big project now, for example, and it is only approaching 250 members. Numbers like this are relatively easy to control. The costs one would incur in trying to develop an automatic vetting system for a market this size would vastly outweigh any future benefit.

Other disciplines, however, have been faced with this scaling problem—and managed to find partial solutions that in my opinion do a better job of maintaining the unnegotiated quality that makes successful commons what they are.

One commonly proposed solution is to rely on distributed moderation—or, in simple terms—to allow the users to police themselves. This has the advantage of being completely scalable—the number of moderators increases with the number of users. As we saw in my experience with the Wikipedia, moreover, this system actually can work: many (perhaps most) errors on the Wikipedia are corrected after a while, and unqualified or insincere contributors often do get reined in.

But of course my experience with the Wikipedia also shows the problem with this approach. If everybody can be a moderator, then the unqualified can be as well. They can, as a result, replace good work with bad as easily as others can replace bad work with good.

A solution to this problem is to allow moderation only by respected members of the community. This is the system at Slashdot.org, a technology news service. There contributors acquire a reputation based on others’ opinions of their contributions; those with high reputation scores are then added to a pool from which moderators are drawn each week (the system is actually much more complex, but the details are not important here).

Systems such as this tend to suffer from complexity: Slashdot also has meta-moderation, and even then nobody seems very happy with anybody else. Moreover, members have a tendency both to game the system in order to increase their own reputations and to lower those of their “enemies”.
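The core of such a reputation-pool system can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Slashdot’s actual (and, as noted, far more complex) algorithm; the reputation threshold and pool size are hypothetical values chosen for the example.

```python
import random


class Contributor:
    """A community member whose reputation is the sum of peer ratings."""

    def __init__(self, name):
        self.name = name
        self.ratings = []  # +1 / -1 scores other users assign to contributions

    @property
    def reputation(self):
        return sum(self.ratings)


def draw_moderators(contributors, threshold=5, pool_size=3):
    """Pick this period's moderators at random from high-reputation users."""
    eligible = [c for c in contributors if c.reputation >= threshold]
    return random.sample(eligible, min(pool_size, len(eligible)))
```

Even this toy version exposes the weakness discussed above: because reputation is a single point score, it invites competitive gaming, which motivates the permissions-based alternative described next in the paper.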

At Digital Medievalist, we have been thinking of a slightly different model of distributed moderation, which we describe as an apprenticeship model: in this solution, newcomers are assigned relatively limited editorial, moderation, and compositional powers. These powers then increase as one’s contributions are allowed to stand by other members of the community. Initially, one might be allowed only to correct typos; as people accept your corrections, you are allowed greater editorial powers—perhaps you can rewrite entire sections or contribute new articles. If, however, your contributions begin to be rolled back, your powers shrink accordingly: the idea is ultimately a version of the Peter Principle: you rise to the point at which you are perceived to become incompetent. The main difference is that we then try to push you back down a step to the last place in the hierarchy in which you knew what you were doing.

This method would require considerable software design, and so, currently, is outside our ability. It would have the advantage over the Slashdot method, however, both of scoring ‘reputation’ on the basis of the audience’s real behaviour (reducing your ‘enemy’s’ score requires you to take the time to reverse his or her edits) and of keeping track of reputation not by points (which encourage people to be competitive) but by permissions. A points system encourages people to ask themselves how much they are worth; a permissions system encourages them to take on extra responsibility.
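The apprenticeship model described above can be sketched as a permissions ladder. The tier names and the promotion threshold below are hypothetical, and a real implementation would judge “acceptance” by whether edits are left to stand over time rather than by a simple counter; this is only a sketch of the idea.

```python
# Hypothetical permission tiers, lowest to highest.
LEVELS = ["fix-typos", "rewrite-sections", "create-articles", "moderate-others"]


class Apprentice:
    """Tracks a contributor's editorial powers under the apprenticeship model."""

    def __init__(self):
        self.level = 0           # everyone starts at the bottom tier
        self.standing_edits = 0  # consecutive edits left alone by the community

    def edit_accepted(self, promotion_after=10):
        """An edit has stood unchallenged; enough of these earn a promotion."""
        self.standing_edits += 1
        if self.standing_edits >= promotion_after and self.level < len(LEVELS) - 1:
            self.level += 1
            self.standing_edits = 0

    def edit_rolled_back(self):
        """A reverted edit pushes the contributor back down one tier."""
        self.standing_edits = 0
        if self.level > 0:
            self.level -= 1

    @property
    def powers(self):
        return LEVELS[self.level]
```

Note how the rollback rule implements the inverted Peter Principle discussed above: a contributor rises until reverts begin, then is pushed back to the last tier at which their work was allowed to stand.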

Moderation systems are essentially negative: they exist to prevent people from messing things up. As I noted earlier, however, commons also have the positive problem of trying to encourage good work: the most strictly refereed journal in the world, as the PMLA discovered a few years back, is no good if nobody submits articles to be vetted.

This is an area in which scholarly projects seem to have devoted less attention. While most projects with commons-type environments have explicit moderation policies, few if any I have seen have explicit reward policies. They tend to have gatekeepers but no paymasters. Perhaps as a result most also seem to be the work of a very small number of people—even in the case of organisations with large numbers of members.

Once again, solutions for this problem can be found in other disciplines. The Open Source software movement, for example, relies on high quality contributions from volunteers. Open Source groups often reward significant contributors by treating work on the project as a form of “sweat equity” that allows them special privileges: eligibility for membership on the executive or board, for example, or voting rights or even basic membership.

A second solution commonly used is to give significant contributors some kind of token that sets them apart from others. This can be as valuable as the right to administer or moderate others (Slashdot), or as minor as extra stars beside your user name in the forum (Ubuntu).

Both of these solutions can be adapted to the scholarly commons. At Digital Medievalist, we are currently putting together new bylaws that will treat contributions as a condition of membership: people who contribute to the wiki, mailing list, or journal will be eligible to assume responsibility as board members or officers of the project (I suspect giving away extra stars for good contributions might not be as effective in the academic world as it seems to be in the Open Source one—though given how psychologically important such distinctions are, perhaps they would). A second possibility for reward—albeit one fraught with difficulties—might be to award named authorship of material on which a contributor has worked: either by naming people at the top of the article or adding them as contributors to a colophon on the project as a whole.

The two easiest solutions to this problem of reward, however, are probably those used by the Wikipedia to get me to revise my article on Cædmon rather than work on this talk: offer special status for particularly well done work, and design the project as a whole so that it is a natural outlet for work that otherwise might not be used. At the Digital Medievalist, we already run a peer-reviewed journal alongside our wiki-based commons. A project with a different focus might certify certain articles in some way: as “refereed” vs. “open forum”, perhaps, and identify the authors and major contributors. Our project, moreover, is set up to provide a forum in which users can publish material they might otherwise find hard to use in furthering their careers: solutions to technical problems in digital humanities such as the development of stylesheets and databases, that are not commonly published by the major disciplinary journals.

The intellectual commons represents a new, purely digital approach to publication and the dissemination of scholarship. It is a model that cannot be replicated in print, and it is a model that many scholars feel intuitively at least will become a major force in the future of scholarly communication. In order for it to realise its potential, however, we must first find an economic model that encourages us to contribute our best work and provides for some kind of quality control—without sacrificing the very spontaneity that defines this new means of communication.

So why should I write for your Wiki? Until we answer this question, the Wiki will not live up to its full scholarly potential.

1 Update: The entry ultimately gained feature status.
