

Why should I write for your Wiki? Towards an economics of collaborative scholarship.

Posted: Dec 15, 2006 17:12;
Last Modified: Jan 04, 2017 16:01
Keywords:

---

Originally presented at the conference of the Renaissance Society of America. San Francisco, CA. March 2006.

I’d like to begin today by telling you the story of how I came to write this paper. Ever since I was in high school, I have used a process called “constructive procrastination” to get things done. This system involves collecting a bunch of projects due at various times and then avoiding work on the one that is due right now by finishing something else instead. Or as my wife, who actually teaches this system, says: “if you want me to get your project done today, give me something more important to avoid working on.”

In this particular case, the important thing I wanted to avoid doing was this lecture. And the thing I did instead in order to avoid it was work on an article for the Wikipedia. Or rather—and to be honest, worse—work on revising an article I put up on the Wikipedia almost a year ago when I was trying to avoid working on an article on Fonts for the Digital Medievalist.

The goal of my procrastination this time was to get my entry recognised as a “Featured article”. A “Featured article” at the Wikipedia is one considered suitable for displaying on the site’s front page. Such articles are supposed to represent the very best of the encyclopaedia, and an unofficial policy, frequently cited by reviewers, restricts them to approximately 0.1% of the total database.

Getting an article recognised as a “Feature” turns out to be a remarkably difficult process. You nominate your work for consideration, at which point it is opened for review by the community at large. And they basically tell you to take it back and make it better. Very few articles seem to sail right through. The ones I saw on their way to featured status had all gone through the process at least once before.

In my case the reviewers didn’t like my referencing style, thought the writing was aimed at too specialised an audience, and generally wanted much more background detail. After two weeks of hard work, and about 100 edits, the article is now getting good rather than lukewarm-to-negative reviews and seems to be on its way to recognition as a “Feature”. I’m debating resubmitting it next time I have something else to avoid doing.1

In addition to being surprisingly conscientious, the comments I received on my piece were also remarkably astute. Unbeknownst to the reviewers, indeed, they accurately reflected the article’s history. I first added the entry—which is on Cædmon, the Anglo-Saxon poet and subject of my recent book from Boydell and Brewer—last year when, trying to avoid researching an entry on fonts for the Digital Medievalist, I decided to see how the Wikipedia handled something I knew something about. The original entries on Cædmon and his Hymn were quite inaccurate and relied on very old sources; the one on Cædmon’s Hymn also made an odd claim about hidden paganism in the poem. In the interests of procrastination, I decided to correct the entry on Cædmon’s Hymn and replace the account of the poet’s life with an entry I had just written for a specialist print encyclopaedia, The Sources of Anglo-Saxon Literary Culture. With my print editor’s permission, I quickly revised the entry I had submitted to him, cutting out unnecessarily detailed information and adding some background material, and pasted the results into the Wikipedia. There were a couple of minor problems—I forgot to remove some books I was no longer citing from the works cited list, and some of the italics and character encoding were messed up—but on the whole the article received good comments on its discussion page and was left alone for the most part by other users. This is generally a good sign in the Wikipedia, and in fact a criterion for recognition as a featured article.

My entry for Cædmon’s Hymn didn’t fare as well: the author of the original piece kept reversing my edits until others recommended that the piece be merged with the larger Cædmon article. I never did finish my work for the wiki entry on Fonts that I was supposed to be researching for the Digital Medievalist… though I do have an article due at the beginning of May that I’d like to avoid.

I’ve told you the story of how I came to write this paper—or rather avoid doing so—because I think it illustrates several important things about the possibilities and challenges involved in creating an information commons.

Information commons are a relatively hot topic right now. These are collaborative environments in which content is developed and published interactively—by the users of the community for which it is intended. Such communities can take various forms, but the most common are probably blog farms, RSS servers, and Wikis; other types of collaborative tools such as Version Control Systems and annotation engines; and the more familiar chat rooms and email lists.

More importantly, such environments are also becoming more popular in the scholarly world. A number of projects, such as STOA, the Digital Medievalist and Digital Classicist Projects, the Virtual Humanities Lab at Brown, and the Text Encoding Initiative, are beginning to use tools like Wikis as a central part of their environment, or to experiment with more sophisticated types of collaborative tools such as annotation and editing engines.

What my experience with the Wikipedia shows is that these commons can indeed end up with—if I say so myself—detailed and scholarly work of a relatively high standard. I don’t work for the Wikipedia, after all, and I have—for whatever twisted psychological reasons—devoted a reasonable amount of time and expertise to contributing a thoroughly researched and referenced entry on my subject.

Moreover, my experience also shows that such communities can be collaborative in the best sense: my article is now much better suited for its intended audience—and better written, I think—as a result of the criticism I received from the Wikipedia reviewers after I nominated it for feature status.

And, a final positive point: it shows that such communities can be self-policing. The person who thought Cædmon was really praising pagan gods in the original entry (a very non-standard view) was eventually outvoted and reined in by a consensus among other users. And to his credit he accepted this consensus and moved on.

But my experience also shows some of the difficulties involved in running a community of this sort:

First of all, my best and most careful work appeared only with the prospect of a reward. The reward is not great—neither my dean nor my colleagues are going to care if my article is selected as a “Feature.” But it was only once I decided to go for Feature status that I did the kind of detailed slogging that I normally do in my day-to-day research, and indeed had done in the print entry from which I revised my Wikipedia article.

Secondly, while I did contribute up-to-date scholarship to the Wikipedia, I didn’t do any research for the Wikipedia: I contributed my Cædmon article because I had something suitable lying around which I had already researched and written for a different purpose. Nobody—not even the hobbyists who contribute most of the Wikipedia’s material—would put the kind of research I did into a piece written exclusively for it. If they did, it is highly doubtful that they would devote the kind of time to checking citation style and the like that print editors demand from professional scholars.

And finally, although the community is self-policing, it is not always safe to walk the streets at night: the person whose work I corrected did, after all, come back and undo my revisions. Even though he ultimately gave in to the consensus opinion of the users—and what if the consensus had been wrong?—his inaccuracies nevertheless did replace my corrections for a significant amount of time.

I am not the first person to notice these positive and negative aspects of the commons: having used wikis on a number of projects for a couple of years, I can tell you that the problem of quality control is the second thing most academics comment on when they are introduced to wiki software, after first expressing their admiration for the concept of a user-edited environment. But because these environments are becoming more popular in a scholarly context, it is worthwhile revisiting what are in my view the two most important organisational issues facing scholarly intellectual commons:

  1. How do you get knowledgeable people to contribute their best work?
  2. How do you prevent abuse, vandalism, and nonsense from the well-meaning but incompetent?

For the rest of this paper, I’m going to address these problems in a fairly speculative way. There are some practical steps we can take right now to find solutions to them, but it is also worthwhile thinking about how they might be solved given enough time and technical expertise. Indeed, in some ways my goal is to contribute to a debate in much the same way one contributes to the Wikipedia: throw something out there and hope that somebody can improve on it.

Although these are crucial problems for intellectual commons, they are by no means unique to them. The question of how you get good quality work in and keep the bad out is also central to the operation of peer-reviewed journals or, indeed, any kind of organised communication.

These are crucial problems for an intellectual commons, however, because, in its purest state, a commons has no gatekeeper: the Wikipedia is the encyclopaedia that “_anybody_ can edit” (emphasis added). That is what makes it so exciting but also causes all the problems. Traditionally, scholarly journals and academic presses (organisations that rarely pay their authors) have addressed this problem with a combination of carrots and sticks: they encourage people to contribute by providing enough prestige to make it worth their while to submit well researched articles, and they keep the bad stuff out by getting disciplinary referees to review the submitted articles before they are printed.

A true intellectual commons lacks both a system for providing rewards and one for preventing folly. Perhaps for this reason, most academic commons rely on some kind of gatekeeper: you need to be approved by a moderator if you want to join a mailing list; you need to submit a CV if you want to be able to annotate an online edition; you need to have your login approved by a Sysop if you want to contribute to a scholarly wiki. Even beyond this, such projects also usually engage in editorial control: spammers are cut off, trolls and flamers are banned, and wiki or annotation contributions are reviewed for quality by some central person or group.

These approaches are effective on the whole at preventing or mitigating abuse by unqualified or out-of-control people. They do, however, suffer from two main problems:

  1. They scale very badly: while a gatekeeper or moderator can vet or edit contributions from a small number of people, this gets progressively more difficult as the community expands.
  2. They represent a compromise on the thing that makes commons different and exciting in the first place: the possibility for unnegotiated collaboration and exchange.

Scaling is probably not an issue for most academic projects. Digital Medievalist is a relatively big project now, for example, yet it is only approaching 250 members. Numbers like this are relatively easy to control. The costs one would incur in trying to develop an automatic vetting system for a market this size would vastly outweigh any future benefit.

Other disciplines, however, have faced this scaling problem—and managed to find partial solutions that in my opinion do a better job of maintaining the unnegotiated quality that makes successful commons what they are.

One commonly proposed solution is to rely on distributed moderation—or, in simple terms, to allow the users to police themselves. This has the advantage of being completely scalable—the number of moderators increases with the number of users. As we saw in my experience with the Wikipedia, moreover, this system actually can work: many (perhaps most) errors on the Wikipedia are corrected after a while, and unqualified or insincere contributors often do get reined in.

But of course my experience with the Wikipedia also shows the problem with this approach. If everybody can be a moderator, then the unqualified can be as well. They can, as a result, replace good work with bad as easily as others can replace bad work with good.

A solution to this problem is to allow moderation only by respected members of the community. This is the system at Slashdot.org, a news service for technology news. There contributors acquire a reputation based on others’ opinions of their contributions; those with high reputation scores are then added to a pool from which moderators are drawn each week (the system is actually much more complex, but the details are not important here).

Systems such as this tend to suffer from complexity: Slashdot also has meta-moderation, and even then nobody seems very happy with anybody else. Moreover, members have a tendency to game the system, both to increase their own reputations and to lower those of their “enemies”.

At Digital Medievalist, we have been thinking of a slightly different model of distributed moderation, which we describe as an apprenticeship model: in this solution, newcomers are assigned relatively limited editorial, moderation, and compositional powers. These powers then increase as one’s contributions are allowed to stand by other members of the community. Initially, one might be allowed only to correct typos; as people accept your corrections, you are allowed greater editorial powers—perhaps you can rewrite entire sections or contribute new articles. If, however, your contributions begin to be rolled back, your powers shrink accordingly. The idea is ultimately a version of the Peter Principle: you rise to the point at which you are perceived to have become incompetent. The main difference is that we then try to push you back down a step, to the last place in the hierarchy in which you knew what you were doing.

This method would require considerable software design, and so, currently, is outside our ability. It would have the advantage over the Slashdot method, however, both of scoring ‘reputation’ on the basis of the audience’s real behaviour (reducing your ‘enemy’s’ score requires you to take the time to reverse his or her edits) and of keeping track of reputation not by points (which encourage people to be competitive) but by permissions. A points system encourages people to ask themselves how much they are worth; a permissions system encourages them to take on extra responsibility.
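To make the idea a little more concrete, here is a minimal sketch of how such a permission ladder might be tracked in software. Everything in it—the tier names, the thresholds for promotion and demotion, and the class itself—is a hypothetical illustration of the apprenticeship model described above, not part of any existing Digital Medievalist system.

```python
# Hypothetical sketch of an "apprenticeship" moderation ladder.
# Tier names, thresholds, and promotion/demotion rules are illustrative only.

PERMISSION_TIERS = [
    "fix_typos",          # newcomers: minor corrections only
    "rewrite_sections",   # trusted: may revise existing articles
    "create_articles",    # established: may contribute new articles
    "moderate_others",    # senior: may review and roll back others' edits
]

class Contributor:
    def __init__(self, name):
        self.name = name
        self.tier = 0          # everyone starts at the bottom of the ladder
        self.standing = 0      # net balance of edits allowed to stand vs. rolled back

    def edit_accepted(self):
        """An edit has been allowed to stand: standing rises, and perhaps the tier too."""
        self.standing += 1
        if self.standing >= 5 and self.tier < len(PERMISSION_TIERS) - 1:
            self.tier += 1
            self.standing = 0  # start proving yourself again at the new level

    def edit_rolled_back(self):
        """An edit was reverted by the community: standing falls, and perhaps the tier too."""
        self.standing -= 1
        if self.standing <= -3 and self.tier > 0:
            self.tier -= 1     # pushed back to the last level of demonstrated competence
            self.standing = 0

    def can(self, action):
        """A contributor may perform any action at or below their current tier."""
        return action in PERMISSION_TIERS[: self.tier + 1]


# Example: a newcomer earns the right to rewrite sections, then loses it again.
alice = Contributor("alice")
for _ in range(5):
    alice.edit_accepted()
print(alice.can("rewrite_sections"))   # True
for _ in range(3):
    alice.edit_rolled_back()
print(alice.can("rewrite_sections"))   # False
```

The point of the sketch is the design choice noted above: reputation is never displayed as a score to be maximised; it is visible only as what a contributor is currently permitted to do.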

Moderation systems are essentially negative: they exist to prevent people from messing things up. As I noted earlier, however, commons also have the positive problem of trying to encourage good work: the most strictly refereed journal in the world, as the PMLA discovered a few years back, is no good if nobody submits articles to be vetted.

This is an area in which scholarly projects seem to have devoted less attention. While most projects with commons-type environments have explicit moderation policies, few if any I have seen have explicit reward policies. They tend to have gatekeepers but no paymasters. Perhaps as a result most also seem to be the work of a very small number of people—even in the case of organisations with large numbers of members.

Once again, solutions for this problem can be found in other disciplines. The Open Source software movement, for example, relies on high quality contributions from volunteers. Open Source groups often reward significant contributors by treating work on the project as a form of “sweat equity” that allows them special privileges: eligibility for membership on the executive or board, for example, or voting rights or even basic membership.

A second solution commonly used is to give significant contributors some kind of token that sets them apart from others. This can be as valuable as the right to administer or moderate others (Slashdot), or as minor as extra stars beside your user name in the forum (Ubuntu).

Both of these solutions can be adapted to the scholarly commons. At Digital Medievalist, we are currently putting together new bylaws that will treat contributions as a condition of membership: people who contribute to the wiki, mailing list, or journal will be eligible to assume responsibility as board members or officers of the project (I suspect giving away extra stars for good contributions might not be as effective in the academic world as it seems to be in the Open Source one—though given how important psychologically such distinctions are, perhaps it would be). A second possibility for reward—albeit one fraught with difficulties—might be to award named authorship of material on which a contributor has worked: either by naming people at the top of the article or by adding them as contributors to a colophon on the project as a whole.

The two easiest solutions to this problem of reward, however, are probably those used by the Wikipedia to get me to revise my article on Cædmon rather than work on this talk: offer special status for particularly well-done work, and design the project as a whole so that it is a natural outlet for work that otherwise might not be used. At the Digital Medievalist, we already run a peer-reviewed journal alongside our wiki-based commons. A project with a different focus might certify certain articles in some way: as “refereed” vs. “open forum”, perhaps, and identify the authors and major contributors. Our project, moreover, is set up to provide a forum in which users can publish material they might otherwise find hard to use in furthering their careers: solutions to technical problems in digital humanities, such as the development of stylesheets and databases, that are not commonly published by the major disciplinary journals.

The intellectual commons represents a new, purely digital approach to publication and the dissemination of scholarship. It is a model that cannot be replicated in print, and it is a model that many scholars feel intuitively at least will become a major force in the future of scholarly communication. In order for it to realise its potential, however, we must first find an economic model that encourages us to contribute our best work and provides for some kind of quality control—without sacrificing the very spontaneity that defines this new means of communication.

So why should I write for your Wiki? Until we answer this question, the Wiki will not live up to its full scholarly potential.

1 Update: The entry ultimately gained feature status.

----  

Comment [2]

  1. megan (Wed Feb 27, 2008 (13:33:28)) [PermLink]:

    what should i write in paper if i want to become a model because i signed up for 1 but they said write down y i want to b a model

  2. Wybo Wiersma (Sun Nov 15, 2009 (16:46:43)) [PermLink]:

    Interesting post. Especially the bit about peer-review in commons.

    On the LogiLogi project (http://www.logilogi.org) we reward good contributions with more voting-power across multiple ‘peergroups’

