
When everyone’s super… On gaming the system

Posted: May 23, 2012 20:05;
Last Modified: May 23, 2012 21:05

---

note: first published on the dpod blog

Syndrome: Oh, I’m real. Real enough to defeat you! And I did it without your precious gifts, your oh-so-special powers. I’ll give them heroics. I’ll give them the most spectacular heroics the world has ever seen! And when I’m old and I’ve had my fun, I’ll sell my inventions so that everyone can have powers. Everyone can be super! And when everyone’s super… [chuckles evilly] no one will be.

The Incredibles

Here’s a funny little story about how a highly specialised journal gamed journal impact measurements:

The Swiss journal Folia Phoniatrica et Logopaedica has a good reputation among voice researchers but, with an impact factor of 0.655 in 2007, publication in it was unlikely to bring honour or grant money to the authors’ institutions.

Now two investigators, one Dutch and one Czech, have taken on the system and fought back. They published a paper called ‘Reaction of Folia Phoniatrica et Logopaedica on the current trend of impact factor measures’ (H. K. Schutte and J. G. Švec Folia Phoniatr. Logop. 59, 281–285; 2007). This cited all the papers published in the journal in the previous two years. As ‘impact factor’ is defined as the number of citations to articles in a journal in the past two years, divided by the total number of papers published in that journal over the same period, their strategy dramatically increased Folia’s impact factor this year to 1.439.

In the ‘rehabilitation’ category, shared with 26 other journals, Folia jumped from position 22 to position 13.

—Tomáš Opatrný, “Playing the system to give low-impact journal more clout.” Nature 455, 167 (11 September 2008).
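To see why a single article could move the needle this far, it helps to spell out the arithmetic. The sketch below works through the two-year impact factor calculation; the counts are invented for illustration rather than taken from Opatrný’s note:

```python
# A minimal sketch of the two-year impact factor arithmetic.
# The paper and citation counts here are hypothetical.

def impact_factor(citations: int, papers: int) -> float:
    """Citations received this year to papers published in the previous
    two years, divided by the number of papers published in those years."""
    return citations / papers

papers = 60             # hypothetical: papers published over the two years
outside_citations = 40  # hypothetical: citations from everyone else

print(f"{impact_factor(outside_citations, papers):.3f}")           # 0.667
# One article citing all 60 of those papers adds one citation apiece:
print(f"{impact_factor(outside_citations + papers, papers):.3f}")  # 1.667
```

Because the self-citing article supplies exactly one citation for every paper in the denominator, it raises the ratio by exactly 1.0, whatever the size of the journal. (That the real jump, from 0.655 to 1.439, was a little less than a full point presumably reflects differences in the counts underlying the two years’ figures.)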

Assessing (and hence demonstrating) impact is a difficult but important problem in contemporary academia.

For most of the last century, university researchers have been evaluated on their ability to “write something and get it into print… ‘publish or perish’” (as Logan Wilson put it as early as 1942 in The Academic Man: A Study in the Sociology of a Profession, one of the first print citations of the term).

As you might expect, the development of a reward system built on publication led to a general increase in the number of publications. Studies of science publication suggest a growth rate in the number of scientific articles and journals of between 2 and 5% per year since 1907 (a rate that, toward the upper end of that range, implies a doubling roughly every fifteen years). There is also evidence for a particularly marked rise in numbers after the 1950s.
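For what it is worth, that parenthesis can be checked directly: a quantity growing at a constant annual rate r doubles after log 2 / log(1 + r) years. The quick computation below (my own arithmetic, not taken from the studies cited) shows where the fifteen-year figure sits within that 2–5% range:

```python
import math

# Doubling time in years for a constant annual growth rate r:
# solve (1 + r)^t = 2  =>  t = ln 2 / ln(1 + r)
for r in (0.02, 0.035, 0.05):
    years = math.log(2) / math.log(1 + r)
    print(f"{r:.1%} per year -> doubles every {years:.0f} years")

# 2.0% per year -> doubles every 35 years
# 3.5% per year -> doubles every 20 years
# 5.0% per year -> doubles every 14 years
```

A fifteen-year doubling thus corresponds to growth of a little under 5% per year.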

This kind of growth vitiates the original point of the metric. If everybody publishes all the time, then the simple fact of publication is no longer sufficient as a proxy for excellence. You could count the sheer number of publications—a measure that is in fact widely used in popular contexts to imply productivity—were it not so obviously open to abuse: unless you institute some kind of control over the type and quality of publication, a system that simply counts publications will lead inevitably to an increase in number, and a corresponding decrease in quality, originality, and length.

It is perhaps for this reason that modern peer review systems begin to be institutionalised in the course of the second half of the last century. In fact, while peer review is now generally understood to be the sine qua non of university research, and while it is possible to trace sporadic examples of activity resembling peer review back into the classical period, peer review in its modern form really only begins to take shape in the period from the 1940s to the 1970s. Major scientific journals, including Science and The Journal of the American Medical Association, for example, begin to make systematic use of external reviewers only in the 1940s, apparently in part as a response to the growing number and specialisation of submissions.

As you might expect, the peer review/reward system has itself been gamed. In the same way a reward system built on counting publications leads inevitably to an increase in the number of publications, a reward system built on counting peer-reviewed publications leads, inevitably, to an increase in the number of peer-reviewed publications… and the size and number of the journals that publish them.

Journal impact measurements are a controversial response to the not-surprising fact that peer review has also become an insufficient proxy for excellence. It is still relatively early days in this area (though less so in the natural sciences) and there is as yet no complete consensus as to how impact should be quantified. As a result, the measures can still take many forms: lists of ranked journals, citation counts, circulation and aggregation statistics, and, in the case of on-line journals, even more difficult-to-interpret statistics such as bounce and exit rates.

Regardless of how the impact factor debate settles out, however, it is only a matter of time until it too is gamed. Indeed, as the example of Folia Phoniatrica et Logopaedica suggests, it may not even be a matter of time. If you count citations, researchers will start ensuring they get cited. If you rank journals, they will ensure their journals fit your ranking criteria. If you privilege aggregation, the aggregators will be flooded with candidates for aggregation. And it is not clear that commercial understandings of good web analytics are really appropriate for scholarly and scientific publishing.

But the Folia Phoniatrica et Logopaedica example is also interesting because I’m not sure it is a bad thing. I can’t independently assess Opatrný’s claim that the journal is well respected though faring badly in impact measurements, but it wouldn’t surprise me if he were right. And the fact that two researchers, in a single article, were able to more than double their journal’s impact score simply by citing every paper published in the journal in the previous two years leaves me… quite happy for them. I doubt there are many people who would consider the article cited by Opatrný to be in some way fraudulent. Instead, I suspect most of us consider it evidence (at best) that there are still some bugs in the system and (at worst) of a successful reductio ad absurdum–similar in a certain sense to Alan Sokal’s submission to Social Text.

None of this means that impact metrics are an intrinsically bad thing. Or that peer review isn’t good. Or that researchers shouldn’t be expected to publish. In fact, in many ways, the introduction of these various metrics, and the emphasis they receive in academia, is very good. Peer review has become almost fully institutionalised in the humanities in the course of my career. When I was a graduate student in the early 1990s, most journals I submitted to did not have a formal explanation of their review policies and many were probably not, strictly speaking, peer reviewed. But it was difficult to tell and nobody I knew even attempted to distinguish publications on their CVs on the basis of whether or not they were peer reviewed. We were taught to distinguish publications (and the primary metric was still number of publications) on the basis of genre: you separated reviews from encyclopedia entries from notes from lengthy articles. A review didn’t count for much, even if we could have shown it was peer reviewed, and a lengthy article in what “everybody knew” to be a top journal counted for a lot, whether it was peer reviewed or not.

By the time I was department chair, 10 years later, faculty members were presenting me with CVs that distinguished output on the basis of peer review status. In these cases, genre was less important than peer review status. Reviews that were peer-reviewed were listed above articles that weren’t, and journals began being quite explicit about their reviewing policies. The journal I helped found, Digital Medievalist, began from its first issue with what we described as “ostentatious peer review”: we named the referees who recommended acceptance on every article, partially as a way of borrowing their prestige for what we thought was, at the time, a fairly daring experiment in open access publication.

But we did this also because we thought (and think) that peer review is a good thing. My peer reviewed articles are, in almost every case, without a doubt better written and especially better and more carefully argued than my non-peer-reviewed articles. I’ve had stupid comments from referees (though none as stupid as seems to be the norm on grant applications), but there is only one case I can think of where I really couldn’t see how satisfying what the referee wanted would improve things.

And the same is true for publication frequency. On the whole, my experience is that people who publish more (within a given discipline) also tend to publish better. I don’t publish too badly for somebody in my discipline. But most of the people who publish more than me in that same discipline are people I’d like to emulate. It is possible to game publication frequency; but on the whole, even the people who (I think) game it are among our most productive and most interesting scholars anyway: they’d still be interesting and productive even if they weren’t good at spinning material for one article into three.

So what does it mean that Schutte and Švec were able to game the impact measure of their journal with such apparent ease? And what should we say in response to the great uproar (much of it in my view well-founded) about the introduction of journal ranking lists by the ESF and the Australian government in recent years? Obviously some journals simply are better than others–more prestigious, better edited, more influential, containing more important papers. And it is difficult to see how frequency of citation is a bad thing, even if its absence is not necessarily evidence something is not good or not important. I would still rather have a heavily cited article in the PMLA than an article nobody read in a journal nobody has ever heard of.

Perhaps the most important thing is that it suggests, as Barbossa says to Miss Turner in Pirates of the Caribbean concerning the “Pirates’ Code,” that these kinds of metrics should really be considered “more what you’d call ‘guidelines’ than actual rules.” Journals (and articles) that have a high impact factor, lots of citations, and are heavily read are probably to be celebrated. But impact, citations, and subscriptions are not in themselves sufficient proxies for quality: we should expect equally good articles, journals, and scholars to exist with lower numbers in all these areas. And more importantly, we should expect to find that any quantifiable criteria we do establish will almost immediately be gamed by researchers in the field: most people with PhD-level research positions got where they are, after all, because they were pretty good at producing what examiners wanted to hear.

The real issue, then, is that metrics like “impact” or “peer review” or even “quantity” are attempts to use quantitative values as substitutes for qualitative assessment. The only real way of assessing quality is through qualitative assessment: that is to say by assessing a work on its own merits in relation to the goals it sets itself in terms of audience, impact, and subject matter, including the reasonableness of these goals. An article by an author who is not famous, in an obscure field, in an on-line journal that has no subscribers, and is not frequently cited may or may not represent poor quality work–in much the same way as might a frequently cited article by a famous academic, in a popular field, in the journal of the main scholarly society in a discipline. What is (or should be) important to the assessor is how reasonably each author has defined his or her goals and how well the resulting work has done in relation to those goals.

And this is where academics’ ability to game any other system becomes a virtue. Since there is no single metric we can create that researchers as a group will not figure out how to exploit (and in short order), we should accept that we will simply never be able to propose a quantitative measurement for assessing intrinsic quality. What we can rely on, however, is that researchers will, on the whole, try to present their work in its best light. By asking researchers to explain how their work can best be assessed, and by being willing to evaluate both that explanation and the degree to which the work meets the proposed criteria, we can find a way of comparatively evaluating excellence. Journals, articles, and researchers that define, then meet or exceed, reasonable targets for their disciplines and types of work are excellent. Those that don’t, aren’t.

And in the meantime, we’ll develop far more innovative measurements of quality.

----  

If I were “You”: How Academics Can Stop Worrying and Learn to Love “the Encyclopedia that Anyone Can Edit”

Posted: Feb 02, 2007 22:02;
Last Modified: May 23, 2012 19:05

---

Original Publication Information: Forthcoming in Heroic Age (2007). http://www.heroicage.org/.

Time Magazine and the Participatory Web

So now it is official: Time magazine thinks the Wikipedia is here to stay.

In its December 2006 issue, Time named “You” as its “Person of the Year” (Grossman 2006). But it didn’t really mean “you”—either the pronoun or the person reading this article. It meant “us”—members of the participatory web, the “Web 2.0,” the community behind YouTube, FaceBook, MySpace, WordPress,… and of course the Wikipedia.

In its citation, Time praised its person of the year “for seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game.” It suggested that the new web represented

an opportunity to build a new kind of international understanding, not politician to politician, great man to great man, but citizen to citizen, person to person.

Actually, as this suggests, Time didn’t really mean “us” either. At least not if by “us” we mean the professional scholars, journalists, authors, and television producers (that is to say the “pros”) who used to have more-or-less sole responsibility for producing the content “you” (that is to say students, readers, and audiences) consumed. In fact, as the citation makes clear, Time actually sees the new web as being really a case of “you” against “us”—a rebellion of the amateurs that has come at the expense of the traditional experts:

It’s a story about community and collaboration on a scale never seen before. It’s about the cosmic compendium of knowledge Wikipedia and the million-channel people’s network YouTube and the online metropolis MySpace. It’s about the many wresting power from the few and helping one another for nothing and how that will not only change the world, but also change the way the world changes.

Academic Resistance

This sense that the participatory web represents a storming of the informational Bastille is shared by many scholars in our dealings with the representative that most closely touches on our professional lives—the Wikipedia, “the encyclopedia that anyone can edit”. University instructors (and even whole departments) commonly forbid students from citing the Wikipedia in their work (Fung 2007). Praising it on an academic listserv is still a reliable way of provoking a fight. Wikipedia founder Jimmy Wales’s suggestion that college students should not cite encyclopaedias, including his own, as a source in their work is gleefully misrepresented in academic trade magazines and blogs (e.g. Wired Campus 2006).

And none of this is having any effect. Eighteen months ago, I had yet to see a citation from the Wikipedia in a student’s essay. This past term, it was rare to find a paper that did not cite it and several of my students asked for permission to research and write new entries for the Wikipedia instead of submitting traditional papers. Other elements of the participatory web mentioned by Time are proving equally successful: politicians, car companies, and Hollywood types now regularly publish material on YouTube or MySpace alongside or in preference to traditional media channels. This past summer, the story of LonelyGirl15 and her doomed relationship to DanielBeast on YouTube became what might be described as the first “hit series” to emerge from the new medium: it attracted millions of viewers on-line, was discussed in major newspapers, and, after it was revealed to be a “hoax” (it was scripted and produced using professional writers, actors, and technicians), its “star” made the requisite appearance on Jay Leno’s Tonight show (see LonelyGirl15).

Why the Participatory Web Works

The participatory web is growing so quickly in popularity because it is proving to be a remarkably robust model. Experiments with the Wikipedia have shown that deliberately planted false information can be corrected within hours (Read 2006). A widely cited comparison of select articles in the Wikipedia and the Encyclopaedia Britannica by the journal Nature showed that the Wikipedia was far more accurate than many had suspected: in the forty-two articles surveyed, the Wikipedia was found to have an average of four mistakes per article to Britannica’s three (Giles 2005). In fact, even just Googling web pages can produce surprisingly useful research results—a recent study showed that diagnoses of difficult illnesses built by entering information about the symptoms into the search engine Google were accurate 60% of the time (Tang and Ng 2006). In some circumstances, the participatory web may actually prove to be more useful than older methods of professional content creation and dissemination: an article in the Washington Post recently discussed how the United States intelligence community is attempting to use blogs and wikis to improve the speed and quality of information reported to analysts, agents, and decision-makers (Ahrens 2006).

Why Don’t We Like It

Given this popularity and evidence of effectiveness both as a channel of distribution and a source of reasonably accurate and self-correcting information, the academic community’s opposition to the Wikipedia may come at first as something of a surprise. What is it that makes “us” so dislike “you”?

One answer is that the Wikipedia and other manifestations of the participatory web do not fit very well with contemporary academic models for quality control and professional advancement. Professional academics today expect quality scholarship to be peer-reviewed and contain a clear account of intellectual responsibility. Authorship attributions are commonly found with forms of intellectual labour, such as book reviews and encyclopaedia entries, that were published without attribution as little as fifty years ago. Some scholarly journals are naming referees who recommend acceptance; readers for journals that have traditionally used anonymous reviews are frequently asking for their names to be revealed.

This emphasis on review and responsibility has obvious advantages. While peer-review is far from a perfect system—there have been enough hoaxes and frauds across the disciplines in the last twenty years to demonstrate its fallibility—it is surely better than self-publication: I imagine most scholars benefit most of the time from the comments of their readers. In my experience, the interest of a good acquisitions editor or copy-editor invariably improves the quality of a final draft.

Moreover, peer-review and clear attribution have an important role in the academic economy: they are the main (and usually only) currency with which researchers are paid by the presses and journals that publish them. In the professional academy, our worth as scholars depends very much on where our work appears. A long article in a top journal or a monograph published at a major University press is evidence that our research is regarded highly. Articles in lesser journals, or lesser forms of dissemination such as book reviews, conference papers, and encyclopaedia entries published under our names are less important but can still be used as evidence of on-going professional activity (see, for example, Department of English, Colorado State University [2007]). While it is not quite money in the bank, this transference of prestige and recognition is an important element in most universities’ systems for determining rank and pay.

An article in the Wikipedia is not going to get anybody tenure. Because they are written collectively and published anonymously, Wikipedia articles do not highlight the specific intellectual contributions of individual contributors—although, in contrast to medical and scientific journals with their perennial problem of “co-authors” who lend names to articles without actually contributing any research (for a discussion of one example, see Bails 2006), it is possible to trace specific intellectual responsibility for all contributions to any entry in the Wikipedia using the history and compare features. And while the Wikipedia does have a formal certification process—articles can be submitted for “peer-review” and selected for “feature” status—this process is optional and not very selective: authors or readers nominate articles for peer-review and certification after they have already been published to the web and the reviewing body consists of simply those interested users who happen to notice that an article has been put forward for review and are willing to comment on the relevant discussion page (see Wikipedia: Peer Review). While this body might include respected experts in the field, it also certainly includes amateurs whose main interest is the Wikipedia itself. It also, almost equally certainly, includes people whose knowledge of the topic in question is ill-informed or belongs to the lunatic fringe.
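That traceability is worth dwelling on, because it is not just a matter of clicking through the history tab: every revision of every entry is exposed programmatically. As a minimal sketch (my own illustration, built on the public MediaWiki API rather than anything discussed in this article), here is how one might list who contributed the most recent changes to an entry:

```python
# List the contributors behind the ten most recent revisions of an entry,
# using the public MediaWiki API (the entry title here is arbitrary).
import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "prop": "revisions",
        "titles": "Nupedia",
        "rvprop": "user|timestamp|comment",
        "rvlimit": 10,
        "format": "json",
    },
)
for page in resp.json()["query"]["pages"].values():
    for rev in page.get("revisions", []):
        print(rev["timestamp"], rev["user"], "--", rev.get("comment", ""))
```

Each revision record names a user (or an IP address), a timestamp, and an edit summary: precisely the kind of fine-grained attribution that co-authored journal articles often lack.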

Why We Can’t Do Anything About It

Given these objections, it is not surprising that some of us in the professional academic community are trying to come up with some alternatives—sites that combine desirable aspects of the Wikipedia model (such as its openness to amateur participation) with other elements (such as expert review and editorial control) taken from the world of the professional academy. One new project that attempts to do this is the Citizendium, a project which, beginning as a fork (i.e. branch) of the original Wikipedia, intends to bring it under more strict editorial control: in this project, “Editors”—contributors with advanced degrees—are to be recruited to serve as area experts and help resolve disputes among contributors, while “Constables”—“a set of persons of mature judgment”—will be “specially empowered to enforce rules,… up to and including the ejection of participants from the project” (Citizendium 2006). Other, though far more specialised, attempts to merge the openness of wiki-based software with more strict editorial control and peer-review are also increasingly being proposed by scholarly projects and commercial scholarly publishers.

Few if any of these projects are likely to succeed all that well. While the addition of formal editorial control and an expert-based certification system brings their organisation more closely into line with traditional academic expectations, the economics remain suspect. On the one hand, such projects will find it difficult to generate enough prestige from their peer-review process to compete for the best efforts of professional scholars with more traditional, invitation-only encyclopaedias such as the Britannica or collections published by the prestigious academic presses. On the other hand, they are also unlikely to be able to match the speed and breadth of content-development found at more free-wheeling, community-developed projects of the participatory web.

In fact, the Wikipedia itself is the successful offshoot of a failed project of exactly this sort. The ancestor of the Wikipedia was the Nupedia, an open-source (though non-wiki) project whose goal was to develop an on-line, peer-reviewed, and professionally written encyclopaedia (see History of Wikipedia, Nupedia, Wikipedia, and Sanger 2005). The editorial board was subject to strict review and most participants were expected to have a Ph.D. or equivalent. The review process involved seven steps: five analogous to those found in traditional academic publishing (assigning to an editor, finding a reader, submitting for review, copy-editing, and final pre-publication approval) and two borrowed from the world of open source software (a public call for reviews, and a public round of copy-editing). Begun in March 2000, the project ultimately collapsed in September 2003 due to a lack of participation, slow time-to-publication, and conflicts between professional contributors and editors and members of the public in the open review and copy-editing parts of the review process. In its relatively brief existence, the project managed to approve only twenty-four peer-reviewed articles for publication. At its suspension, seventy-four were still in various stages of review. After the project as a whole was suspended, the successful articles were rolled into the Wikipedia. Relatively few can be found in their original form today.

The Wikipedia was originally established as a support structure for the Nupedia’s open processes—as a place where participants in the larger project could collaborate in the creation of material for the “official” project and contribute to their review and copy-editing. The wiki-based project was proposed on the Nupedia’s mailing list on January 2, 2001 and rejected almost immediately by participants for much the same reasons it is frowned upon by professional academics today. It was reestablished as a separate project with its own domain name by January 10. Almost immediately, it began to best its “mother” project: within a year the Wikipedia had published 20,000 articles and existed in 18 different languages; by the Nupedia’s suspension in the fall of 2003, the Wikipedia had published 152,000 articles in English and was found in twenty-six different languages (Multilingual Statistics). By October 30th, 2006, there were over 1.4 million articles in English alone.

The contrasting fates of the Nupedia and the Wikipedia illustrate the central problem that faces any attempt to impose traditional academic structures on projects designed for the participatory web: the strengths and weaknesses of wiki-based and traditional academic models are almost directly out of phase. The Wikipedia has been successful in its quest to develop a free, on-line encyclopaedia of breadth and accuracy comparable to that of its print-based competitors because the barrier to participation is so low. Because anybody can edit the Wikipedia, millions do. And it is their collective contribution of small amounts of effort that enables the growth and success of the overall project.

The Nupedia, on the other hand, failed because its use of traditional academic vetting procedures raised the bar to mass participation by amateurs but did not make the project significantly more attractive to professionals. Academics who need prestige and authorial credit for their professional lives are still going to find it difficult to use participation in the Nupedia (or, now, the Citizendium) on their CVs. Even in fields where collaboration is the norm, scholars need to be able to demonstrate intellectual leadership rather than mere participation. A listing as first author is far more valuable than second or third. And second or third author in a traditional venue is infinitely preferable to professional academics to membership in a relatively undifferentiated list of contributors to an on-line encyclopaedia to which amateurs contribute. The most prestigious journals, presses, and encyclopaedias all enforce far higher standards of selectivity than the mere evidence of an earned Ph.D. required by the Nupedia or the “eligibility” for “a tenure track job” preferred by the Citizendium. No project that hopes to remain open to free collaboration by even a select group of well-informed amateurs or marginally qualified contributors is going to be able to compete directly with already existing, traditional publications for the best original work of professional scholarly researchers, no matter how stringent the review process. But by raising the bar against relatively casual participation by large numbers of amateurs, such projects also risk vitiating the “many hands make light work” principle that underlies the explosive success of the Wikipedia and similar participatory projects.

A New Model of Scholarship: The Wikipedia as Community Service

If I am correct in thinking that attempts to create alternatives to the Wikipedia by combining aspects of traditional academic selectivity and review with a wiki-based open collaboration model are doomed to failure, then the question becomes what “we” (the professional University teachers and researchers who are so suspicious of the original Wikipedia) are to do with what “you” (the amateurs who contribute most of the Wikipedia’s content) produce.

It is clear that we can’t ignore it: no matter what we say in our syllabi, students will continue to use the Wikipedia in their essays and projects—citing it if we allow them to do so, and plagiarising from it if we do not. Just as importantly, the Wikipedia is rapidly becoming the public’s main portal to the subjects we teach and research: popular journalists now regularly cite the Wikipedia in their work and the encyclopaedia commonly shows up on the first page of Google searches. While it may not be in any specific scholar’s individual professional interest to take time away from his or her refereed research in order to contribute to a project that provides so little prestige, it is clearly in our collective interest as a profession to make sure that our disciplines are well represented in the first source to which our students and the broader public turn when they want to find out something about the topics we actually research.

But perhaps this shows us the way forward. Perhaps what we need is to see the Wikipedia and similar participatory sites less as a threat to our way of doing things than a way of making what we do more visible to the general public. The fictional romance between LonelyGirl15 and DanielBeast on YouTube did not threaten the makers of commercial television. But it did give prominence to a medium that makers of commercial television now use regularly to attract audiences to their professional content in the traditional media. In our case, the Wikipedia is less an alternative to traditional scholarship (except perhaps as this is represented in print encyclopaedias) than it is a complement—something that can be used to explain, show off, and broaden the appeal of the work we do in our professional lives.

In fact, the important thing about the Wikipedia is that it has been built almost entirely through the efforts of amateurs—that is to say people who are not paid to conduct research in our disciplines but do so anyway because it is their hobby. While it can certainly be disheartening to see the occasional elementary mistake or outlandish theory in a Wikipedia entry, we should not ignore the fact that the entry itself exists because people were interested enough in what we do to try and imitate it in their spare time. Given the traditional lack of respect shown scholarly research by governments and funding agencies for much of the last century, we should be rejoicing in this demonstration of interest—in much the same way scientists judging a science fair are able to see past the many relatively trivial experiments on display and recognise the event’s importance as a representation of popular interest in what they do.

This recognition of the extent to which the Wikipedia has engaged the imagination of the general public and turned it to the amateur practice of scholarship suggests what I think may prove to be the best way of incorporating it into the lives of professional academics: since the Wikipedia appears unable to serve as a route to professional advancement for intrinsic reasons, perhaps we should begin to see contributions to it by professional scholars as a different type of activity altogether—as a form of community service to be performed by academics in much the same way lawyers are often expected to give back to the public through their pro bono work. A glance at almost any discussion page on the Wikipedia will show that the Wikipedians themselves are aware of the dangers posed to the enterprise by the inclusion of fringe theories, poor research, and contributions by people with insufficient disciplinary expertise. As certified experts who work daily with the secondary and primary research required to construct good Wikipedia entries, we are in a position to contribute to the construction of individual articles in a uniquely positive way by taking the time to help clean up and provide balance to entries in our professional areas of interest. In doing so, we can both materially improve the quality of the Wikipedia and demonstrate the importance of professional scholars to a public whose hobby touches very closely on the work we are paid to do—and whose taxes, by and large, support us.

And who knows, maybe “we” could even join “you” in accepting Time Magazine’s nomination for person of the year.

Works Cited

Ahrens, Frank. 2006. “A Wikipedia Of Secrets.” Washington Post. Sunday, November 5: F07. Online edition. URL: http://www.washingtonpost.com/wp_dyn/content/article/2006/11/03/AR2006110302015.html

Bails, Jennifer. 2006. “Schatten’s hand in bogus paper detailed.” Pittsburgh Tribune-Review, January 11. URL: http://www.pittsburghlive.com/x/tribune-review/trib/regional/s_412326.html

Bergstein, Brian. 2007. “Microsoft Offers Cash for Wikipedia Edit.” Washington Post, January 23. URL: http://www.washingtonpost.com/wp-dyn/content/article/2007/01/23/AR2007012301025.html

Citizendium 2006. “Citizendium’s Fundamental Policies.” Citizendium (citation from version 1.4, October 11) http://www.citizendium.org/fundamentals.html

Department of English, Colorado State University [2007]. “Department of English guidelines for promotion.” Department Handbook. URL: http://www.colostate.edu/Depts/English/handbook/guidepro.htm

Fung, Brian. 2007. “Wikipedia distresses History Department.” middleburycampus.com. Online. URL: http://media.www.middleburycampus.com/media/storage/paper446/news/2007/01/24/News/Wikipedia.Distresses.History.Department-2670081.shtml

Giles, Jim. 2005. “Internet encyclopaedias go head to head.” news@nature.com. URL: http://www.nature.com/news/2005/051212/full/438900a.html

Grossman, Lev. 2006. “Time’s Person of the Year: You.” Time. Wednesday, Dec. 13. Online Edition. URL: http://www.time.com/time/magazine/article/0%2C9171%2C1569514%2C00.html .

History of Wikipedia. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=History_of_Wikipedia&oldid=104389205 (accessed January 31, 2007).

Lonelygirl15. Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=104136723 (accessed January 31, 2007).

Multilingual Statistics. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=Wikipedia:Multilingual_statistics&oldid=97805501 (accessed February 2, 2007).

Nupedia. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=Nupedia&oldid=103617050 (accessed January 31, 2007).

Read, Brock. 2006. “Can Wikipedia Ever Make the Grade?” The Chronicle of Higher Education, October 27. URL: http://chronicle.com/temp/reprint.php?%20id=z6xht2rj60kqmsl8tlq5ltqcshc5y93y

Sanger, Larry J. 2005. “The Early History of Nupedia and Wikipedia: A Memoir.” Part I http://features.slashdot.org/article.pl?sid=05/04/18/164213&tid=95&tid=149&tid=9 Part II: http://features.slashdot.org/article.pl?sid=05/04/19/1746205&tid=95.

Tang, Hangwi and Jennifer Hwee Kwoon Ng. 2006. “Googling for a diagnosis—use of Google as a diagnostic aid: internet based study.” BMJ 333(7570): 1143–1145. URL: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1676146

Wikipedia: Peer Review. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=Wikipedia:Peer_review&oldid=104637689 (accessed January 31, 2007).

Wikipedia. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=Wikipedia&oldid=104645649 (accessed January 31, 2007).

Wired Campus 2006. “Wikipedia Founder Discourages Academic Use of His Creation.” Chronicle of Higher Education. June 12. URL: http://chronicle.com/wiredcampus/article/1328/wikipedia-founder-discourages-academic-use-of-his-creation

----  
