Reverse detail from Kakelbont MS 1, a fifteenth-century French Psalter. This image is in the public domain. Daniel Paul O'Donnell

Could we design comparative metrics that would favour the humanities?

Posted: Mar 29, 2015 13:03;
Last Modified: Mar 29, 2015 17:03

---

A quick, and still partially undigested, posting on metrics that might favour the humanities over the sciences in “open” competitions. I’m working this out in response to a discussion I had recently with a senior administrator who argued that the University’s tendency to channel resources disproportionately to the Natural Sciences was simply the result of their comparative excellence as measured in “open” competitions.

For a supposed “Liberal Arts” University, the University of Lethbridge is exceptionally bad at supporting the Humanities

As I’ve pointed out before, for a supposed Liberal Arts University, the University of Lethbridge is exceptionally poor in its support for the Humanities. While the Humanities suffer from a lack of resources and attention in comparison to the Social and especially Natural Sciences at all Universities, the University of Lethbridge is a national outlier in the way it has starved its researchers in this area over the last quarter century.

Thus, for example, while our HSS (Humanities and Social Sciences) researchers score at about the 50th percentile on a field-normalised basis in terms of their research impact, we come in fourth-last in terms of our funding success compared to other Humanities and Social Science researchers at Canadian Universities (our natural scientists, in contrast, come in at the top of the bottom third in Canada in terms of both impact and funding success).

Poor performance can be attributed in part to administrative monocultures

There are probably a number of reasons for this mismanagement. But one of them is almost certainly the fact that the University has, over that same quarter century, been managed almost entirely without participation from Humanists. In the last quarter-century, only two people with a background in the Humanities have been members of our senior administration—and one of these was a Historian managing our Faculty of Health Sciences. Two years ago, we appointed a classicist as Dean of Arts and Science. This is the first time in 25 years that a Humanist has been in a position to control a budget that actually affects Humanities research.

My argument has been that this lack of disciplinary breadth in our senior administration is largely responsible for our poor support for the Humanities (there have been more administrators from the Social Sciences and, not surprisingly, I would argue, they have tended to do better than the Humanities in terms of gaining resources). It is a natural impulse to find the things you understand more important than the things you do not, and an equally natural impulse to unconsciously favour those who share your background and training. Just as our (almost exclusively) male senior administration has tended to find other men to be the most suitable people for vacancies as they have come up, so too an administration that consists (almost exclusively) of natural scientists has tended to think that those are the areas that could make the best use of resources like Canada Research Chairs and Board of Governors Research Chairs (until two years ago, the University of Lethbridge—almost uniquely in Canada—had never appointed a Canada Research Chair in the Humanities and only one in the Social Sciences; it has never appointed a Humanist to a Board of Governors Research Chair).

Or could it be that our Humanists are simply worse than our scientists?

Recently a member of the Senior Administration suggested to me that my analysis of the problem at the U of L was wrong because Research Chairs and similar resources are now being awarded competitively on the basis of open, University-wide competitions (they used to be simply assigned by the Vice President Academic). If natural scientists are winning these resources, this person's argument went, then it is presumably because they are simply better.

Moreover, the committees that make these awards are interdisciplinary. So it is no longer the case that these resources are being assigned solely by scientist-administrators who know nothing about the domain. While we may not have that many Humanists in our administration, the scientists we do have are being careful to overcome their bias by allowing the different disciplines to compete against each other.

There is no such thing as a truly “open” cross-disciplinary competition

But is there such a thing as a truly “open” competition across disciplines? The skills and activities that make you a good English professor, for example, may not be the same as those that make you a good Biologist. And within our different disciplines, we reward people for different kinds of activities (for an excellent discussion of this, see How Professors Think: Inside the Curious World of Academic Judgment by Michèle Lamont). Given this, it is an open question to what extent the outcome of these competitions is being shaped by the criteria that are being used to adjudicate them.

And, in fact, the criteria we usually use in these cases tend to favour the sciences: publication and citation counts, impact factors and h-indices are all measurements that are better suited to measuring activity in a field that moves quickly and deals in largely incremental and linear development. While there are problems with the use of such metrics even within the Sciences, there is no indication that they represent an adequate method for identifying excellence in other domains.
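To see why, consider what the h-index actually measures: the largest number h such that a researcher has h publications with at least h citations each. Here is a minimal sketch in Python; the citation counts are invented purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that h publications have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# A lab scientist with many short, quickly cited papers (hypothetical counts):
print(h_index([25, 18, 12, 9, 7, 5, 4, 3]))  # -> 5
# A humanist with three long, heavily cited works (hypothetical counts):
print(h_index([40, 15, 6]))                  # -> 3
```

Because h can never exceed the number of publications, a scholar whose output is a handful of books is structurally capped at a low h-index, no matter how influential those books prove to be.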

Using the wrong criteria can reward sub-optimal behaviour and hide excellence

Indeed, it is even possible that they might hide excellence or reward sub-optimal behaviour in some domains, even as they recognise and reward excellence in others. Many Humanities disciplines, for example, treat “the book” and/or lengthy articles as a measure of scholarly maturity. Publication counts—which reward scholars for avoiding synthesis by dividing work into minimum publishable units—are going to be a very poor measure of success in such fields. In English, for example, we tend to see books as being evidence of excellence; somebody who wants to beat a scientist in an open competition in terms of publication counts, however, would almost certainly do better to concentrate on Notes, one of our more minor forms of publication.

Could we reverse the tables and create a structural bias in favour of things that make Humanities research excellent?

All this got me thinking: what would it take to reverse the tables on these “open” competitions? I.e. what metrics could I come up with that, while seeming neutral, might actually provide a structural advantage to Humanists over Natural Scientists in head-to-head competitions? In the spirit of “notes for further research,” here are a few guesses:

  1. Average length of contribution (the L-index). Anybody who has ever sat on cross-disciplinary promotion committees knows that page count means different things in different disciplines. In many Humanities disciplines, the best work tends to be synthetic: i.e. things that gather together various views and opinions and construct a larger synthesis. This is opposed to many sciences, where short reports of discrete results are privileged. Our current use of publication counts privileges fields in which it is possible to think in terms of “minimal publishable units.” But what if we came up with a measure that privileged synthesis? A person who has published a few long works (i.e. has a high average length per publication) is probably a poor scientist; but they are also probably a stronger humanist. I’d be interested to see how we’d do if we started counting length of publications alongside their number.
  2. Length of Citation Record. I published my edition of Caedmon’s Hymn 10 years ago. The edition it replaced was published 70 years before that. Both works are still being cited, and indeed my edition has recently been the subject of a major review article. This is not the result of any special excellence on my part or the part of the predecessor edition: it is in fact not uncommon in the Humanities to see references to a “recent” study that is ten to fifteen years old. What this suggests, then, is that length of citation record is probably an important measure of Humanities research success. Once again, it is probably a poor measure of scientific research success, where research development is more incremental and linear, except perhaps in the case of a few groundbreaking examples. But this is also why the h-index (which in practice is a measure of speed of citation rather than longevity of citableness) favours scientists over Humanists.
  3. Diachronic citation trend. My edition of Caedmon’s Hymn is also getting cited more now than it was when it was first published. In fact, in work I am planning to present this summer, I will show that it takes about 15-20 years for an edition to become “standard” (i.e. cited by everybody). This is also probably true of our greatest and most important works of literary theory, history, and philosophy: it takes a while for syntheses to catch on and influence thinking. Once again, this is opposed to much of the sciences. While some work (again, largely field-changing, fundamental work) probably does have a long and upwardly rising citation trend, I suspect most science publications (including much of the very best work) have a citation half-life—that is to say that their citations fall off with time as the field moves on. In the Humanities, while declining citation is probably common too, it is not a good sign: the best Humanities work gets cited with increasing frequency through time. (A rough sketch of how all three of these metrics might be computed follows this list.)
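To make these guesses concrete, here is a rough sketch of how all three metrics might be computed from a publication record. This is thought-experiment code, not a proposal: the record format, the page counts, and the citation years below are entirely invented for illustration.

```python
# Sketch of the three proposed metrics. All data is hypothetical,
# invented purely to show how the calculations might work.

# A record is a list of (pages, [years in which the work was cited]).
humanist = [
    (350, [2001, 2004, 2008, 2011, 2013, 2014, 2015, 2015]),  # a book
    (45,  [2006, 2010, 2012, 2015]),                          # a long article
]
scientist = [
    (6, [2010, 2010, 2010, 2011]),  # short papers, cited quickly...
    (4, [2011, 2012]),              # ...then superseded as the field moves on
    (5, [2013]),
]

def l_index(record):
    """1. Average length of contribution: mean pages per publication."""
    return sum(pages for pages, _ in record) / len(record)

def citation_span(record):
    """2. Length of citation record: years from first to latest citation."""
    years = [y for _, cites in record for y in cites]
    return max(years) - min(years)

def citation_trend(record):
    """3. Diachronic citation trend: least-squares slope of citations per
    year. Positive means the work is cited more as time passes.
    (Assumes citations in at least two distinct years.)"""
    years = [y for _, cites in record for y in cites]
    counts = {}
    for y in years:
        counts[y] = counts.get(y, 0) + 1
    xs, ys = list(counts), list(counts.values())
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

for name, rec in [("humanist", humanist), ("scientist", scientist)]:
    print(f"{name}: L-index={l_index(rec):.1f} pages, "
          f"span={citation_span(rec)} years, "
          f"trend={citation_trend(rec):+.2f} citations/year")
```

On these toy numbers the humanist comes out ahead on all three measures (an L-index of 197.5 vs 5.0 pages, a 14-year vs 3-year citation span, and a rising rather than falling trend), which is of course exactly the point: the choice of discriminator determines the winner.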

Maybe the solution is to compare apples to apples

This is all a thought experiment and is for the most part guesswork rather than research-based. But it is fun to wonder what would happen if the U of L redid some of its recent “open” competitions using criteria like the above as the discriminators. Since, I suspect, these criteria are as unfair to scientists as the ones we currently use are to Humanists, my guess is that the results would be very different.

Of course the better approach is to avoid “open” competitions altogether and instead proceed on a discipline-normalised basis.
