Reverse detail from Kakelbont MS 1, a fifteenth-century French Psalter. This image is in the public domain. Daniel Paul O'Donnell


Teaching prescriptive grammar hurts student writing

Posted: Jan 22, 2014 14:01;
Last Modified: Mar 04, 2015 05:03

---

_Update: Actually, the chart I was really thinking of can be found here._

The other day in my grammar class, I mentioned an article that reviewed years’ worth of controlled studies into methods of composition instruction. The article I was thinking of was George Hillocks, Jr., “What Works in Teaching Composition: A Meta-Analysis of Experimental Treatment Studies,” American Journal of Education 93.1 (1984): 133–170.


The table I was thinking of in class is from page 157:



I’d overstated this conclusion a little: while teaching grammar was indeed the only thing people did that made student writing worse, I was wrong when I said it had a greater effect in absolute terms than any other method.


On the more general question of whether teaching grammar is effective, here is Hillocks’s conclusion:


Grammar.-The study of traditional school grammar (i.e., the definition of parts of speech, the parsing of sentences, etc.) has no effect on raising the quality of student writing. Every other focus of instruction examined in this review is stronger. Taught in certain ways, grammar and mechanics instruction has a deleterious effect on student writing. In some studies a heavy emphasis on mechanics and usage (e.g., marking every error) results in significant losses in overall quality. School boards, administrators, and teachers who impose the systematic study of traditional school grammar on their students over lengthy periods of time in the name of teaching writing do them a gross disservice that should not be tolerated by anyone concerned with the effective teaching of good writing. Teachers concerned with teaching standard usage and typographical conventions should teach them in the context of real writing problems (160).


Although you need to be careful, because the results are not always independent, this conclusion has been reached time and time again in different contexts over at least the last forty years. One relatively recent study from an English context is: Dominic Wyse, “Grammar. For Writing? A Critical Review of Empirical Evidence,” British Journal of Educational Studies 49.4 (2001): 411–427.

----  

Byte me: Technological Education and the Humanities

Posted: Dec 20, 2008 14:12;
Last Modified: May 23, 2012 19:05

---

Note: Published in "_Heroic Age_ 12":http://www.heroicage.org/issues/12/em.php

I recently had a discussion with the head of a humanities organisation who wanted to move a website. The website was written using Coldfusion, a proprietary suite of server-based software that is used by developers for writing and publishing interactive web sites (Adobe nd). After some discussion of the pros and cons of moving the site, we turned to the question of the software.

Head of Humanities Organisation: We'd also like to change the software.
Me: I'm not sure that is wise unless you really have to: it will mean hiring somebody to port everything and you are likely to introduce new problems.
Head of Humanities Organisation: But I don't have Coldfusion on my computer.
Me: Coldfusion is software that runs on a server. You don't need it on your computer. You just need it on the server. Your techies handle that.
Head of Humanities Organisation: Yes, but I use a Mac.

I might be exaggerating here—I can't remember if the person really said they used a Mac. But the underlying confusion we faced in the conversation was very real: the person I was talking to did not seem to understand the distinction between a personal computer and a network server, the basic technology by which web pages are published and read.

This is not an isolated problem. In the last few years, I have been involved with a humanities organisation that distributes e-mail by cc:-list to its thirty-odd participants because some members believe their email system can't access listservs. I have had discussions with a scholar working on a very time-consuming web-based research project who was intent on inventing a custom method for indicating accents because they thought Unicode was too esoteric. I have helped another scholar who wrote an entire edition in a proprietary word-processor format and needed to recover the significance of the various coloured fonts and type faces he had used. And I have attended presentations by more than one project that intended to do all their development and archiving in layout-oriented HTML.
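A custom accent scheme is exactly the kind of wheel Unicode has already invented. As an illustration (my own, not part of the anecdote), Python's standard unicodedata module handles accented characters in a few lines:

```python
import unicodedata

# "é" can be stored precomposed (one code point) or decomposed
# ("e" plus a combining accent); both are valid Unicode.
precomposed = "\u00e9"   # é as a single code point
combining = "e\u0301"    # e followed by U+0301 COMBINING ACUTE ACCENT

# Normalization makes the two encodings interchangeable for comparison.
assert unicodedata.normalize("NFC", combining) == precomposed
assert unicodedata.normalize("NFD", precomposed) == combining

# Every code point also carries a standard, citable name.
print(unicodedata.name("\u00e9"))  # LATIN SMALL LETTER E WITH ACUTE
```

Nothing here is esoteric: the same normalization forms (NFC, NFD) are defined by the Unicode standard itself and supported in every mainstream programming language.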

These examples all involve basic technological misunderstandings by people actively interested in pursuing digital projects of one kind or another. When you move outside this relatively small subgroup of humanities scholars, the level of technological awareness gets correspondingly lower. We all have colleagues who do not understand the difference between a blog and a mailing list, who don't know how web-pages are composed or published, who can't insert foreign characters into a word-processor document, and who are unable to back up their data or take other basic precautions concerning its security.

Until very recently, this technological illiteracy has been excusable: humanities researchers and students, quite properly, concerned themselves primarily with their disciplinary work. The early Humanities Computing experts were working on topics, such as statistical analysis, the production of concordances, and building the back-ends for dictionaries, that were of no real interest to those who intended simply to access the final results of this work. Even after the personal computer replaced the typewriter, there was no real need for humanities scholars to understand technical details beyond such basics as turning a computer on and off and starting up their word-processor. The principal format for exchange and storage of scholarly information remained paper, and in the few areas where paper was superseded—such as the use of email to replace the memo—the technology involved was so widely used, so robust, and above all so useful and so well supported that there was no need to learn anything about it: if your email and word-processor weren't set up at the store when you bought a computer, you could expect this work to be done for you by the technicians at your place of employment or over the phone by the Help Desk at your Internet Service Provider. Nothing about humanities scholars' use of the technology required special treatment or distinguished them from the University President, a lawyer in a one-person law office... or their grandparents.

In the last half-decade, this situation has changed dramatically. The principal exchange format for humanities research is no longer paper but the digital byte—albeit admittedly as represented in PDF and word-processor formats (which are intended ultimately for printing or uses similar to that for which we print documents). State agencies are beginning to require open digital access to publicly-funded research. At humanities conferences, an increasing number of sessions focus on digital project reports and applications. And as Peter Robinson has recently argued, it is rare to discover a new major humanities project that does not include a significant digital component as part of its plans (Robinson 2005). Indeed some of the most interesting and exciting work in many fields is taking advantage of technology such as GPS, digital imaging, gaming, social networking, and multimedia digital libraries that was unheard of or still very experimental less than a decade ago.

That humanists are heavily engaged with technology should come, of course, as no real surprise. Humanities computing as a discipline can trace its origins back to the relatively early days of the computer, and a surprising number of the developments that led to the revolution in digital communication over the last decade were led by people with backgrounds in humanities research. The XML specification (XML is the computer language that underlies all sophisticated web-based applications, from your bank statement to Facebook) was edited under the supervision of C. Michael Sperberg-McQueen, who has a PhD in Comparative Literature from Stanford and was a lead editor of the Text Encoding Initiative (TEI) Guidelines, the long-standing standard for textual markup in the humanities, before he moved to the W3C (Sperberg-McQueen 2007). Michael Everson, the current registrar and a co-author of the Unicode standard for the representation of characters for use with computers, has an M.A. from UCLA in Indo-European linguistics and was a Fulbright Scholar in Irish at the University of Dublin (Evertype 2003-2006). David Megginson, who has also led committees at the W3C and was the principal developer of SAX, a very widely used processor for XML, has a PhD in Old English from the University of Toronto and was employed at the Dictionary of Old English and the University of Ottawa before moving to the private sector (Wikipedia Contributors 2008).

Just as importantly, the second generation of interactive web technology (the so-called "Web 2.0") is causing the general public to engage with exactly the type of questions we research. The Wikipedia has turned the writing of dusty old encyclopedias into a hobby much like ham-radio. The social networking site Second Life has seen the construction of virtual representations of museums and libraries. Placing images of a manuscript library or museum's holdings on the web is a sure way of increasing in-person traffic at the institution. The newest field for the study of such phenomena, Information Studies, is also one of the oldest: almost without exception, departments of Information Studies are housed in and are extensions of traditional Library science programmes.

The result of this technological revolution is that very few active humanists can now truthfully say that they have absolutely no reason to understand the technology underlying their work. Whether we are board members of an academic society, working on a research project that is considering the pros and cons of on-line publication, instructors who need to publish lecture notes to the web, researchers who are searching JSTOR for secondary literature in our discipline, or the head of a humanities organisation that wants to move its web-site, we are all increasingly involved in circumstances that require us to make basic technological decisions. Is this software better than that? What are the long-term archival implications for storing digital information in format x vs. format y? Will users be able to make appropriate use of our digitally-published data? How do we ensure the quality of crowd-sourced contributions? Are we sure that the technology we are using will not become obsolete in an unacceptably short period of time? Will on-line publication destroy our journal's subscriber base?

The problem is that these are not always questions that we can "leave to the techies." It is true that many universities have excellent technical support and that there are many high-quality private contractors available who can help with basic technological implementation. And while the computer skills of our students are often over-rated, it is possible to train them to carry out many day-to-day technological tasks. But such assistance is only as good as the scholar who requests it. If the scholar who hires a student or asks for advice from their university's technical services does not know in broad terms what they want or what the minimum technological standards of their discipline are, they are likely to receive advice and help that is at best substandard and perhaps even counter-productive. Humanities researchers work on a time-scale and with archival standards far beyond those of the average client needing assistance with the average web-site or multimedia presentation. We all know of important print research in our disciplines that is still cited decades after the date of original publication. Not a few scholarly debates in the historical sciences have hinged on questions of whether a presentation of material adequately represents the "original" medium, function, or intention. Unless he or she has special training, a technician asked by a scholar to "build a website" for an editorial project may very well not understand the extent to which such questions require the use of different approaches to the composition, storage, and publication of data than those required to design and publish the athletic department's fall football schedule.

Even if your technical assistant is able to come up with a responsible solution for your request without direction from somebody who knows the current standards for Digital Humanities research in your discipline, the problem remains that such advice almost certainly would be reactive: the technician would be responding to your (perhaps naive) request for assistance, not thinking of new disciplinary questions that you might be able to ask if you knew more about the existing options. Might you be able to ask different questions by employing new or novel technology like GPS, serious gaming, or social networking? Can technology help you (or your users) see your results in a different way? Are there ways that your project could be integrated with other projects looking at similar types of material or using different technologies? Would your work benefit from distribution in some of the new publication styles like blogs or wikis? These are questions that require a strong grounding in the original humanistic discipline and a more-than-passing knowledge of current technology and digital genres. Many of us have students who know more than we do about on-line search engines; while we might hire such students to assist us in the compilation of our bibliographies, we would not let them set our research agendas or determine the contours of the projects we hire them to work on. Handing technological design of a major humanities research project over to a non-specialist university IT department or a student whose only claim to expertise is that they are better than you at instant messaging is no more responsible.

Fortunately, our home humanistic disciplines have had to deal with this kind of problem before. Many graduate, and even some undergraduate, departments require students to take courses in research methods, bibliography, or theory as part of their regular degree programmes. The goal of such courses is not necessarily to turn such students into librarians, textual scholars, or theorists—though I suppose we wouldn't complain if some of them discovered a previously unknown interest. Rather, it is to ensure that students have a background in such fundamental areas sufficient to allow them to conduct their own research without making basic mistakes or suffering unnecessary delays while they discover by trial-and-error things that might far more efficiently be taught to them upfront in the lecture hall.

In the case of technology, I believe we have now reached the stage where we need to be giving our students a similar grounding. We do not need to produce IT specialists—though it is true that a well-trained and knowledgeable Digital Humanities graduate has a combination of technological skills and experience with real-world problems and concepts that are very easily transferable to the private sector. But we do need to produce graduates who understand the technological world in which we now live—and, more importantly, how this technology can help them do better work in their home discipline.

The precise details of such an understanding will vary from discipline to discipline. Working as an Anglo-Saxonist and a textual critic in an English department, I will no doubt consider different skills and knowledge to be essential than I would if I were a church historian or theologian. But in its basic outlines such an orientation to the Digital Humanities probably need not vary too much from humanities department to humanities department. We simply should no longer be graduating students who do not know the basic history and nature of web technologies, what a database is and how it is designed and used, the importance of keeping content and processing distinct from each other, and the archival and maintenance issues involved in the development of robust digital standards like Unicode and the TEI Guidelines. Such students should be able to discuss the practical differences (and similarities) of print vs. web publication; they should be able to assess intelligently from a variety of different angles the pros and cons of different approaches to basic problems involving the digitisation of text, two and three-dimensional imaging, animation, and archival storage and cataloguing; and they should be acquainted with basic digital pedagogical tools (course management and testing software; essay management and plagiarism detection software) and the new digital genres and rhetorics (wikis, blogs, social networking sites, comment boards) that they are likely to be asked to consider in their future research and teaching.
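The principle of keeping content and processing distinct can be sketched in a few lines. The fragment below is hypothetical (loosely modelled on TEI-style descriptive markup, not taken from any project mentioned here): the markup records what the text is, while presentation is decided separately, at processing time.

```python
import xml.etree.ElementTree as ET

# Hypothetical descriptive markup: the tag records that the opening word
# was supplied by the editor; it says nothing about how to display it.
source = '<l><supplied>Hwaet</supplied> we Gardena in geardagum</l>'
line = ET.fromstring(source)

# Presentation is a separate processing step applied to the same data.
# For brevity this sketch assumes every child element is <supplied>.
def render(elem, supplied_style):
    parts = [elem.text or ""]
    for child in elem:
        parts.append(supplied_style.format(child.text or ""))
        parts.append(child.tail or "")
    return "".join(parts)

print(render(line, "[{}]"))       # print convention: square brackets
print(render(line, "<i>{}</i>"))  # web convention: italics
```

Because the source records meaning rather than appearance, the same file can be re-rendered decades later under new conventions without touching the data—precisely the archival property layout-oriented HTML or coloured word-processor fonts cannot offer.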

Not all humanists need to become Digital Humanists. Indeed, in attending conferences in the last few years and observing the increasingly diverging interests and research questions pursued by those who identify themselves as "Digital Humanists" and those who define themselves primarily as traditional domain specialists, I am beginning to wonder if we are not seeing the beginnings of a split between "experimentalists" and "theorists" similar to that which exists today in some of the natural sciences. But just as theoretical and experimental scientists need to maintain some awareness of what each branch of their common larger discipline is doing if the field as a whole is to progress, so too must there remain an interaction between the traditional humanistic and digital humanistic domains if our larger fields are also going to continue to make the best use of the new tools and technologies available to us. As humanists, we are, unavoidably, making increasing use of digital media in our research and dissemination. If this work is to take the best advantage of these new tools and rhetorics—and not inadvertently harm our work by naively adopting techniques that are already known to represent poor practice—we need to start treating a basic knowledge of relevant digital technology and rhetorics as a core research skill in much the same way we currently treat bibliography and research methods.

Works Cited

Adobe. nd. "Adobe Coldfusion 8." http://www.adobe.com/products/coldfusion/

Evertype 2003-2006. "Evertype: About Michael Everson." http://www.evertype.com/misc/bio.html

Robinson, Peter. 2005. "Current issues in making digital editions of medieval texts—or, do electronic scholarly editions have a future?" DM 1.1 (2005): http://www.digitalmedievalist.org/journal/1.1/robinson/

Sperberg-McQueen, C. M. 2007. "C.M. Sperberg-McQueen Home Page." http://www.w3.org/People/cmsmcq/

Wikipedia contributors. 2008. "David Megginson." Wikipedia. http://en.wikipedia.org/w/index.php?title=David_Megginson&oldid=257685665
