This post was republished as ‘Evaluating Scholarly Digital Outputs: The 6 Layers Approach’, Journal of Digital Humanities 1:1 (Fall 2012).
The topic of appropriate standards for the evaluation of scholarly digital outputs has come up in conversation at my institution (the University of Canterbury, New Zealand) recently, and I’ve realised I don’t have a ready or simple answer. I usually reply that such standards are extremely important because we need to ensure scholarly digital outputs meet the same standards as, say, monographs, but that they’re still evolving; the conversations normally don’t go much further than that. This post, then, is an attempt to get my thoughts down on paper so I can point colleagues to a handy URL summarising them. Much of it will merely repeat common knowledge for digital humanists, but it might be of interest to others.
For a start, as someone employed as a Senior Lecturer in Digital Humanities, standards are of inestimable importance to me. While the ‘big tent’ philosophy of the digital humanities (low barriers to entry and a supportive, welcoming community) is of central importance, it is becoming almost equally important that we can justify our existence within the academy to tenure and hiring committees, and of course to funding agencies. For the digital humanities to have a future, we need to be able to articulate what ‘good’ quality is. Without that, even setting aside the potential effects on the scholarly reputation of the field, there is a danger that our students will go out into the job market claiming digital skills that potential employers don’t recognise as useful, or as being at the necessary level. In short, while we want to encourage all-comers, we also need to initiate people into an evolving set of standards that mark the difference between ‘welcome contribution’ and ‘scholarly output’. All content is welcome, and no one is going to produce high-quality scholarly outputs in all the different varieties of the digital humanities, but work claiming to be ‘scholarly grade’ is expected to meet certain criteria.
I’ve added some references below to get people started; the special edition of Profession is probably the best place to begin. I have an opinion of my own, though. My feeling is that, in simple terms, there are five levels of standards met by most digital humanities projects, and a sixth that doesn’t really make the grade at all. This isn’t a hierarchical scale so much as a classification scheme describing types of projects seen ‘in the wild’. Not all digital humanities outputs are intended to be Category 1, for instance. Some, like this blog post, serve a quite different function. Other projects are produced by people just starting out with a new technology, so there is little chance the product will reach the standard required for tenure or review. They might be experienced digital humanists trying out a new method or experimenting with something likely to fail, or they might be beginners learning the ropes. In short, these are ‘layers’ that all contribute in important ways to the digital humanities ecosystem. Each layer has a function and is in many ways interdependent with the others. To denigrate any layer is to undermine our broader purpose. To paraphrase a friend of mine, ‘they are what they are’.
- Category 1: The scholar has built the output themselves, or been a key driver in its technical design and build. The output has been driven and project-managed by the scholar, often with external funding, and with a high degree of technical input in both the design and build phases. The output is complex and/or wide-ranging (in terms of project scope or technical complexity) and a highly innovative contribution to the field. It conforms to accepted standards in both the digital humanities and computer science. Significant and robust review milestones, including international feedback, have been used during all phases of the project. Usage reports (where relevant or possible) indicate high engagement with the output from an international audience. The output has gained widespread recognition in both the scholarly and digital humanities communities, and perhaps the broader media. It is sustainable, backed up, and governed by good data management standards.
- Category 2: The scholar has built the output themselves, or been a key driver in its technical design and build (in this category, because outputs tend to be of smaller scope than Cat.1, the expectation is really that the scholar has built it themselves, or been an integral part of the team that did). It either conforms to accepted standards in both the digital humanities and computer science, or provides a conscious and challenging departure from them. The product is of limited scope, but represents an innovative contribution to the field and has gained significant recognition in the scholarly community, the digital humanities community, or the broader media. Usage reports (where relevant or possible) indicate high engagement with the output from an international audience.
- Category 3: The output has been built by an external service unit or vendor with no technical input from the scholar, but the scholar has been closely involved in the design and build phases, and contributed high quality content of some form (data or text, perhaps). The product conforms to some standards in either the digital humanities or computer science, but these are loosely applied and/or incompletely implemented.
- Category 4: The output has been built by an external service unit or vendor with no technical input from the scholar. It does not conform to generally accepted standards in either computer science or the digital humanities. The scholar, however, has provided high quality content of some form (data or text, perhaps) and the product is of use to general users and researchers.
- Category 5: This is a catch-all layer for all the wonderful stuff the digital world enables – the ephemera of digital scholarship. Examples include blog posts, tweets, small contributions to code repositories, and so on. It’s also the category that shows why a slightly relativistic attitude is needed towards this scheme, because Cat.5 outputs are incredibly important to the digital humanities. They are our flotsam and jetsam, the glue that keeps the community humming.
- Category 6: Rarely seen, and generally politely ignored when it does appear. Outputs in this category conform to no standards, scholarly or otherwise, indicate little or no understanding of current discourses and practices in the digital humanities, and include poor-quality data or content.
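For readers who like to operationalise this sort of thing, here is a minimal, purely illustrative sketch (in Python – my choice, not anything from the original post) of how the six layers might be encoded as data, say for tagging projects during a departmental review. The one-line summaries are my own lossy paraphrases of the descriptions above; as I note throughout, these are fuzzy layers, not a strict rubric.

```python
# Illustrative sketch only: encoding the six layers as simple data so that
# projects could be tagged consistently, e.g. in a review spreadsheet.
# The summaries paraphrase the category descriptions in this post.
from dataclasses import dataclass


@dataclass(frozen=True)
class Layer:
    number: int
    summary: str


LAYERS = [
    Layer(1, "Scholar-built or scholar-driven; complex and innovative; conforms "
             "to DH and CS standards; robustly reviewed; sustainable; "
             "high international engagement."),
    Layer(2, "Scholar-built; limited scope but innovative; conforms to standards "
             "or consciously departs from them; significant recognition."),
    Layer(3, "Externally built; scholar closely involved in design and supplied "
             "high-quality content; standards loosely or partially applied."),
    Layer(4, "Externally built with no technical input from the scholar; no "
             "standards conformance; high-quality content of general use."),
    Layer(5, "Digital ephemera: blog posts, tweets, small code contributions."),
    Layer(6, "No standards, scholarly or otherwise; poor-quality data or content."),
]

if __name__ == "__main__":
    for layer in LAYERS:
        print(f"Category {layer.number}: {layer.summary}")
```

Any real implementation would need far richer criteria (review milestones, usage evidence, sustainability plans) and, more importantly, human judgement; the point of the sketch is only that the layers can be named and compared, not mechanically scored.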
This is only my very broad-brush take on the subject. The reason I present it here is that, as in any other field, the really important thing with digital humanities outputs is that their producers understand where the output fits within the broader intellectual context. This won’t always be the case – we always hope that something will come from left field – but such understanding indicates both a grasp of the field and respect for it. In general, though, I expect that builders of DH outputs have consciously designed and positioned their product within the broader landscape of DH, and understand that there is a broader matrix of standards and expectations alive in the community. Although I’ve noticed that as the field grows only Cats. 1, 2, and 5 tend to get much airtime, it really doesn’t matter which category the final product falls into… unless it’s Cat.6, and even then people don’t tend to get too bothered: it is what it is. I should also note that I’ve referred to ‘the scholar’ in the singular above, but this is rarely the case in DH projects. For a good example of the growing discourse about collaborative authorship, see http://faircite.wordpress.com/.
For further reading on evaluation of DH projects, and links to other resources, see:
Profession 2011 (November 2011).
Modern Language Association, “Guidelines for Evaluating Work in Digital Humanities and Digital Media”, http://www.mla.org/resources/documents/rep_it/guidelines_evaluation_digital.
Modern Language Association, “Guidelines for Editors of Scholarly Editions”, http://www.mla.org/resources/documents/rep_scholarly/cse_guidelines.
University of Nebraska–Lincoln, “Recommendations for Digital Humanities Projects”, http://cdrh.unl.edu/articles/best_practices.php.
Todd Presner, “Evaluating Digital Scholarship”, http://idhmc.tamu.edu/commentpress/digital-scholarship/.