---
layout: post
status: publish
published: true
title: Metrics in the Arts and Humanities
alias: "/2015/01/15/metrics-in-the-humanities"
wordpress_id: 3352
wordpress_url: https://www.martineve.com/?p=3352
date: !binary |-
  MjAxNS0wMS0xNSAxOTo1Mjo1NyArMDEwMA==
date_gmt: !binary |-
  MjAxNS0wMS0xNSAxOTo1Mjo1NyArMDEwMA==
categories:
- Open Access
- Academia
- Conference Papers
tags:
- Open Access
- metrics
- HEFCE
comments: []
---
Tomorrow I will be speaking at the HEFCE workshop "Metrics and the assessment of research quality and impact in the arts and humanities", commissioned by the independent review panel. Here are some notes on what I am planning to say. These are just brief notes for a ten-minute talk. They're not particularly nuanced, but I thought they were worth sharing.
Briefly: open access means making academic research freely available to read and re-use online, with some provisos. We can achieve this through the gold or green routes. Gold means that the material is free to read/reuse at the publisher (it does not refer to any specific business model, but it does imply a different business model to traditional subscription economics). Green means that the researcher deposits a version in an institutional repository to be made openly accessible there. Gratis open access means the material is just free to read. Libre open access means that research work can also be re-used, usually under an open licence such as those designed by Creative Commons.
Many arguments surrounding open access intersect with debates on metrics.
1.) The economic climate is fairly grim for some institutions and funders. Open access is often situated within the economic problems of scholarly communications, such as the so-called serials crisis (in which we cannot afford to subscribe to all the work that would be ideal because of hyper-inflationary price increases triggered by a number of factors). There is an ever-increasing pressure on research funds. Metrics would be used to decide where to assign funds in a time of scarcity. Any metric system must, therefore, be fully piloted to see how it fares compared with previous research assessment findings. This then raises a question: is the point of a new metric system simply to replicate existing mechanisms of research concentration? Or are we measuring and valuing something new?
2.) Advocates of open access frequently point to greater public access as a beneficial outcome of making work freely available online. This coincides with the UK's "impact agenda". We often have no idea where or how work is being used or read, though. An increasing portion of the population holds humanities degrees, but we have no idea how much continued engagement those graduates have with the academy. The OA movement might benefit from metrics that could track such engagement, as ammunition for its claims of public appetite. Such undertakings would, however, also put taxpayer-based arguments for OA to the test.
3.) Re-use is the other linchpin of the OA movement. It was introduced to combat a prohibitive copyright situation in which scholars and others cannot re-use the work of salaried academics. The centre-right economic argument here is the potential for economic benefits at sites distant from the academy. As noted below, though, this poses a problem for many systems of metrics as they currently stand.
Metrics are financial power for institutions and publishers. As was seen in the institutional spin on the multi-dimensional dataset of REF 2014, a plurality of metrics will lead to a plurality of competitive/competing value claims. The victors will be awarded funding. Publishers, though, also profit from this. We see strong claims from HEFCE and REF panel members that publication venue/brand are not used as assessment factors in the judgement of the REF. At the same time, institutions behave as though HEFCE had said the exact opposite and encourage researchers to publish with known entities, reinforcing current market logic and, in some cases, monopolistic practice. Metrics, even if targeted at the researcher level, will be the next battleground. Journals in which the majority of authors score highly on author-level metrics will make value claims that causally associate publication within their walls with high scores. This will entail a continued financial contribution to those entities.
Metrics, in other words, are another proxy measure for research. They are a quantification of a symbolic capital that maps onto material capital. Pay careful attention to the entities most keen on them. Publishers will use success in metrics, even at the researcher level, to strengthen their market positions. We must consider who advocates for metric-driven approaches and what the material consequences are for library budgets.
Populist vs. esoteric research must be considered. Is public reach everything for the contemporary humanities? Is it now more important, in an age of privatised HE teaching, that we reach the public? What is the contribution from the humanities, otherwise? By contrast, does this yield conservatism in the selection of topics: a "canon" of approved areas that are pre-valued? OA extends public reach, but taking public reach as the sole measure of value would be problematic. This conversation cannot be divorced from the ongoing debates on the roles and purposes of the humanities.
Derivatives and green OA are badly tracked. Technology is currently poor at accurately tracking citations to derivatives and back-crediting original authors. If we want a system of quantitative accreditation, it must accurately map the network of influence. The green route for OA is also not well tracked for aggregated statistics. DOI uptake among certain fields of humanities publication remains poor, which poses problems. In fields with fewer citations, such as the humanities and social sciences (HSS), missing these could be catastrophic. If funding decisions rested on such incomplete measures, missed citations could even invite legal challenge.
Retrospective modelling must be comprehensively evaluated and questions of what is measured should be answered.
The main purpose of metrics is financial savings. Is there evidence that metrics will stop institutional game-playing of the type currently experienced in REF or is the central cost saving at the sub-panel level? (No to former, yes to latter is my suspicion.)
We must beware of the tendencies to objectify and fetishize quantitative values, even when they are proxies for qualitative phenomena, and to use these for inter-disciplinary comparison.