---
title: "Open Access Resources and Evaluation; or: why OA journals might fare badly in terms of conventional usage"
layout: post
---

I am frequently asked, by libraries, to provide usage statistics for their institutions at the Open Library of Humanities. I usually resist this, since there are several ways in which such metrics do not offer a fair comparison with subscription resources. A few notes on this.

1. We do not have, or require, any login information. This means that the only way we can provide usage information is by matching against institutional IP addresses, which in turn means that we can only capture on-site access (a sketch after this list shows the mechanics). This is not the same for journals that have paywalls: they can capture a login from off-site and attribute those views to the institution. If you compare the usage of OA journals against paywalled journals, therefore, the paywalled journals will likely show higher usage, because they include off-site access that OA journals cannot see (though Knowledge Unlatched did some [interesting work on geo-tracking of off-site access](http://www.knowledgeunlatched.org/2016/10/library-usage/)). Further, our authors may deposit copies of their work in institutional repositories or anywhere else -- and we encourage this. Again, though, this decentralisation makes meaningful statistical tracking very hard.

2. Different institutions want us to report on different things. Some want to know "are our academics publishing in OLH journals?" while others want to know "are our academics reading OLH journals?" The reporting requirements for these are different, and the OLH ends up being judged against different criteria depending on institutional priorities.

3. We run a platform composed of several different pieces of journal technology: we have journals at Ubiquity Press; we have journals running on Janeway; and we have journals running on proprietary systems at places like Liverpool University Press. These all use different reporting systems and require us to interact with different vendors for different usage requests. Reporting in this way requires me to take time away from running other parts of the platform. In short: the labour overhead of this type of reporting is fairly large and adds to the overall costs of running the platform.

4. There is a privacy issue in tracking our readers. At a time when the US government has [banned the use of the term "climate change"](https://www.theguardian.com/commentisfree/2017/aug/08/trump-administration-climate-change-ban-usda), it seems reasonable to worry that tracking users, by IP address, in logs that could be subpoenaed could carry genuine risk (a sketch of one possible mitigation closes this post). Indeed, as a _library_, it feels important to us to protect our readers.

5. View counts are a terrible proxy for actual reading.

6. Our mission is to convert subscription journals to an OA basis. Libraries have been asked, at each stage, to vote on this, and they have done so enthusiastically. We hope that, in doing so, libraries recognise what we are doing and will not just resort to crude usage rankings in deciding whether to continue supporting us (and, indeed, most do). But I can also see the temptation, given current budget difficulties, to fall back on usage statistics as a ranking of where to invest.

I am happy to keep having the conversation about metrics with libraries, but I think it is also important to acknowledge all of the above factors that sit behind requests for usage metrics.
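To make the asymmetry in point 1 concrete, here is a minimal, hypothetical sketch of login-free, IP-based attribution. Nothing here reflects our actual code or any real institution's network: the names are invented and the address ranges are drawn from the reserved documentation blocks. The logic simply shows why a platform without logins can only ever credit on-campus requests.

```python
# A hypothetical sketch of login-free, IP-based attribution. The institution
# names and CIDR ranges below are invented (RFC 5737 documentation blocks).
import ipaddress

# Each institution registers the address ranges of its campus network.
INSTITUTIONAL_RANGES = {
    "Example University": [ipaddress.ip_network("192.0.2.0/24")],
    "Sample College": [ipaddress.ip_network("198.51.100.0/25")],
}

def attribute(request_ip: str) -> str | None:
    """Return the institution whose range contains request_ip, or None
    for an off-site reader (at home, on a train, in a coffee shop)."""
    addr = ipaddress.ip_address(request_ip)
    for institution, networks in INSTITUTIONAL_RANGES.items():
        if any(addr in network for network in networks):
            return institution
    return None  # off-site: invisible to per-institution reporting

print(attribute("192.0.2.42"))   # on-campus  -> "Example University"
print(attribute("203.0.113.7"))  # off-campus -> None
```

A paywalled platform running the equivalent check has a fallback: when the address matches nothing, it can still attribute the view via the reader's institutional login. We have no such fallback, so every off-site read simply vanishes from per-institution reports.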
It is also true that we can produce aggregate usage metrics across all of our articles, and the figures [look pretty good](https://www.openlibhums.org/news/55/). The challenge arises at the level of per-institution breakdowns.
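Finally, to return to the privacy worry in point 4: one possible mitigation, sketched here hypothetically rather than as a description of what we or any of our vendors actually do, is to truncate IP addresses before they are ever written to access logs. Coarse, network-level counting survives, but a subpoenaed log can no longer identify an individual reader.

```python
# A hypothetical sketch of anonymising IP addresses at log-write time.
# Host bits are zeroed before logging, so the log retains only a network
# prefix: coarse counting survives, individual readers do not.
import ipaddress

def truncate_ip(request_ip: str) -> str:
    """Keep a /24 prefix for IPv4 and a /48 prefix for IPv6."""
    addr = ipaddress.ip_address(request_ip)
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(network.network_address)

print(truncate_ip("192.0.2.42"))      # -> "192.0.2.0"
print(truncate_ip("2001:db8::1234"))  # -> "2001:db8::"
```

The trade-off is deliberate: because the host bits are discarded at write time, there is nothing identifying to hand over later.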