---
title: Judging the painting (research) without the frame (the journal)
layout: post
---

It is a common step in the ongoing reform of research practices to criticize the set of proxy measures that we use to evaluate research. [I've certainly done this](http://www.cambridge.org/gb/academic/subjects/general/open-access-and-humanities-contexts-controversies-and-future?format=PB&isbn=9781107484016). The argument for DORA -- the [San Francisco Declaration on Research Assessment](http://www.ascb.org/dora/) -- is that one should judge the research work itself, not where it was published. Parts of DORA read:

> When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.

and

> Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs.

Worthy sentiments indeed. Every "top" journal has published bad material and every "bad" journal could contain good work. There is, though, a problem with this judging "the painting without the frame" mentality, namely: we're not very good at it. As [we recently noted](https://doi.org/10.1057/palcomms.2016.105):

> In a somewhat controversial work, Peters and Ceci submitted papers in slightly disguised form to journals that had previously accepted them for publication (Peters and Ceci, 1982; see Weller, 2001 for a critique). Only 8% overall of these resubmissions were explicitly detected by the editors or reviewers to which they were assigned. Of the resubmissions that were not explicitly detected, approximately 90% were ultimately rejected for methodological and/or other reasons by the same journals that had previously published them; they were rejected, in other words, for being insufficiently “excellent” by journals that had decided they were “excellent” enough to enter the literature previously.

In other words, yes, it is terrible practice to evaluate research work through the framing vehicles in which it is found, but without those frames we seem to fare equally terribly. I am also reminded here of I.A. Richards's work in literary criticism. In _Practical Criticism_, Richards gave his students poems without titles or authors and, to his surprise (but not to mine), they all responded differently.

This is something I've wondered about for the REF. We have peer-review panels for the REF to assess the originality, significance, and rigour of the work that is submitted. However, the work is not blinded; it arrives with the enframing contexts of publication venue that confer value. I have previously suggested that we "blind" the panel to these frames, although it would be very hard to do so for well-known work. (I have also suggested that we should submit the version that was accepted by the publisher, not the final copyedited and proofed version, unless copyeditors and proofreaders are listed as co-authors and will receive a share of the QR.) Such an exercise, though, would, I suspect, lead to a breakdown of consensus among panel members.

Frames have consequences for how we perceive work. They are not solid grounds on which to judge research. They carry serious economic consequences for library budgets. The anarchic alternative would reveal how flaky those grounds for communal value judgements really are, but it would also cause many of our dubious labour-saving practices (in reading time) to collapse.
Perhaps, though, this is the self-reflection that we need.