Having recently concluded a long saga to get the first ATLAS underlying event study both through the experiment's internal review procedures and then into the perverse format demanded by the academic journal to which we submitted it, Physical Review D, it seems an apt time to offer a few comments on the state of the sacred academic publishing and peer review process. Over my years in academic research, the tradition -- because it is largely tradition these days -- of academic journals has come to seem more perverse with each passing year. It is certainly an odd business: scientists spend months or years doing research, which we eventually write up and (modulo reviews, iteration and approval from the colleagues whose names will appear on the author list) send to a journal. This final step is treated as somehow magical, both individually by other scientists when weighing publication lists for hiring, and collectively by research councils and other funding bodies when reviewing grants (the review panels again consisting of scientists, though not necessarily from the same field). The emphasis on publication lists suggests a certain blind faith, and a lack of imagination unusual in groups of normally contrary people, when it comes to such revered procedures as peer review: these days the arcane procedures and hoop-jumping required to obtain an oddly-formatted journal reference for your CV often add little value to the publication itself.
I should perhaps give a little background on how research dissemination works in particle physics, for the benefit of any readers unfamiliar with this field. Those outside academia probably assume that all scholarly publishing (and, not unconnectedly, career progression) works the same way, but this is most certainly not true: arts and humanities departments often hire lecturers straight out of their PhDs and view teaching as a burden to be loaded upon them for several years while they fit research into their spare time. Conversely, science departments almost universally hire young people in purely research roles and then, once they have proven their worth as researchers, ensure that they never do any research again by means of a tempting promotion to a role more like that of an early-stage humanities lecturer -- with extra admin thrown in for good measure. It's a funny system, for sure, but the point is that between the arts and the sciences there is a world of difference in how careers evolve, and hence in how working output should be judged. Generalisations are difficult and unwise.
Even within the sciences, for those important early years of research-dominated work, the culture can vary enormously. My own direct experience is limited to (mostly experimental) particle physics, which lies at one extreme of the publishing spectrum, with huge collaborations (of order several thousand members in the case of the large LHC experiments) democratically and alphabetically represented on the author list. This is clearly a far cry from e.g. small biological or medical collaborations, where the order of appearance on the author list is a covert channel by which to convey the role played by each person -- lab grunt who did the work (ugh, how common), their supervisor, provider of funding, etc. Such differences, to my mind, must make it extraordinarily difficult for a mixed panel, as in the case of the cross-disciplinary Royal Society Fellowships, to judge the respectability of publication lists from a variety of publishing cultures. (But then I just failed to make their annual shortlist, so maybe I'm nursing a grudge! I would find it hard, for sure, to meaningfully review a biochemist.) It's not clear how well particle physicists do in this system -- on one hand they are likely to have large publication lists including papers not just by themselves but by any group within the mega-collaboration; on the other, all their listed papers have author lists long enough that scientists unfamiliar with our publishing culture could reasonably judge them all to reflect little on the individual being considered. The sanest approach to the use of HEP publication lists in hiring is, to my mind, to largely ignore them: at present we have forced upon ourselves a collectively deceitful and unfortunate prisoners' dilemma in which even the most honest and self-deprecating researcher with an interest in career progression has to list every paper that can be tenuously linked to themself, knowing that everyone else will be doing the same.
In this system, people who spend their PhDs and early years working hard on non-running experiments suffer compared to those who do similar work on running ones, simply because the latter start their climb up the academic ladder with a vastly inflated list of superficially compelling research output.
And so to journals -- what do they do for us? In reality, particle physics researchers rarely use journals. The vast majority of real research exchange happens via paper "preprints" manually added to the online arXiv system, which dumps a list of newly submitted papers into most of our email inboxes every morning. The "preprint" moniker is intended to imply that these are not "proper" papers, over which magic journal dust has been waved, and accordingly are probably not fully trustworthy. Bollocks! We read, submit and update arXiv PDFs exactly as fully-fledged papers, because we know that's how our science is actually being transmitted to others, as they transmit their findings to us. In the ten years since I began my PhD, I have not once visited a library to check out a dead-tree copy of an academic paper in my field, which raises the question of why we pay so handsomely for our shelves of unthumbed manuscripts. When I do look up journal entries online it tends to be as part of one of my exercises in HEP archaeology, to obtain copies of papers written before the arXiv era. Even this is rarely necessary, thanks to some superbly obsessive historical scanning by the Japanese KEK lab's library. But despite the fact that few if any particle physicists ever actually use academic journals to read about the state of research -- remember that these institutions originated as a sort of collective excitation of the letters exchanged between natural philosophers in the early days of scientific research -- entries in publication lists with those magic journal reference details carry disproportionate weight in the professional assessment of scientists.
As with any unthinking scalar metric applied to judge progress, this leads to gaming the system: chasing a journal publication becomes more important than working with colleagues and driving forward the state of the art; scientists demand that conferences publish their non-peer-reviewed proceedings contributions in collaboration with a journal so that they look more like the mythical point-scoring type of publication in their CVs; and funding bodies accordingly get more formal in their classification of relevant publications -- it's a good old-fashioned arms race. The tail is most definitely wagging the dog.
Getting a paper into a journal is also a time-consuming and awkward process in practice. Take for example the underlying event paper which I mentioned at the start -- I was one of the editors of this paper within ATLAS, which itself required several months of iterating with the physics group and subgroup, then with an internal editorial board, then two stages of whole-collaboration presentation and review (in principle, 3000 people; in practice still probably 20 or 30 active commenters to respond to), and a final round of sign-off by the experiment management. By the end of this process we were both exhausted and pretty sure of the academic credibility of the result. I am proud that in its Standard Model publications ATLAS has not attempted to hastily put out much less useful papers, but stuck properly to the process of doing it well -- an approach which I believe comes from ATLAS' physics publications being driven by grassroots efforts and decisions rather than management diktat. It is unclear to me what an independent peer review could have done to improve this situation. However, getting the paper to that all-important peer reviewer was made painful by the requirements to reformat the paper into a less readable layout per the journal's regulations; to work through the text changing the word "Figure" to "Fig." except when it is the first word in a sentence (what?!); to rename all our semantically-named plots as fig1a.eps, etc.; to mash the whole thing into a single TeX file, eliminating our carefully-tweaked BibTeX setup into the bargain; to mangle nicely optimised figures drawn with LaTeX commands into sub-optimal EPS files; and a plethora of equally irrelevant but very time-consuming tasks. And then to make it all compile on the PRD submission server's antiquated LaTeX installation. Submission to the arXiv is a bit of an art, but nothing like this... this waste of time and effort is what we scientists refer to as a "massive, pointless ball-ache."
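To give a flavour of how mechanical these demands are, the "Figure"-to-"Fig." rule is exactly the sort of thing one ends up scripting rather than doing by hand. A minimal sketch in Python -- the regex and sample text are mine, not anything the journal supplies:

```python
import re

# Replace "Figure" with "Fig." except when it opens a sentence,
# i.e. at the very start of the text or just after ".", "!" or "?"
# plus whitespace. Both negative lookbehinds are fixed-width, which
# is what Python's re module requires.
FIG = re.compile(r'(?<!^)(?<![.!?]\s)\bFigure\b')

def journal_figures(text: str) -> str:
    return FIG.sub('Fig.', text)

sample = "Figure 1 shows the spectrum. We compare with Figure 2."
print(journal_figures(sample))
# -> Figure 1 shows the spectrum. We compare with Fig. 2.
```

Even this sketch ignores edge cases (abbreviations like "cf.", figures at line starts after a hard wrap), which is rather the point: the rule generates busywork out of nothing.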
But surely the important thing is not the little-used availability of a paper in dead-tree form, but the far more important side-effect of the independent peer review afforded it. So, what did our peer review reveal? The reviewers noted that what we did was quite hard, and suggested that we move a label slightly lower in one of our figures (which we'd already had to mash for them). Gee, that was worth it.
So does experiment-internal review realistically obviate the need for journals, at least for analyses which originate in this ground-up way? Bearing that caveat strongly in mind, I am coming to the conclusion that the answer is yes... particularly if the arXiv and other self-publishing preprint systems are augmented by social-media-style comments, reviews and webs of reference from the "user community". But that caveat about grass-roots origins is there for a reason, again based on a recent experience which I think worth recounting: the cautionary tale of the ATLAS heavy ion jet quenching paper. (Note: as far as I'm aware there is no reason not to recount this story, and as David Mitchell pointed out in a recent episode of 10 O'Clock Live, the only sort of body that is "on message" all the time is a cult. But anyway, my apologies to any ATLASsians who consider this some sort of breach of trust or violation of our collective voice.) At the end of November, a couple of weeks into the LHC lead ion run, ATLAS called several urgent whole-collaboration meetings to discuss an exciting new result: the observation of obviously asymmetric jet events in lead ion collisions, where the officially-unvoiced hypothesis is that one of the two jets is being "caught" in a quark-gluon plasma. It's a neat result, but -- and this is important -- a wholly expected one. The collaboration management worked itself into a frenzy over this, perhaps driven by the obvious disagreement of our data with simulations which, as several of us pointed out at the time, were never meant to contain this effect (and had other problems too, which I won't go into here). Several days after the first meeting we were all pointed at a short preliminary note and told that collaboration members had a week's consultation time in which to comment on it.
Unlike our underlying event paper, which was forged in six months of internal and conference notes plus several extra months of review and editorial honing, this one had been cobbled together in a few days and was full of obvious holes -- which is not to criticise the authors: no-one can produce a really complete and polished academic paper on that timescale. Accordingly, the online comments system filled up with 70+ substantial sets of comments and criticisms... and there would have been more had the deadline not been brought forward by three days and another super-urgent meeting called. In this meeting, on a Thursday just before Thanksgiving, we were told that CMS might try to cobble together a similar result and scoop us on Monday when the arXiv re-opened, and that therefore we should try to submit a paper to the journal by 8pm that night! And so it came to pass: ATLAS hurried out a paper with the vast majority of its own members' criticisms unaddressed, and locked down further comments. Those in the collaboration who hadn't attended this second meeting weren't even informed or consulted about this drastic change of schedule until the paper was already submitted. This is what happens when fast-track publication decisions are taken by a small number of people in a frenzy of paranoia. As far as I'm aware, CMS never published that so-threatening paper, there was virtually no media mention of the press releases, and the theory community's response was largely a shrug and a "yeah, so?". Certainly not worth subverting our own standards -- which were applied pretty ruthlessly to the underlying event and other papers -- but as the collaboration management never replied to my email on the subject, I have no idea how on-message I am on that point.
[Update: since first writing this, I found that CMS did indeed publish a measurement of this effect... in fact, it was submitted a few days after I wrote this rant, a mere two months after ATLAS' paranoia drove us to rush out our imperfect paper. From a first look, they have done a more comprehensive job... after all, they had enough time!]
But back to the journals -- did magic peer review save us from ourselves? That would certainly be the conventional logic, and with so many critical (but pleasant and well-meaning) comments from inside the collaboration, surely an independent reviewer would require a lot of corrections? Nope: it was accepted as soon as the journal re-opened for business after the Thanksgiving holiday. With situations like this, and the similar pro forma acceptance of the pointless "Me first!" ALICE paper a week after first data-taking in 2009, my faith in the safety-net of peer review is not so much dented as written off and fit only for the scrapyard. Driving such flawed decisions is the sad fact that journals -- thanks more to their traditional status and connection to funding and career evolution than to any service they still provide -- are extraordinarily profitable businesses, and will make publication decisions influenced as much by "market forces", like the potential for a high-profile, impact-factor-boosting publication (and screw the quality), as by boring old scientific merit. It's ironic that a system like peer review, motivated at least partially by the same scientific awareness of flaw-blindness and self-deception as motivates the double-blinding of experiments, is happily subject to the distorting power of a market for the means of academic dissemination. I don't get asked to review very much, but in the cases where my review has been negative or suggested major changes, the journal's response has not been good: I get the impression that for some journals it's all about throughput, and reviewers who get in the way of that by insisting on higher quality are not popular.
Journals are, as a Deutsche Bank review once commented, a unique business in which all the costly aspects of making the product are provided for free or a nominal fee by outside agents. Scientists do the research, write the paper, review it internally, typeset it (certainly in HEP, where most papers are written by at least one person with a lot of LaTeX experience, the typesetting is likely to be of publication standard at the time of submission), and peer review it. The journal only has to print it and stick it on its website (and perhaps reformat it into a less convenient form, as per some anachronism or other)... and then scientists have to pay -- a lot -- to read it. Except that we don't: we read the same paper, but faster, earlier (by months), and for free, online at the arXiv. The journal publication is completely secondary to the supposed role of disseminating scientific research: it is nowadays merely an anachronistic status symbol of dubious value.
Journals are an extraordinarily profitable business, since they can charge large fees in exchange for virtually no outlay. But this is apparently not enough: they have been [getting more expensive, at an average rate of 7.6% p.a. since 1986](http://www.arl.org/bm~doc/arlstat05.pdf), hence outstripping inflation for decades. The resulting trend that academic libraries spend an ever-increasing proportion of their budgets on journals -- 72% at the last count in 1998, and exceedingly likely to have increased since then -- is known as the ["serials crisis"](http://en.wikipedia.org/wiki/Serials_crisis). For years, academic libraries have been struggling to pay the thousands of pounds that journals charge for providing access to their publications, to the point where they have to be selective about what journals they can afford. The bulk of these costs are unjustifiable profiteering and completely disproportionate to the actual costs involved: not only is all the raw material and reviewing provided gratis, but with the dominant mode of consumption being PDFs on the Web the cost of distribution is also virtually zero. In a move purely to protect their business model, journals refuse to sell online-only access accounts, insisting instead that institutions buy bundles of journals they don't want (in unpopular dead tree form) in order to get online access to one which they do. Accordingly, and shamefully, the very institutions charged with disseminating scientific research are stifling it. This is a case of the tail not only wagging the dog, but strangling it.
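That 7.6% a year compounds brutally. A back-of-envelope sketch of what it implies over a quarter-century -- the 3% general-inflation baseline here is my own round-number assumption, not a figure from the ARL report:

```python
# Compound growth of journal prices at the ARL-reported 7.6% p.a.,
# against a rough 3% p.a. inflation baseline (my assumption),
# over the 25 years from 1986 to 2011.
years = 2011 - 1986
journal_factor = 1.076 ** years
inflation_factor = 1.03 ** years
print(f"journal prices: x{journal_factor:.1f}")    # roughly x6.2
print(f"general prices: x{inflation_factor:.1f}")  # roughly x2.1
```

So even on generous assumptions, a journal subscription costs around three times as much in real terms as it did when the series began -- for a product whose marginal distribution cost has meanwhile dropped to essentially zero.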
For some reason, despite in all objective senses holding all the cards, researchers have not responded with an embargo of journals. We should. The reason we don't is fear: research funding bodies are not the most agile of institutions and their metrics are driven by the assumption that journal publications are a valid Gold Standard for measuring academic output. It is a true Prisoners' Dilemma: it is absolutely in our interests to boycott journals, but if we do not do so collectively and with a single voice, we lose individually.
This seems clear-cut to me, but perhaps that is because it gels with my prevailing political views. Privatising and outsourcing can be good ideas in the right places, if contracted with appropriate restrictions to avoid raw profiteering. Academic publication, like coherent public transport, seems in practice to be ill-suited to the market -- especially when taxpayers have already paid for the research itself. There are better ways, such as online collaborative tools, backed by webs of trust: we should be exploiting these -- and more actively than the slothlike Open Access Publication project has managed. Change in such established institutions is unlikely to come top-down from a masterplan between journals and funding bodies: it will come from a disruptive technology such as commenting or trackbacks on arXiv posts. And as the months and years pass, the journals' position will become ever more exposed. I'm looking forward to the future, but I wish it would hurry up and arrive.
Anyway, when hiring a new researcher, make sure to look well beyond the publication list: it may not mean that much, and the bits that are meaningful may not be obvious. Ask around: are they known, respected, influential? Have they got more ideas and potential to develop, or are they simply a worthy research drone who was in the right collaboration at the right time? And if you are a researcher: recognise that the crucial thing is getting your research out there and used, not getting it into a journal. We're in a field of endeavour that (should) reward independent thinking and true achievement over adherence to outdated social conventions... let's live up to that reputation.
Another update: some extra links for your info -