Peer-reviewed writing moves science forwards; non-peer-reviewed writing moves science sideways.
That’s my publication philosophy in one sentence. In other words, when scientists write research papers and book chapters that are peer-reviewed, the underlying rationale is that we are adding to the sum total of human knowledge, providing insights into a topic, and moving a field forwards. When we write non-peer-reviewed articles we are generally writing about science for a broader audience, with little original content (though perhaps with some original ideas). This moves concepts out of a narrow subject area and into the purview of wider society, which can be other scientists in different fields, or government agencies or policy makers, or the general public.
There can be exceptions to the rule, such as the IPBES pollinators and pollination report that I’ve been discussing this year. The report was widely peer-reviewed but is intended for a much broader audience than just scientists. Conversely, non-peer-reviewed critiques and responses to published papers can clarify specific issues or challenge findings, which will certainly move science forward (or backwards into muddier waters, depending on how you view it). However, in general, the principle stated above holds true.
This raises the (admittedly clunky) question I’ve posed in the title of this post: just how much non-peer-reviewed publication should a scientist who is an active researcher actually do? How much time should they spend writing for that wider audience?
It’s a question that I’ve given some thought to over the 30 years[1] that I’ve been writing and publishing articles and papers. But a couple of posts on other blogs during the past week have crystalised these thoughts and inspired this post. The first was Meghan Duffy’s piece on Formatting a CV for a faculty job application over at the Dynamic Ecology blog. There was some discussion about how to present different types of publications in the publication list, and notions of “sorting the wheat from the chaff” in that list, which seemed to refer to peer-reviewed versus non-peer-reviewed publications.
One of the problems that I and others see is that the distinction is not so clear cut, and it’s possible to publish non-peer-reviewed articles in peer-reviewed journals. For example, the “commentary” and “news and views” type pieces in Nature, Science, Current Biology, and other journals are generally not peer reviewed. But I’d certainly not consider these to be “chaff”. To reiterate my comment on Meghan’s post, all scientific communication is important. As I’ve discussed in a few places on my blog (see here for example) and plenty of others have also talked about, scientists must write across a range of published formats if they are going to communicate their ideas effectively to a wider audience than just the scientists who are specifically interested in their topic.
Peer-reviewed publication is seen as the gold standard of science communication and it is clearly important (though historically it’s a relatively recent invention and scientific publications were not peer reviewed for most of the history of science). So why, you may be asking, would scientists want to write for that wider audience? One reason is the “Impact Agenda” on which, in Britain at least, there’s been a huge focus from the Research Excellence Framework (REF) and the Research Councils. Grant awarding bodies and university recruitment panels will want to see that scientists are actively promoting their work beyond academia. That can be done in different ways (including blogging!) but articles in “popular” magazines certainly count. I should stress though that this wider, societal impact (as opposed to academic impact, e.g. measures such as the h-index) is not about publishing popular articles, or blogging, or tweeting. Those activities can be part of the strategy towards impact but are not in themselves impactful – the REF would describe this as “Reach”[2].
The second recent blog post that relates to the question of peer-reviewed versus non-peer-reviewed publications is Steve Heard’s piece at Scientistseessquirrel on why he thinks it’s still important to consider journal titles when deciding what to read. He makes some important points about how the place of publication says a lot about the type of paper that one can expect to read based just on the title. But the focus of Steve’s post is purely on peer-reviewed journals and (as I said above) it’s possible to publish non-peer-reviewed articles in those. I think that it’s also worth noting that there are many opportunities for scientists to publish articles in non-peer-reviewed journals that have real value. Deciding whether or not to do so, however, is a very personal decision.
Of the 96 publications on my publication list, 65 are peer-reviewed and 31 are not, which is a 68% rate of publishing peer-reviewed papers and book chapters. Some of the peer-reviewed papers are fairly lightweight and made no real (academic) impact following publication, and (conversely) some of the non-peer-reviewed articles have had much more influence. The non-peer-reviewed element includes those commentary-type pieces for Nature and Science that I mentioned, as well as book reviews, articles in specialist popular magazines such as New Scientist, Asklepios and The Plantsman, pieces for local and industry newsletters, and a couple of contributions to the literary journal Dark Mountain that combine essay with poetry. This is probably a more diverse mix than most scientists produce, but I’m proud of all of them and stand by them.
So back to my original question: is 68% a low rate of peer-reviewed publication? Or reasonable? I’m sure there are scientists out there with a 100% rate, who only ever publish peer-reviewed outputs. Why is that? Do they really attach no importance to non-peer-reviewed publications? I have no specific answer to the question in the title, but I’d be really interested in the comments of other scientists (and non-scientists) on this question.
[1] I had to double-check that, because it seems inconceivable, but yes, it’s 30 years this year. Gulp.
[2] Impact is how society changes as a result of the research undertaken. So, for ecologists, it could be how their research has been translated into active, on-the-ground changes (e.g. to the management of nature reserves, or of rare or exploited species), or how it’s been picked up by national and international policy documents and then influenced policies on specific issues (invasive species, pollinator conservation, etc.).