Over at the Dynamic Ecology blog there’s an interesting discussion going on about “how to keep up with the literature” that’s relevant to all fields, not just ecology. Spoiler alert: it’s impossible to “keep up” if “keep up” means “read everything”. But do check it out as there’s lots of good advice in that post.
One of the topics that’s arisen in the comments is the use of reference management systems such as Endnote, Refworks, Zotero, Mendeley, etc. Everyone has their own preferences as to which to use, and there seem to be advantages and disadvantages to all of them. However, a minority of us (so it seems) don’t use any kind of reference management system, which strikes those who do as very odd. Personally, I tried Endnote a long time ago; it was OK, but then I lost the database when an old computer bit the dust.
I’m not sure how much more efficient or effective I would be as a publishing academic if I were to get back into using a reference management system. One of the supposed advantages of these systems, that they will format references to the specific requirements of a particular journal, seems to me to be a double-edged sword. I actually find re-formatting references quite relaxing, and I think (though I may be wrong) that it develops attention-to-detail and accuracy skills that are useful in other contexts.
Also I suspect, but have no proof, that reference management software is responsible for perpetuating errors in the reference lists of papers that then result in mis-citations on Web of Knowledge, etc. My suspicion is that this has got worse over time as people rely more and more on reference management software rather than their brains. These citation errors can have an impact on an individual’s h-index, as I mentioned in a post last year.
By coincidence, yesterday I spotted a hilarious example of just this kind of mis-citation, one that I think can be blamed on a reference management system. This paper of mine:
Ollerton, J., Cranmer, L. (2002) xxxxxxx Oikos xxxxxx
was rendered in the reference list of another paper as:
Ollerton, J., Cranmer, L., Northampton, U.C., Campus, P. (2002) xxxxxxx Oikos xxxxxx
The last two “authors” are actually from the institutional address – University College Northampton, Park Campus! [UCN is the old name for University of Northampton].
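To see how an error like this could arise, here is a purely hypothetical sketch (not the actual code of any real reference manager) of a naive name-formatting routine. If the institutional address leaks into the author field of a stored record, a routine that treats each chunk as “initials followed by surname” will dutifully render the address as two extra authors:

```python
def format_name(chunk):
    """Naively format a name chunk as 'Surname, Initials':
    treat the last word as the surname, abbreviate the rest to initials."""
    words = chunk.split()
    initials = "".join(w[0].upper() + "." for w in words[:-1])
    return f"{words[-1]}, {initials}" if initials else words[-1]

# Hypothetical stored record in which the address has leaked into
# the author field (an assumption for illustration):
author_field = "J. Ollerton; L. Cranmer; University College Northampton; Park Campus"
authors = [format_name(c.strip()) for c in author_field.split(";")]
print(authors)
# → ['Ollerton, J.', 'Cranmer, L.', 'Northampton, U.C.', 'Campus, P.']
```

The routine has no way of knowing that the last two chunks are an address rather than names, which is exactly why a human check of the generated reference list still matters.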
Now in theory that shouldn’t happen if an author’s reference management software is doing its job properly and the information has been correctly inputted, but it does happen: errors are not uncommon. In addition (it seems to me), authors often don’t check their reference lists after they have been produced by the software. That’s sloppy scholarship, but I can understand why it happens: people are busy, and why bother if the software is (in theory) getting it right every time? It also shouldn’t happen at the editorial production end of things, because references are usually cross-checked for accuracy, but again it does, even for top-end journals (in this case from the Royal Society’s stable!).
Again, it’s anecdotal, but I’m also noticing that reference lists in PhD theses that I examine are getting sloppier, with species names not in italics, various combinations of Capitalised Names of Articles, unabbreviated and abbrev. journal names, etc. etc.
Does any of this really matter? Isn’t it just pedantry on my part? Whilst the answer to that second question is undoubtedly yes, I think it does matter, because attention to detail at this very basic level gives the reader more confidence that attention has been paid at higher levels, such as citing accurate statistics from primary sources to back up statements rather than relying on secondary sources, as Andrew Gelman discussed in an old blog post on referencing errors.
But maybe I’m a lone voice here; I’d be interested in your thoughts.