Tag Archives: Science career

“I feel thin, sort of stretched, like butter scraped over too much bread” – Why scientists need to learn to say “no” to themselves


Earlier this week I had to write an email to some potential collaborators that I really wasn’t looking forward to sending.  I’ve been doing some hard thinking since Christmas and had decided not to go ahead with a grant submission for a project that was my own idea, one that I had initiated.  I was now pulling back from it and feeling as though I was letting people down.

The fundamental reason is lack of time: I’m really over-stretched at the moment.  Just before the Christmas break I received word that two grants that I’m involved with, one funded by NERC, the other by the Australian Research Council (ARC), were both successful. This is on top of four existing projects, funded by NERC, BBSRC, Heritage Lottery Fund and Butterfly Conservation. Plus the non-funded work I’m doing.  One of my tasks this week was to add a Current Projects and Collaborations page to this blog, so I can keep track of what I’m doing as much as anything!  Although I’m a minor partner in many of these projects, it’s still a lot of work to keep on top of everything, plus teaching, the Research Excellence Framework for which I’m departmental lead, etc.  I’m also trying to complete a book which I’ve promised to deliver to the publisher soon.  And blogging of course….

There’s a line in The Lord of the Rings in which Bilbo tells Gandalf that “I feel thin, sort of stretched, like butter scraped over too much bread.” Intellectually that’s how I’m feeling at the moment.

It’s my own fault, I say “yes” to things too readily, something which a lot of academics do and which is being widely discussed on Twitter and in other blogs.  Most of this discussion focuses on saying “no” to other people, to manuscript and grant reviews, to offers of collaborations, and so forth.

But I think it’s just as important that we learn to say “no” to ourselves.  We need to realise that, however great an idea we’ve had is, or however enthusiastic we are about a project or a paper or a book or organising a conference, if we don’t have the time and energy to follow through and do it properly, we are selling ourselves and our collaborators short.

Of course this is easy to say but not so easy to put into practice.  There are a lot of external pressures on academics to write more grant proposals and papers, to do more work on the impact of their research, to take on tasks within and without their institutions, and thus spread themselves too thin.  Being a scientist and teacher in a university is a great job and I feel very fortunate to be doing what I do.  But in the long term we’re doing no one any favours, not least our employers and our families, if we burn out early.



Filed under Biodiversity, University of Northampton

Academic job interviews: don’t feel obliged to do everything you said you’d do

[Photo: the box of OHP transparencies from my 1995 interview presentation]

Last month I cleared out my office in preparation for our move to the University of Northampton’s new Waterside Campus.  Going through files I’d not opened in decades was a cathartic and occasionally emotional experience.  In one file I came across a box of OHP transparencies from the presentation I gave at my job interview in 1995!  (For younger readers, OHPs were just like PowerPoint, but you carried them around in a box….)

Anyway, the presentation (see photo above) at what was then Nene College of Higher Education set out what my research plans were going to be if I was offered the job. It’s interesting to look back on these research themes and consider whether I did actually do what I said I was going to do (go to my Publications page for details of the papers I’m referring to):

“Flowering phenology” – This was a large part of my PhD, which I had completed two years earlier.  At Northampton I did a bit of work, including a big meta-analysis with Mexican colleagues Miguel Munguia-Rosas and Victor Parra-Tabla, but nothing further, though I do have a lot of unpublished data that one day may see the light of, err, day….

“Pollination systems in the Asclepiadaceae” – I’ve done a lot of work on this plant family, including field work in South America and Africa, particularly with my German colleague Sigrid Liede-Schumann.  However Asclepiadaceae no longer exists as a separate family (it’s now a subfamily of Apocynaceae).  I have a large paper in press at the moment which assesses the diversity of pollination systems in the Apocynaceae; more on that when it’s published.

“Specialisation and generalisation in pollination systems” – yes, I’ve done lots on this too, including contributing to the Waser et al. (1996) Ecology paper that’s now racked up >1550 citations, plus assessing latitudinal trends in specialisation.  Still a major focus of my research, it’s an area where there are lots of questions still to be answered.

“Reproductive output [in plants]” – very little done since my doctoral work, though questions of annual variation in reproductive allocation were a big part of my PhD.  This one has rather fallen by the wayside.

“Seed predation” – ditto: it was a major component of my PhD and I published a couple of things, but then hardly touched the topic.  A shame in some ways, as I still think it’s a fascinating subject.

“Pollinator behaviour” – I’ve done some work, mainly on birds and bees rather than the butterfly model system I proposed at the time, which was to work with Dave Goulson on a follow-up to a paper we published on floral constancy in Small Skipper butterflies.  This field has moved on hugely though, with some extremely sophisticated work being done with captive bumblebee colonies, for instance.

Overall I think I’ve worked on about 50% of what I said I would do, which I’m more than comfortable with, because I’ve also done a whole bunch of stuff I never mentioned at interview, including work on pollinator conservation and interaction network analyses, both of which were hardly thought about in 1995.  There’s also research on the history of science that I was thinking about in the early 90s but which I didn’t present as a major research theme.

The moral of this story for anyone preparing for a job interview for an academic position is: don’t think that you have to do all of the research that you say you’re going to do in the presentation.  Opportunities come and go, and interests wax and wane.  What is currently seen as exciting research may well, in 10 years’ time, be seen as old hat or a dead end, or have evolved in ways that provide you with fewer opportunities to contribute.  Prepare to be flexible, but don’t lie about your intentions.  In fact, as recently highlighted on the Dynamic Ecology blog, don’t lie about any aspect of getting an academic job!

One other thing: be realistic.  In retrospect I was too ambitious in the range of areas in which I wanted to do research, though they were all linked.  But over the course of 23 years it’s impossible to say how your research career will develop.  I’m looking forward to the next 23…. 🙂

 


Filed under Apocynaceae, Pollination, University of Northampton

A blog post about our new paper about posting blogs: important for the science community as well as science communication


Scientists blog for many reasons.  Some of these reasons are highly personal, other reasons are purely professional.  For most of us it’s a mix of the two.  But despite all of the scientific blogging going on, very little has actually been written in the scientific literature about the advantages of blogging for the professional scientist.  As a step towards remedying that situation, a group of co-authors and I have today published a paper entitled “Bringing ecology blogging into the scientific fold: measuring reach and impact of science community blogs“.  It’s published in the open access journal Royal Society Open Science.  Just follow that link and you will be able to read it for free.

I’m rather proud of this paper as it’s a collaboration between active ecological bloggers, most of whom don’t know each other personally. However we share an interest in blogging and a belief that blogging is a legitimate scientific medium for communicating ideas, data, and professional advice.  That is, blogging for the science community rather than (just) for science communication to the general public.

One of the most pleasing things about this paper is that it received two of the best reviews any of us have ever had in our careers.  The reviewers were incredibly supportive and complimentary, and asked for virtually no changes.  That’s hugely gratifying and suggests to us that we are saying something important; let’s hope the readership likes it as much!

The co-authors, their Twitter handles and links to their blogs are below.  If you click through you’ll see that we have posted coordinated pieces on our blogs about our own reflections on the collaboration and what the paper means to us.

Manu Saunders (@ManuSaunders)  Ecology Is Not A Dirty Word      

Simon Leather (@EntoProf) Don’t Forget the Roundabouts

Jeff Ollerton (@JeffOllerton) Jeff Ollerton’s Biodiversity Blog

Steve Heard (@StephenBHeard) Scientist Sees Squirrel

Meghan Duffy (@duffy_ma) Dynamic Ecology

Margaret Kosmala (@margaretkosmala) Ecology Bits

Terry McGlynn (@hormiga) & Amy Parachnowitsch (@EvoEcoAmy) Small Pond Science


Filed under Biodiversity

What’s the point of the h-index? UPDATED

UPDATE: I’ve increased the sample size of EEB scientists I used in the analysis.

——————————————————-

Over at the Dynamic Ecology blog yesterday, Jeremy Fox posted an interesting analysis of which metrics correlate with the chances of early career researchers in ecology and evolutionary biology (EEB) gaining an interview for an academic post in North America.   Spoiler alert: none of them correlate, except the number of job applications you submit.

These metrics include number of papers published, number of first-author papers, number of large (>$100,000) grants held, number of years post-doc, and h-index.  Nada, zilch, nothing, nowt is significantly correlated.  Which is good: as Jeremy (and the stream of commenters) discuss, it means that interview panels are taking a rounded view of individuals and what they can offer a university department, and not relying on (sometimes dubious) metrics.

Which brings us to the h-index….  Jeremy linked to an old post of mine called “How does a scientist’s h-index change over time?“, a piece that was far and away my most viewed post last year (and second-most viewed post in 2015).  This suggests that there’s still a huge “appetite” for the h-index, in terms of understanding what it is and how it can/should (or cannot/should not) be used.  Even before the Dynamic Ecology post came out I was planning to update it and give examples where I think it might be useful, so this seems like a good time to do that.

Opinions on the h-index vary hugely.  Some of the links in my original post were to writings by scientists who really like the idea of being able to use it to track the academic impact of an individual (or at least some measure of it).  Others despise it, and indeed all academic metrics, as pernicious and potentially dangerous to science – see David Colquhoun’s video on this topic, for instance.

I’m somewhere in the middle – I recognise the weaknesses of the h-index, but I also think that it’s measuring something, even if the something that it’s measuring may not be directly translatable into a measure of “quality” or “impact”, and especially not “employability” or “worthy of promotion” (and I would certainly never countenance using the h-index as the sole measure of the latter two).

So when is the h-index useful?  Well, one use is as a personal tracker of one’s own standing or contribution within a field, assessing the trajectory of a career, and perhaps gauging when it’s time to apply for promotion (at least in the UK system, which is a less transparent process than in North America, or at least that’s my impression).  To illustrate this I’ve collated the h-indexes and years since first publication for 72 EEB scientists using Google Scholar (GS).  I used GS rather than Web of Science (WoS) as, although GS is less conservative, WoS seems to be becoming noticeably less accurate; for example it’s recently assigned to me chapters on which I was not an author but which are included in a book that I co-edited.  Another advantage of GS, of course, is that it’s publicly available and not paywalled.

It’s long been known that a scientist’s h-index should increase over their professional lives, and indeed that’s what we find if we plot number of years since first publication against an individual’s h-index:

[Figure: h-index plotted against years since first publication for 72 EEB scientists]

It’s a fairly strong correlation, though with a lot of scatter (something Jeremy noted in his blog), and it suggests that EEB scholars accrue their h-index at a rate of about 1.6 points per year, on average, though with a big range (0.3 to 4.2 points per year).  One (albeit fanciful*) way to think about this graph is that it’s analogous to a Hertzsprung–Russell (HR) diagram in astronomy, where, as they age, stars shift position predictably on a plot of colour versus magnitude.  In a similar way, as EEB scientists age professionally, their position on this plot moves in ways that may be predictable from their scientific output.

There’s a lot of structure in HR diagrams, including the famous Main Sequence, where most stars lie, as well as stellar evolutionary tracks for Giants, Super Giants, White Dwarfs, etc.  In this modest sample I think we’re starting to see similar structure, with individuals lying far above or below the “h-index Main Sequence”, indicating that they are accruing more or fewer citations than might be expected.  UPDATE:  In particular, three individuals are “Super Giants” (to use the astronomical terminology), lying far above the Main Sequence.  Carlos Herrera makes an interesting point in the comments (below) about self-selection in GS which could mean that there are far fewer people with low h-indexes represented than we might expect.

One of the things that could be explored using these types of data is exactly why this is happening: is it a question of where they are based, or their nationality, or where they publish, their sub-field, or what?  One easy analysis to do is to assess whether there is a difference between female and male scientists, as follows:

[Figure: h-index against years since first publication, separated into female and male scientists]

Previous research has suggested that women on average receive fewer citations for their papers than men (see this 2013 study in Nature for instance) and this graph gives some support to that idea, though I’ve not formally tested the difference between the two lines. What is also interesting is that the R-squared values are identical, indicating as much variation in female as male career trajectories, at least as measured in this way.

UPDATE:  These additional data suggest that the h-indexes of male and female researchers diverge over time, and that most of the difference is for mid to late career scientists.  It’s unclear to me why this might be the case, but we could speculate about factors such as career breaks to have children.  Note that I struggled to find female EEB scientists with an h-index larger than about 80 – if I’ve missed any please let me know.

The data set I used for this analysis is certainly not random and contains a lot of people I know personally or by reputation, so a larger, more systematic analysis could come to some rather different conclusions.  However I thought this was an interesting starting point and if anyone else wants to play with the data, you can download the anonymised spreadsheet here.
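For anyone who wants to play with the data, the accrual-rate calculation above is just a linear regression of h-index on career length. Here’s a minimal Python sketch; note that the numbers below are invented for illustration, standing in for the real spreadsheet:

```python
import numpy as np

# Hypothetical (years since first publication, current h-index) pairs,
# standing in for the 72 Google Scholar profiles used in the post
years = np.array([5, 8, 12, 15, 20, 25, 30])
h = np.array([7, 14, 18, 22, 35, 38, 50])

# Fit a straight line; the slope is the average accrual rate
slope, intercept = np.polyfit(years, h, deg=1)
print(f"Average accrual rate: {slope:.2f} h-index points per year")
```

Swapping in the anonymised spreadsheet’s columns for `years` and `h` reproduces the fit shown in the first graph.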

 

*I’m not at all convinced about this analogy myself and am happy for anyone to explain to me why it’s a very poor one 🙂  UPDATE:  Though Stephen Heard seems to like it.



Filed under Biodiversity, History of science

How does a scientist’s h-index change over time?

Since its introduction a decade ago the h-index has rapidly become the most frequently used measure of research productivity and citation impact amongst scientists.  It’s far from perfect and has been criticised from a number of perspectives, particularly when used as a blunt tool for assessing a scientist’s “quality”.  Nonetheless it’s a useful measure that allows some comparison within research fields and (I think more importantly) gives individuals one method, amongst any number, of assessing the influence their work is having on their discipline.

Put simply, an individual’s h-index is the largest number h such that h of their publications have each been cited at least h times.  To calculate it, rank the publications by number of citations; the h-index is the last rank at which the number of citations is still at least equal to the rank.  For example, if a scientist has 18 papers all with at least 18 citations, their h-index is 18.  It rises to 19 once 19 of their papers each have at least 19 citations, and so forth.
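That definition translates directly into a few lines of code. Here’s a minimal sketch (the function name and example citation counts are my own, for illustration):

```python
def h_index(citations):
    """Largest h such that h publications each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank      # this publication still "pays for" its rank
        else:
            break         # sorted descending, so no later paper can qualify
    return h

# 18 papers with 18 citations each gives an h-index of 18...
print(h_index([18] * 18))   # 18
# ...which doesn't reach 19 until a 19th paper has at least 19 citations
print(h_index([19] * 18))   # still 18
print(h_index([19] * 19))   # 19
```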

That’s an important point about the h-index (and indeed all other measures of success/impact/whatever) – they are not static and they change over time.  As the Wikipedia entry that I linked to above notes, the originator of the index, Jorge Hirsch, suggested that 20 years after their first publication the h-index of a “successful scientist” will be 20; that of an “outstanding scientist” would be 40; and a “truly unique” scientist would have an h-index of 60. However, this will vary between different fields, so any comparisons are best done within a discipline.

One question that I’ve not seen widely discussed is how an individual’s h-index changes over time (though see Alex Bateman’s old blog post about “Why I love the h-index“, where he refers to the “h-trajectory”).  Does the “successful scientist” typically accrue those 20 h-index points regularly, 1 point per year, over the 20 years?  Or are there years when the h-index remains static and others when it increases by more than the average of 1 point per year?  If the latter, what’s the largest annual leap in an h-index that one could reasonably expect?  Finally, if we were to plot up the h-index over time, what shape curve can we expect from the graph?

On one level these are purely academic questions, the result of some musing and window gazing during a bus ride between campuses a couple of weeks ago.  But there’s also a practical aspect to it, if scientists wish to track this measure of their career progression.  For an early career scientist starting out with their first few publications, it’s easy to record their h-index as it changes over time from this point forward.  But what about a mid- or late-career scientist who started publishing long before the h-index was even thought of?  How do they reconstruct the way in which their h-index has evolved over time, should they be so inclined?

As far as I know there’s no simple, automatic way to do it (but please correct me if I’m wrong).  Indexing and citation systems such as Web of Science and Google Scholar give the current h-index with no indication of past history; you have to work it out for yourself.  Which is what I’ve done, and the procedure below is (I think) the most straightforward* way of reconstructing the evolution of an h-index.

So, pour yourself a cup of coffee** and settle in for a bit of academic archaeology.

I’m going to demonstrate the process using Web of Science (WoS)***, but it should be identical in overall procedure, if not in detail, in Google Scholar, Scopus, etc.  However be aware that Google Scholar is much less conservative in what it counts as a citation, hence h-indexes from that source are typically significantly higher than from others.

The first thing to do after you’ve logged on to WoS is to perform a Basic Search by author name, across all years; I’ve done this for All Databases as some of my**** publications (specifically peer-reviewed book chapters) are not listed in the WoS Core Collection database (the default selection):

[Screenshot: WoS Basic Search by author name, All Databases, all years]

Perform the search then select Create Citation Report.  This will return a pair of graphs showing number of publications per year and number of citations per year, plus a table with some metrics about average citations per year, etc., and a value for the current h-index of that author:

[Screenshot: WoS Citation Report graphs and summary metrics]

Below that is a list of publications for Ollerton, J ranked by number of times cited:

[Screenshot: publication list for Ollerton, J ranked by times cited]

As you can see, WoS indicates that the h-index of Ollerton, J is 23.  That’s incorrect, it’s actually 22 (i.e. a not-quite-successful scientist) because despite having a relatively uncommon name, there are other people called Ollerton, J who publish (including my cousin Janice).  However it’s a simple matter to remove any publications that are not your own using the check boxes against each publication and the “Go” button.  Ignore any publications that are ranked lower than your h-index.

Once you have a clean list, use the drop-down menu underneath the page to save your list as either a text or Excel file; again, just save the publications that are contributing to your h-index by choosing the number of records that corresponds to your h-index [UPDATE: however see Vera van Noort’s comment below about the possible influence of early publications that were only cited once or twice on your early h-index.  UPDATE x 2:  see also the later comments by Alex Bateman and Vera – later publications can drop out of the h-index list too.  This wasn’t an issue for my set of publications, but it’s worth checking if you’re following this procedure].

The Excel***** file is easiest to work with: it provides you with the two graphs shown on the WoS citation report plus details of the publications, average citations and so forth, and all the raw data on number of citations per year back to 1950 (click on each image for a larger view):

[Screenshot: exported Excel citation report with per-year citation counts]

To make the spreadsheet easier to work with I advise deleting all the stuff you don’t need, including the figures and the columns from 1950 up to the date of your first publication.

You now have to calculate cumulative number of citations over time for each publication using the Sum function (I’ll not go into details, should be straightforward if you know your way around Excel).

Next, copy all of the data and paste-special onto another sheet, selecting “values” (to just paste the data, not the formulae) and “transpose” (to turn the data 90 degrees) from the paste-special options.  Remove the original data to just leave the cumulative citations and then select all of the data and use the Custom Sort function to order the rows by date of publication:

[Screenshot: transposed cumulative citation data, rows sorted by publication date]

Now it’s a matter of working along the columns (one per year) and, for each year, counting the publications whose cumulative citations are at least equal to their rank – that count is the h-index for that year.  Since the h-index can only increase, in practice you just check whether it has risen above the previous column’s value; I’ve colour-coded this below to make it easier to see:

[Screenshot: colour-coded columns showing the h-index count for each year]

Finally, graph up the data:

[Graph: my h-index plotted against year, 1993 onwards]
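The whole Excel procedure can also be scripted. Here’s a minimal Python sketch of the same steps – cumulative citations per publication, then a per-year h-index count; the function name and the data are invented for illustration, not my real citation record:

```python
from itertools import accumulate

def h_trajectory(per_year_citations):
    """Reconstruct an h-index trajectory.

    per_year_citations: one list per publication, giving the citations
    that publication received in each successive year (0 before it existed).
    Returns the h-index at the end of each year.
    """
    # Step 1: cumulative citations for each publication up to each year
    cumulative = [list(accumulate(row)) for row in per_year_citations]
    n_years = len(cumulative[0])
    trajectory = []
    for yr in range(n_years):
        # Step 2: rank publications by citations accrued so far
        counts = sorted((row[yr] for row in cumulative), reverse=True)
        # Step 3: h = number of ranks whose citation count covers the rank
        h = sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)
        trajectory.append(h)
    return trajectory

# Three invented publications tracked over five years
papers = [
    [1, 2, 4, 3, 5],   # published in year 1
    [0, 1, 3, 4, 6],   # published in year 2
    [0, 0, 2, 2, 3],   # published in year 3
]
print(h_trajectory(papers))   # [1, 1, 2, 3, 3]
```

Pasting the per-year citation columns from the WoS export into `per_year_citations` would reproduce the hand-worked graph above.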

The results are interesting (or at least I think so).  In relation to the questions I posed above, it’s clear that there are periods when the h-index doesn’t increase for a couple of years; more periods when it increases by one each year; and a couple of years when it increases by 2 points.  But that’s the maximum, and I suspect that increasing by 3 or more index points in a year would be very unusual indeed (though see my second point below).

Although there’s a clear “lag phase” in the first five years when the h-index hardly changes, there are also periods when there’s no increase in h-index much later, e.g. 2013/14, so this stasis is not restricted to the beginning of my career.

Some final points:

1.  Make sure your citation data on Web of Science is accurate.  I have found LOTS of mis-citations of my publications over the years, by  authors who include incorrect dates, volume numbers, page numbers, even authors, in the references they cite.  WoS has a facility for correcting these mis-citations, but you have to let them know, it’s not automatic.

2.  How representative are my results for the population of ecologists or scientists more generally?  I have no idea but I hope others go through the same procedure so that we can begin to build up a picture of how the h-index evolves.

*No doubt this could be automated in some way and perhaps this will stimulate some competent programmer or app developer to do so, but doing it by hand is so straightforward that I’m not sure it’s worth the effort of constructing a working system.  Certainly the Excel part of the procedure could be done more elegantly in R.

**Other beverages are available.

***Other indexing and citation systems are available.

****Other scientists are available 🙂  But it doesn’t seem fair to use someone else as an example.  In any case, consider this another post reflecting on my life and career in my 50th year on this planet!

*****Other spreadsheets are available.  That’s the last one, promise.


Filed under History of science, University of Northampton