There are many, generally convincing, reasons to avoid getting sucked into metrics as proxy measures of performance. Waiting times and lists as a measure of health service performance are perhaps the best example, although league tables of school exam results give it a run for its money. In higher education, we have the often-criticised reputation rankings for universities, alongside other league tables on research performance and teaching reputations. The professional academic equivalent may be a department’s record of research achievement, measured in terms of grant income and publication output according to external assessment (and the reputation of the journals and book publishers).
It may also relate to the growth in the assessment of individuals, through the income and output routes, but also using metrics such as citations. This focus on citations can relate to, for example, the ‘impact factor’ of the journal as a guide for submissions – a proxy that appears to have become almost taken for granted in many universities and disciplines, but one that is under increasing criticism (for example, Google ‘lse blog impact factor’). It can also relate to an individual’s h-index: the largest number h such that h of an author’s publications have been cited at least h times each (for example, an h of 16 means that 16 of an author’s outputs have been cited at least 16 times).
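To make that definition concrete, here is a minimal sketch of the calculation in Python (illustrative only; the citation counts are made up):

```python
def h_index(citations):
    """Return the largest h such that h outputs have been
    cited at least h times each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i  # the i-th most-cited output still has >= i citations
        else:
            break
    return h

# A hypothetical author with 12 outputs and these citation counts:
print(h_index([48, 33, 30, 22, 20, 18, 17, 16, 16, 16, 9, 4]))  # -> 10
```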
We can come up with an incredible number of good reasons to question the value of individual citation measures, including:
- People may be cited as examples of rubbish academia, or as methods/approaches to avoid.
- People may be able to inflate their citation scores by engaging in self-citation and (perhaps more importantly) group-citation, in which a group decides (explicitly or implicitly) to cite each other regularly.
- Some disciplines and sub-fields will generate fewer citations (such as large parts of historical research, which follow a long-research, low-output, but high-quality model) and some will generate more, such as the life and natural sciences, with their tendency towards low word counts, high output and co-authored papers.
However, I still think you should, as sensibly as possible, play the game, for the following reasons:
- You may get the impression from the (truly depressing) THE, and from departmental discussions, that we are all against these measures; that we have worked out their weaknesses and everyone else has too. This would be a mistake. Many people secretly (and some people openly) think that they are decent measures and act accordingly. For example, there are people like me who make sympathetic noises in discussions, then go off and play the game.
- More importantly, in my experience, they are favoured most by senior managers (and/or important disciplines in universities). The general point is that (a) policymakers always work in an environment of uncertainty and ambiguity, with limited measures of performance and a limited ability to make sense of the available information; and (b) they still have to make choices. Specifically, they often make what we think are the wrong choices – but they make such decisions from a different vantage point.
- In my experience of promotions panels, someone’s h is discussed and debated quite naturally in the natural, physical and life sciences. In fact, on a panel of about 10, you might have 4 people with their laptops out, debating the size and significance of applicants’ scores (with, for example, an h of 20 being the figure they hope to see in a professor). It may not be the deciding factor, but it really counts.
- At least one of those 4 people is likely to be a senior manager. This is crucial for a discussion of the use of h in the arts, humanities and social sciences. This is where you get the biggest opposition to h scores and you might have panels that reject them as measures of performance. However, that senior manager may also be on the panel, applying the same mindset. More importantly, they might not hold as much sway in this committee, but the recommendations may then go to a more senior committee on which they sit.
- The same might be said for appointments panels. You may not be aware of the importance of the h, but there could be at least one person sitting there who has done the background work. Or that person may be waiting for the recommendations and may use h as a way to argue against them.
So, if I were asked by a new colleague about the importance of h, I could not simply recommend that they ignore it and just focus on good quality research (which is a bit misleading anyway – I am with Silvia on this one). Instead, I would recommend four things:
- Get on top of the way that your h is measured. Senior managers tend to use something like ‘Publish or Perish’ which, in my experience, can bring your h down (particularly with books). Or, people in meetings start debating the right number instead of your application. Instead, I began to put a link to my Google Scholar page on my promotion and application documents, since that is the list of my citations that I think is the most accurate.
- Present a convincing narrative of your h, not just in terms of how misleading the measure is. Often, this is about pointing out, for example, that your articles have been published very recently (too recently for citations to take off) or that your citation rates in particular journals are higher than the 5-year median (the measure now used by Google Scholar to rank journals; see the short sketch after this list). This would sit alongside the usual narrative on your outputs and the quality of journals. My preference is to focus on the h trajectory: it is this high now, which means that we can reasonably expect it to be this high in 5 years.
- Don’t be a self-flagellating non-citer of your own work for the sake of principle. If it makes sense to cite your own work, do it. Reviewers and editors will soon tell you if it is too much.
- We operate within a highly critical profession in which constant rejection and criticism are something that we have to put up with to get ahead. So, enjoy the occasional pat on the back that Google Scholar gives you. Sign up for the service that sends you an email when you are next cited – it is one of the very few boosts to the ego on which academics can rely.
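For anyone unfamiliar with the Google Scholar journal metrics mentioned in the second point above: as I understand them, the h5-index is the largest number h such that h articles published in a venue over the last five complete years have at least h citations each, and the h5-median is the median citation count within that core of h articles. A small illustrative sketch (Python; the numbers are invented):

```python
import statistics

def h5_stats(citation_counts):
    """citation_counts: citations for each article a journal published
    in the last five complete years. Returns (h5-index, h5-median)."""
    ranked = sorted(citation_counts, reverse=True)
    # Because ranked is descending, c >= i holds only for a prefix,
    # so counting the matches gives the h5-index.
    h5 = sum(1 for i, c in enumerate(ranked, start=1) if c >= i)
    h5_median = statistics.median(ranked[:h5]) if h5 else 0
    return h5, h5_median

# An invented journal's article citation counts over five years:
print(h5_stats([40, 25, 12, 9, 7, 6, 5, 3, 2, 0]))  # -> (6, 10.5)
```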
Regarding the "reasons to question individual citation measures", and the first point, there are interesting efforts being made to integrate sentiment analysis, which would overcome the problem of 'negative citations' [http://dl.acm.org/citation.cfm?id=2084636]. Frequently, though, people seem more inclined to pass over rubbish research in silence (warning: based on anecdotal experience!).
Re: point 2, biblio/webometric tools such as InCites can readily account for self-citation, and come up with an analysis which excludes these.
For point 3, I suppose it is up to all of us to make sure that the weaknesses of this form of quantitative analysis are as well understood as the strengths. This is the responsibility of researchers too: to understand how the picture of their (sub-)discipline is or is not skewed by the blunt bibliometric tools available to them.
Also, regarding the h, I find that early-career researchers are starting to use their m-index more, given that it puts them on a somewhat more even footing with their senior peers. Then again, it could be no more than a fad...
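For reference, the m-index mentioned above (Hirsch’s m-quotient) is usually defined as the h-index divided by the number of years since the researcher’s first publication, which is what evens the footing between career stages. A trivial sketch (Python; the numbers are invented):

```python
def m_index(h, years_since_first_publication):
    """Hirsch's m-quotient: h-index divided by career length in years."""
    return h / years_since_first_publication

# An early-career researcher and a senior one with the same m-quotient:
print(m_index(6, 5))    # -> 1.2
print(m_index(24, 20))  # -> 1.2
```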