Few would dispute that academia these days is overly dependent on the fraught and shaky pyramid of citation metrics.
There have been numerous ‘grey areas’ (e.g. self-citation and journal self-citation [which affects impact factor]) and outright scandals (e.g. citation cartels) that are related to – some might say driven by – this obsessive and uncontextualised numbers game.
Yet, it persists. And grows.
Research quality assessors and university executives appear to be ignoring the system’s teetering unsustainability in this regard, alongside the ever-more-meaningless achievement of ‘excellence’ that everyone is exhorted to pursue 24/7.
Last week, the LSE Impact blog featured a post by Asit Biswas and Julian Kirchherr titled ‘Citations are not enough’.
After establishing that just about no-one reads academic articles (and pointing out that an estimated 20% of cited papers have actually been read), they reiterate that scholarship should converse with broader society: “If academics want to have impact on policy makers and practitioners, they must consider popular media.”
Again, we tread on the hoary chestnut of wanting research to influence policy because, after all, everybody wants to rule the world.
Biswas and Kirchherr argue that an academic’s popular media presence should count in their promotion folio:
For tenure and promotion considerations, scholars’ impacts on policy formulation and public debates should also be assessed. These publications often showcase the practical relevance and potential application of the research results to solve real world problems.
While I agree with the authors on many points, the call to add another metric to the pile didn’t sit entirely comfortably with me. We’ve seen the consequences of the metrics game. We’ve had warnings about the load and potential ineffectiveness of altmetrics as a performance measure.
I’ll admit a part of me went “hurrah!” at first about having public engagement and broader research communication (including social media) gain more recognition and value within the academy. Having this increasingly expected, but still not rewarded, labour treated as being more than a PR exercise, or part of ‘profile branding’ for institutions, would be very welcome.
But do we want it to be part of how we’ll be judged before a promotions committee?
These were my key concerns:
What if the research you do isn’t solving an immediate problem in society?
Or it doesn’t fit into popular media narratives?
For those whose research isn’t immediately or easily slotted into the ‘applicable to society’ basket, it’s yet another criterion that they’ll have to be creative about. The increasing demand that research be demonstrably part of fixing society’s ills (as we recognise them, at the moment) narrows the field of projects likely to gain the attention of popular media. The repeated calls for better and more industry partnerships affect research diversity in similar, limiting ways.
I’d venture that research that would catch the eye of popular media outlets isn’t a huge swathe of the research taking place in our institutions. It’s not a failure on the researchers’ part to provide a compelling enough narrative; it’s about the broad applicability and identification necessary for a topic to work in popular media. That’s the kind of approach that works for mainstream media outlets – and it’s not for every topic or researcher. And, for the record, I don’t think it’s often about researchers being wary of having their work ‘dumbed down’. It’s wariness of being misquoted, or having their work misrepresented. It’s a question of trusting who you’re talking to.
Doesn’t naming another promotion criterion just lead to more of the same?
The more I thought about it, the more I wondered whether we’d just be creating another element to be ‘gamed’, and laboured under.
What would public profile performance and promotion matrices require? Two op-eds and a Conversation article a year for a Level B academic in Australia? Will Associate Professors need to be interviewed by ABC Radio National or Channel 7’s Sunrise before securing a Professorship?
Not instead of, but as well as.
My final concern, the last I’ll share here, is that while shifting recognition towards how research travels beyond academia is welcome, metricising it simply defines more necessary work for researchers to do.
Sure, academic journal articles may hardly be read at all, but will research councils and promotion committees be throwing them out as an indicator of quality/productivity/esteem any time soon? As Biswas and Kirchherr note at the end of their post, “Change is happening very, very slowly.”
In the meantime, then, will promotions committees be looking for articles and popular media engagement? And how will the impact of these popular media engagements be judged?
Will they look at how often these media items are cited, perhaps?