Particularly in terms of publications, the pressure has long been on quantity; now, it has definitely shifted to quality.
To borrow – and possibly warp – an analogy from a recent café conversation (thank you, Thesis Whisperer):
What matters now isn’t how many races you’ve run, but where you’ve actually placed.
With the scrapping of the ERA journal rankings and a new ‘journal profile’ taking their place as the way of estimating publishing worth, it’s no wonder that there’s some confusion over how the quality of an academic’s research track-record in Australia is ascertained.
A brief digression and definition for those who have had the good fortune not to encounter the ERA journal rankings, or their attendant angst:
The journal rankings were part of a national scheme to assess university quality. For ERA 2010, then, the rankings of A*/A/B/C were very much on the minds of researchers and research managers. Many institutions brought into force weighted performance scales that were intended to position their universities better in the system (e.g. academics were rewarded more richly for publishing in A*/A journals than others). Though the rankings are scrapped, ERA itself has not gone away.
When you’re applying for grants, it’s not enough to have a track-record that includes a bunch of publications and maybe a grant or two. It needs to be a QUALITY track-record, particularly given the intense competition that attends most major funding rounds these days (rumour has it that the recent Discovery Early Career Researcher Award scheme [DECRA] attracted about 2500 applications, which would result in a success rate of about 8% if the flagged number of 200 awards stays consistent).
There have been times and places where academics were encouraged to publish as much as possible, regardless of the quality of the outlet, due to the mad rush for HERDC points (which, basically, determine how much money the government allocates to your institution).
Though it seems a much tougher row to hoe (…because it is), filling your CV with quality publications is a process that will repay you better than getting things out there just for the sake of it. I’ve encountered, and had to counsel, a good number of academics who are now regretting their ‘fast’ publications in less prestigious outlets. This includes those who have concentrated their publications in the fraught genre of refereed conference proceedings.
For me, in assessing someone’s CV, publication trends that set off warning bells with regard to the quality or development of a person’s work include:
- Overwhelming representation from conference papers. WHY? Because it often indicates a lack of follow-through when you present a paper and don’t develop it into an article. These are some of the thoughts that would be rolling around in people’s heads: Was what you said not worthy of being worked up into an article? Was it shot down as nonsense? Are you the kind of academic who perpetually describes what their project is, but hasn’t done the work, so can’t present any actual findings?
- The majority of your publications are narrow in terms of outlet. WHY? It’s not a good look to publish only in a couple of journals, even if those journals are good ones. Very new or niche fields may be the exception here, but remember that developing a stronger profile often means framing your work in ways that engage with a broader range of cognate peers. If you’re only ever published in one or two journals, questions may emerge about why you aren’t publishing more widely (e.g. Is it because your work has no traction outside of your very specific area [surely not], or is it that you don’t know how to relate your work more broadly in the field?). That’s not to say that you shouldn’t publish regularly in your area’s peak journals, just not ONLY in those journals.
- Items on your publication list tend to be short. WHY? In major grant rounds, it’s accepted practice to note the length of your publications (i.e. how many pages they are). This holds for books, book chapters, and refereed journal articles. While length of a piece is dependent on many things (including disciplinary protocols and particular publications), if your work tends to be on the short side, questions may hang over your ability to contribute sustained intellectual work. For example, an article that’s fewer than ten pages long is unlikely to be an item of substance.
- Dodgy refereed publications. WHY? Actually, the first part of this may need to be “What?” – what I mean by dodgy refereed publications encompasses a range of things, including papers that are not full articles that happen to be in refereed publications (e.g. as discussion papers or contributions to a topical forum), or fully fledged articles in non-refereed publications. Sometimes, mistakes in listing these can stem from genuine ignorance of what constitutes a ‘refereed (or peer-reviewed) publication’. Other times, however, they don’t; getting this kind of thing wrong sends all the wrong signals.
This is by no means a definitive listing, and it is one informed by my background in humanities and social science disciplines. I haven’t included some potentially controversial items (such as the status of e-publications [this probably warrants a post on its own] or the fraught issue of being 1st/2nd/3rd author).
Overall, a quality track-record is relatively easy to spot. The ERA journal rankings brought a sledge-hammer approach to defining ‘quality’, but assessors and reviewers have been doing it for a long time without this kind of categorisation. Expert academics will know what your discipline’s expectations are, and pay attention to the breadth/depth of your research output. Padding out a weak track-record will be seen for what it is, and not at all appreciated!
Building a strong research track-record is important to most academics’ careers. Quality publications may come more slowly, but that adage about good things taking time has never been truer.