Associate Professor Peter Macauley teaches in the information management programs at RMIT University. Before starting at RMIT, he worked for 30 years in public, special and university libraries.
Over the past decade Peter’s research has focused on doctoral pedagogy, knowledge production, information literacy, scholarly communication and distance education.
With colleagues, he has been awarded ARC funding for two Discovery projects: ‘Australian doctoral graduates’ publication, professional and community outcomes’, and ‘Research capacity-building: the development of Australian PhD programs in national and emerging global contexts’. He publishes regularly in journals best suited to the readership for his research; some happen to be ERA-ranked A and A* on the 2010 list.
The Research Whisperer knows Peter as one of the good guys: a researcher with integrity and perspective, who tells it like it is.
ERA, which stands for ‘Excellence in Research for Australia’, is similar in many ways to research frameworks used in other countries to evaluate the quality (and sometimes quantity) of the research output of universities and—indirectly—individuals.
In the United Kingdom, they have the REF (Research Excellence Framework); in New Zealand, it is the PBRF (Performance-Based Research Fund); and many other countries have similar schemes.
In this post, I focus on the journal ranking component of ERA.
Officially, the ERA journal rankings were abandoned after the first round of evaluation in 2010. Unofficially, the ERA journal rankings are alive and well and used for all the reasons they were withdrawn: job applications, promotions, grant applications and other forms of peer review (the bedrock of academe).
The great thing about these schemes is that they provide a benchmark for academics, which is particularly useful for Early Career Researchers (ECRs) wanting to evaluate their performance against more experienced and established researchers. As we all know, no scheme is ever flawless, but at least they provide a guide to quality.
Quite clearly, the ERA rankings have an aspirational quality: they motivate academics to aim for the best possible journals (or conferences) in which to publish their research. This is a good thing. Having such a benchmark makes that aspiration concrete, and most ECRs will know what the top journals are in their field – or at least they should!
While the ERA 2010 journal rankings are not perfect (you will never get academics to agree on any such ranking scheme), they at least provide some guidance as to where you should aspire to publish. Over the ERA ‘era’, I have spoken to numerous academics and, while most have gripes about the system, they generally agree that the majority of rankings are not too far off the mark.
There are a number of not-so-good aspects to the ERA journal rankings.
First, where are the Australian A* journals? I did a test of the ERA 2010 journal rankings and found only two A* journals with ‘Australia’, ‘Australian’, ‘Australasia’, etc. in the journal title. While this doesn’t preclude other Australian-produced journals being in the A* category, it does illustrate how Australian-related journals are less likely to be ranked highly.
Second, as mentioned above, the rankings are not perfect and there have been some embarrassing mistakes.
Third, some academics focus solely on the ranking of the journal when deciding where to submit their articles, rather than considering the audience they should be attempting to reach (e.g. practitioners). What is the point of undertaking research if very few [relevant] people are going to read it? In other words, they are looking for impact on the pages of their resumes rather than impact in their specific intellectual fields or the real world!
What I find ugly about the ERA journal ranking scheme is that it has unofficially become the yardstick for success in job applications, promotions, and grant applications.
What does that do for the sustainability of smaller specialist journals or – heaven forbid – starting a new journal? What about the increased submission rates to top-tier journals and the added workload for peer reviewers? This is exacerbated by the ‘vertical slide’ dynamic, where academics submit their articles to top journals and work their way down the list until they’re accepted by a lower-ranked journal (where, one could argue, the article probably should have been submitted from the outset). Of course, great articles can be published in low-ranked journals, and vice versa.
For emerging areas where much of the exciting research is happening, publication is often in still-establishing or niche journals. These are unlikely to be the Top Gun journals of a whole field.
My view is that you should publish where the audience (i.e. the readers) will benefit the most. After all, what and who are we researching for? You may wish to use the ‘binary’ or ‘dualism’ method of publishing, where you publish in academic journals and, if appropriate, also in practitioner-focused magazines, blogs, etc.
If you are aiming for your research to make an impact, where do you think more impact would be made? An A* journal where a handful of people may read it, or an article in The Conversation or the weekend edition of The Age or Sydney Morning Herald?
My view would be: Always consider your audience and publish in the highest quality publication available to you. While the methods and metrics of research councils might change, good runs on the board in your field will be recognised as such by colleagues in your discipline.
These views are Peter’s own and not necessarily those of RMIT University.