ERA: The good, the bad, and the ugly

Associate Professor Peter Macauley (RMIT University)

Associate Professor Peter Macauley teaches in the information management programs at RMIT University. Before starting at RMIT, he worked for 30 years in public, special and university libraries.

Over the past decade Peter’s research has focused on doctoral pedagogy, knowledge production, information literacy, scholarly communication and distance education.

With colleagues, he has been awarded ARC funding for two Discovery projects: ‘Australian doctoral graduates’ publication, professional and community outcomes’, and ‘Research capacity-building: the development of Australian PhD programs in national and emerging global contexts’. He publishes regularly in journals best suited to the readership for his research; some happen to be ERA-ranked A and A* on the 2010 list.

The Research Whisperer knows Peter as one of the good guys: a researcher with integrity and perspective, who tells it like it is. 


Problematica (Photo by Tseen Khoo)

ERA, which stands for ‘Excellence in Research for Australia’, is similar in many ways to research frameworks used in other countries to evaluate the quality (and sometimes quantity) of the research output of universities and—indirectly—individuals.

In the United Kingdom, there is the REF (Research Excellence Framework); in New Zealand, the PBRF (Performance-Based Research Fund); and many other countries have similar schemes.

In this post, I focus on the journal ranking component of ERA.

Officially, the ERA journal rankings were abandoned after the first round of evaluation in 2010. Unofficially, the ERA journal rankings are alive and well and used for all the reasons they were withdrawn: job applications, promotions, grant applications and other forms of peer review (the bedrock of academe).

The Good

The great thing these schemes provide is a benchmark for academics, and this is particularly useful for Early Career Researchers (ECRs), who can evaluate their performance against more experienced and established researchers. As we all know, no scheme is ever flawless, but these at least provide a guide to quality.

Quite clearly, there is an aspirational nature to the ERA rankings. They motivate academics to aim for the best possible journals (or conferences) in which to publish their research. This is a good thing. Having such a benchmark enables this to happen, and most ECRs will know what the top journals are in their field – or at least they should know!

While the ERA 2010 journal rankings are not perfect (you will never get academics to agree on any such ranking scheme), they at least provide some guidance as to where you should aspire to publish. Over the ERA ‘era’, I have spoken to numerous academics and, while most have their gripes about the system, the majority agreed that the rankings are not too far off the mark.

The Bad

There are a number of not-so-good aspects to the ERA journal rankings.

First, where are the Australian A* journals? I did a test of the ERA 2010 journal rankings and found only two A* journals with Australia, Australian, Australasia, etc. in the journal title. While this doesn’t preclude other Australian-produced journals being in the A* category, it does provide an example of how Australian-related journals are less likely to be ranked highly.

Second, as mentioned above, the rankings are not perfect and there have been some embarrassing mistakes.

Third, some academics focus solely on the ranking of the journal when deciding where to submit their articles, rather than considering the audience they should be attempting to reach (e.g. practitioners). What is the point of undertaking research if very few [relevant] people are going to read it? In other words, they are looking for impact on the pages of their résumés rather than impact in their specific intellectual fields or the real world!

The Ugly

What I find ugly about the ERA journal ranking scheme is that it has unofficially become the yardstick for success with job applications, promotions, and grant applications.

What does that do for the sustainability of smaller specialist journals or – heaven forbid – starting a new journal? What about the increased submission rates to top-tier journals and the added workload for peer reviewers? This is exacerbated by the ‘vertical slide’ dynamic, where academics submit their article to a top journal and work their way down the list until it is accepted by a lower-ranked journal (where, one could argue, it probably should have been submitted from the outset). Of course, great articles can be published in low-ranked journals, and vice versa.

For emerging areas where much of the exciting research is happening, publication is often in still-establishing or niche journals. These are unlikely to be the Top Gun journals of a whole field.

Some Options

My view is that you should publish where the audience (i.e. the readers) will benefit the most. After all, what and who are we researching for? You may wish to use the ‘binary’ or ‘dualism’ method of publishing, where you publish in academic journals and, if appropriate, also publish in practitioner-focused magazines, blogs, etc.

If you are aiming for your research to make an impact, where do you think more impact would be made: an A* journal where a handful of people may read it, or an article in The Conversation or the weekend edition of The Age or the Sydney Morning Herald?

My view would be: Always consider your audience and publish in the highest quality publication available to you. While the methods and metrics of research councils might change, good runs on the board in your field will be recognised as such by colleagues in your discipline.

These views are Peter’s own and not necessarily those of RMIT University.



8 comments

  1. Hi,

    Great post – the whole ERA area is confusing for new academics.

    How can a new journal get established if the abandoned rankings are still the de facto rankings? This is an even worse situation than if they had continued, as they will never be updated; hence a new journal can never get ranked!

    Dale Reardon


    • Yes, it is a bit stifling, isn’t it?

      The other odd thing is that while the universities are looking at what journals you are publishing in, most academics are actually concerned about the level of citation of their articles. I’m not convinced that there is a correlation between the two. Maybe there is – I’d love to see the data.


  2. One of my supervisors only wants her name associated with A journals, so she won’t let me attempt to publish in anything less. But I’m a student – a trainee researcher. OK, I appreciate that she thinks I’m capable, but it seems unreasonable to me to only be allowed to publish in journals of that standing and not be allowed to test myself in some lower-ranked journals first.


    • Hi Susan

      I don’t see a problem with this. If she thinks that you are capable, then go for it. Once the research is done, the work of writing up the results is the same no matter the quality of the journal. The tension is higher, but the work of writing is the same. Your supervisor will feel most comfortable advising you on your draft for the journals that she publishes in (presumably these ones).

      In reality, she is allowing you to submit to high impact journals. If the first journal rejects your article (which will hurt, but it is part of the process), then you may yet get to test yourself in some lower-ranked journals. I hope not – I hope that you get published in the high impact journals and never look back.

      Start as you intend to proceed. That’s what I reckon.


  3. “Always consider your audience and publish in the highest quality publication available to you”

    Just a thought, but is it not the case that, with modern search engines etc., anything published in an indexed journal is discoverable? Indeed, do journals really have ‘readerships’ any more? I actually doubt it!

    Maybe another approach is: “Get it out and make sure that keywords/title/abstract are discoverable…”


    • I read journals, Peter. I probably read at least one item from every single issue of First Monday. I scan every issue of the Journal of Theoretical and Applied Electronic Commerce Research (JTAER) and another journal on assistive technology research.

      The contents listing arrives in my in-box each month (or quarter, depending) and I just move the interesting ones into ‘Read Later’. Then they are there waiting for me on my iPad, when I’m on the tram or stuck somewhere.

      I think it is actually easier to keep up with specific journals now than it was in the past. YMMV, of course.

      Jonathan (who should remember to log in as himself, and not use the generic ResearchWhisper account).


  4. Hi,

    I’m not being ageist at all – I’m 42 myself – but people who read and scan whole journals may be a dying breed. I myself set up electronic alerts to discover topics I’m interested in. I just don’t have time to read a whole journal. Also, journals will hopefully go all electronic and online (paper will die out, hopefully), and then you’ll read and scan everything online.

    By the way I want full electronic and online versions so I can read them more easily as I’m vision impaired and paper versions are a real nuisance.

    Dale.


    • Funny that you and I replied at almost the same time. As I said to Peter, I still read a couple of journals (as opposed to alert services) because I like them. Also, I find that there is a bit of a serendipity factor in seeing some things that are important to the journal, but not central to my research.

      Hear! Hear! for fully electronic and online. Although Open Journal Systems uses frames, which aren’t fully accessible. And the Journal of Theoretical and Applied Electronic Commerce Research (JTAER) uses some system that doesn’t seem to have persistent identifiers for the articles, which drives me crazy (and they should know better).

