
The Journal Impact Factor may not be dead, but it is looking pretty ill. When even a relatively conservative body like the Australian Research Council (ARC) says this, you know the metric is in trouble:
“The ARC is not prescriptive about what can be included [in a publication list], but applicants should be aware of the widely held view that journal impact factors are a poor measure of an individual’s research performance.” – ARC Frequently Asked Questions for Discovery Projects commencing in 2017, question 4.11
If journal impact factor is a poor measure, what is a good measure of “an individual’s research performance”? How do you know that your publications are any good?
You just know
I get asked this by new academics a lot. The question comes in many forms: “What measures can I use?” “How will people know?” “Where should I publish?”
The unspoken question often revolves around an uncertainty about, or a fear of, the value of their own work. Don’t do that. Don’t pin the value of your work to the judgement of your peers, your promotion committee, or your grant assessors. That way, madness lies.
You know when you have written a good paper. You know when you have, through the pressure of deadlines, or the tragedy of lost data, written a not-so-good paper. Hold onto those feelings, that sense of judgement. It will sustain you.
Knowing that, here are some practical answers to this question.
People just know
The standard response is that people in the field know what is good and what isn’t so good. That is true, to some extent. In your own mind, you probably have a good sense of which journals in your field are ‘strong’ and which ones aren’t. However, it doesn’t take much discussion around the office to discover that each person has a slightly different list in their head. You will get general agreement around the core, not so much agreement at the periphery.
One strategy, then, is to stick with the core, with the ‘canon’. Just publish in the best journals for your discipline. That works well when your research sits at the core of a discipline. But if you are doing anything multidisciplinary, you might run into trouble. You might want to show your fellow researchers, your university, or funding assessors from different disciplines that your work is good and is appearing in well-respected outlets.
Here are some ways to do that.
Citations
Citations are the gold standard of research quality measures. While there are some issues with citations (particularly when comparing across disciplines), a citation generally indicates that someone values your contribution to knowledge.
When you include citation counts, the basic rule is to include them for all your publications, or for none of them. If you only include citation counts for some of your articles, then reviewers will assume that all the rest are zero (when that may not be the case).
Don’t be afraid of listing a publication with zero citations, particularly if it is new. It takes time for citations to appear.
Don’t be afraid that your citation counts are low until you have checked what the average is for your discipline. The level of citations varies dramatically across different disciplines.
I don’t care which citation count you use, as long as you make it clear (generally at the top of your reference list) which system you are using (e.g. ‘All citation figures drawn from Google Scholar on 9 February 2016.’)
Most applications that I see are using Google Scholar citation counts. If you are planning to do this, it is worth setting up a Google Scholar profile, so that you can see your updated citations at a glance. You might also be interested in Anne-Wil Harzing’s Publish or Perish (PoP) software, which is designed to help you to present your case for research impact.
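If you would rather pull those counts programmatically than copy them by hand, here is a minimal sketch using the third-party Python package scholarly. This is an assumption on my part: Google Scholar has no official API, scholarly is a community-maintained scraper, and its interface can change or be rate-limited.

```python
import datetime

from scholarly import scholarly  # community package: pip install scholarly

# Look up an author profile; the name here is a placeholder.
author = next(scholarly.search_author('Your Name'))
author = scholarly.fill(author, sections=['publications'])

# State your source and date, as suggested above.
print(f'All citation figures drawn from Google Scholar on {datetime.date.today()}.')

# Print a count for every publication, zeros included (the all-or-none rule).
for pub in author['publications']:
    title = pub['bib'].get('title', 'Untitled')
    cites = pub.get('num_citations', 0)  # zero is fine, especially for new papers
    print(f'{cites:4d}  {title}')
```

Because it prints a count for every publication, zeros included, this keeps you on the right side of the all-or-none rule above.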
Journal ranking lists
You could use a journal ranking list. Ranked lists of journals are generally set up by an august group, who come together to form a consensus on what is good. It is the equivalent of those discussions around the office writ large. Some disciplines do their own. Here are two that I’ve seen people use in my area in Australia:
- Australian Business Deans Council (ABDC) – Journal Quality List, 2013.
- Australian Political Studies Association (APSA) – Preferred Journal List, 2013.
I like these lists because they are independent. They are not driven by a publisher or other group with a commercial interest in the process. What I don’t like is that they were published in 2013. While these lists are ‘current’, they don’t get updated very often. This is a reflection, in part, of the difficulty in administering the process, and the difficulty in actually reaching consensus. I’m in two minds about the fact that they are local to Australia. On one hand, research is a social construct, so the idea of quality will be different in Australia than in China or Germany. On the other hand, we work in a global world, and our journal lists should probably reflect that.
If you are interested in finding a reputable journal ranking list that might work for you, Anne-Wil Harzing, who has done excellent work in this area, maintains the Journal Quality List, which shows journals as they are listed on multiple ranking lists.
I don’t care which ranking list you use as long as:
- You avoid misleading or commercial metrics (see below), which tend to favour their own products.
- You make it clear which ranking system you are using (e.g. ‘ABDC: A*’ or ‘APSA: B’).
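Mechanically, attaching those labels is just a lookup. A minimal sketch, assuming you have exported a ranking list to a CSV file with ‘journal’ and ‘rank’ columns; the file name and column names here are hypothetical, and the real ABDC and APSA downloads are laid out differently.

```python
import csv

def load_ranks(path):
    """Map lower-cased journal titles to their rank (e.g. 'A*', 'A', 'B')."""
    with open(path, newline='', encoding='utf-8') as f:
        return {row['journal'].strip().lower(): row['rank'].strip()
                for row in csv.DictReader(f)}

ranks = load_ranks('abdc_journal_quality_list.csv')  # hypothetical export

publications = [
    ('My latest paper', 'Journal of Example Studies'),  # placeholder entry
]
for title, journal in publications:
    rank = ranks.get(journal.strip().lower(), 'unranked')
    print(f'{title}. {journal}. ABDC: {rank}')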
h-index
I’m not going to talk about h-index, g-index, or the alphabet of other author-level metrics, because they are measures of an individual’s overall productivity and popularity. They don’t provide any indication of the quality of an individual publication. There is a whole other article to be written about those indices, and I’m not the person to write it.
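For readers who haven’t met it: the h-index, for example, is the largest number h such that h of your papers have at least h citations each, which is exactly why it says something about overall volume and popularity but nothing about any single paper. A minimal illustration:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, count in enumerate(ranked, start=1):
        if count >= position:
            h = position
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have at least 4 citations
```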
What not to use: Predatory publications
Gold (pay to play) open access has created a market for predatory publishers, who are happy to publish your paper for a fee, and will run no quality process at all. If you are unsure, please check your journal against Jeffrey Beall’s lists:
- Potential, possible, or probable predatory scholarly open-access publishers.
- Potential, possible, or probable predatory scholarly open-access journals.
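If you are screening a long publication list, you can automate a first pass. A minimal sketch, assuming you have copied each list into a plain-text file with one publisher or journal name per line; the lists themselves are web pages, and these file names are hypothetical.

```python
def load_names(path):
    """One publisher or journal name per line, lower-cased for matching."""
    with open(path, encoding='utf-8') as f:
        return {line.strip().lower() for line in f if line.strip()}

def looks_predatory(name, flagged):
    """Crude substring match; treat any hit as a prompt for a manual check."""
    name = name.strip().lower()
    return any(entry in name or name in entry for entry in flagged)

flagged = load_names('beall_publishers.txt') | load_names('beall_journals.txt')
print(looks_predatory('Example Publishing Group', flagged))
```

A match is not a verdict: names collide and these lists date quickly, so confirm anything flagged by eye.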
If you have been published by one of these journals or publishers, either make a clear note that it was during a period when it wasn’t predatory, or drop it from your publication list. One dodgy publication in your publication list brings all the others into question. If you are attaching that publication list to a research grant application, it works against the whole submission.
What not to use: Rogue publishers
A German requirement that PhD theses be published has resulted in publishers who offer to publish your PhD thesis, but conduct no quality control or editorial process at all. Please avoid publishers such as:
- VDM Publishing, also known as OmniScriptum Publishing Group.
- Scholar’s Press and Lambert Academic Publishing: both subsidiaries of VDM Publishing that target academics.
- Budrich Academic, who “have found creative ways of publishing PhD theses as ‘proper’ books rather cheaply.”
I presume there are others. If you have had your thesis or book published by any of these presses, please remove it from your publication list. I understand that this is a heartbreaking thing to do. That book represents the work that went into your thesis, and is supposed to be a major stepping stone towards a career in academia. You didn’t understand what the process was, and have effectively been duped. However, it brings you no credit as an academic. Unfortunately, because it has already been ‘published’, it is doubtful that any bona fide publisher will be interested in publishing it properly.
What not to use: Misleading metrics
When I said that I don’t care if you use a metric, as long as you say which one, I didn’t mean a metric that you pay to use or that is deliberately engineered to give a biased result. These might be predatory (pay to play) metrics or they might be commercial metrics that are skewed towards one source.
- Jeffrey Beall’s list of misleading metrics.
- I hear that ResearchGate is trying to introduce a ranking metric. Please don’t encourage them. It can only end badly.
Fraudulent practices: Article brokers & paper mills
There are fraudulent ways to get papers published in reputable journals. Generally, this involves paying someone to intervene, illegally, at the writing, reviewing or publishing stage of the process. Let me be very clear: the activities below are both illegal and professional suicide. Research Whisperer condemns all of these practices.
The writing stage – Paper mills
A paper mill will write an article and get it published for you. This is the same as an essay mill, but for academics.
The review stage
I’ve seen reports of companies offering to add an author’s name at the review stage. The process works like this: a legitimate article is written and submitted for review to a relatively prestigious journal. After peer review, but before publication, the lead author contacts the journal to ‘add’ an author’s name. The author in question has paid for the service.
The publishing stage – Article brokers
Article brokers target indexed journals that are included in Journal Impact Factor metrics, but have lax or ‘easy’ article acceptance policies.
These are all examples of fraud. JUST DON’T.
———————–
You may also be interested in our earlier post: How good am I?: Publication quality at a glance
JIF is not only looking increasingly isolated, it’s not really that sound anyhow, and (IMHO) its use of Bradford’s work is dubious.
http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1012&context=acis2007
Thanks, John.
Always good to hear from an expert. The paper looks great – we drag our history behind us.
John, thanks for that article.
I got a chance to read it yesterday, and it is a pearler! I particularly love this bit:
– Lamp, John W.; Milton, Simon K.; Dawson, Linda; and Fisher, Julie, “RQF Publication Quality Measures: Methodological Issues” (2007). ACIS 2007 Proceedings. Paper 1. p. 483.
I think this is the most sensible piece I have read on this vexed topic. I am fairly sure that what you say applies across many fields and disciplines.
Why is it we so distrust our own judgements about quality (and trust measures we know are deeply flawed)? I think in part it is a consequence of a lack of community among researchers. In fields where people know each other well, we feel secure in making judgements, though I guess the danger is that this can lead to closed groups of influence that can be hard to break into if you are an outsider.
Thanks, Rob
I agree that lack of community is one big factor. As research becomes more multidisciplinary, the sense of community becomes more fragile. Or, to be more specific, it can be harder for the new researcher to understand who their community is, or who their communities are, as they span multiple areas. Being in ‘no-man’s-land’ is a lonely place to be.
I think that there is also a lack of community (in its broadest sense) from research administration at universities. The pressure to publish, and to publish in the best quality outlets, is enormous. It is driven, to a large extent, by the desire to increase profile, improve rankings, and attract better students and more funding. In the middle of all this, it is easy for any one academic (particularly a new academic) to lose their sense of self, and their sense of self-worth. There is a way for bureaucracies to be kind, and that is to set realistic, achievable targets (but that is the topic for another post).
Have added a link to this from our Library’s Research Impact guide (http://rmit.libguides.com/researchimpact) and will look to put some of the links in our Publishing Your Research guide (http://rmit.libguides.com/publishingresearch/qualitypublications). Thanks!
Thanks, Gary
Always good to be helping out RMIT! Glad you liked it and found it useful.
Jonathan O’Donnell, you didn’t present any evidence against Thomson Reuters and just made comments against TR, which will mislead a lot of people and create doubts in the minds of new scholars (PhD students) choosing a journal. I agree with the rest of the blog, but absolutely disagree with “The Journal Impact Factor may not be dead, but it is looking pretty ill”.
Thanks for the article – also good to mention this campaign: http://thinkchecksubmit.org/.
Thanks, Michelle. Think Check Submit is excellent!