When peer review goes well – and when it doesn’t

Dr Jodie Goldney is a qualitative researcher with extensive experience working in the community services sector.

Her PhD is a critique of the scientist-practitioner (or Boulder) model of pedagogy that underpins the training of psychologists in most of the Western world. Drawing on that work, Jodie created the attributional approach to recruitment and training, which screens for reflexivity and the capacity to learn from experience, rather than for a static knowledge base.

Jodie recently launched Qualitate (www.qualitate.com.au; on Twitter: @Qualitate01), which applies qualitative methodologies to the problems of industry.

She is both adjunct and sessional academic with Charles Sturt University. Jodie tweets from @jgoldn01.

Photo by eatsmilesleep | www.flickr.com/photos/45378259@N05 (Shared via CC license 2.0 - creativecommons.org/licenses/by/2.0)


In the academic world, getting published is serious business: it can mean the difference between getting a job and not even getting an interview.

For new PhD graduates, this need is particularly strong, with the catchphrase “publish or perish” whispered in hushed tones as people unpack why one colleague seems able to carve out a professional path with greater permanency than multiple sessional (and very part-time) roles across several institutions!

There is also, arguably, an ethical component to this expectation to publish and share one’s work. Those who’ve undertaken a doctorate have had heavy investments (in the form of time, mentorship, and financial support) from their respective academic institutions, supervisors, research participants, associated communities, and schools of study. Giving back in some way is only right.

I recently completed my PhD and, in keeping with publishing expectations, I have worked consistently over the last five months (among other work and family commitments) to translate my thesis into scholarly, peer-reviewed articles. The results of this process for me so far are eight pieces at various stages of publication: two in press, two in the review cycle, and four in preparation.

Reflecting on my experiences thus far, I have been amazed by the diversity of ways that a prospective author can be treated by journals and their editors, how differently the peer-review process can occur, and what is considered as constituting academic substance.

How I assess a funding application: Part 2 – feasibility

Sea of Wisdom temple (Beijing) by Jonathan O’Donnell (on Flickr)

The previous post in this series addressed the issue of how I assess track-records on grant applications. It talked about a range of X-factors that I look for when assessing applications with (typically) excellent research CVs.

This post focuses on project feasibility and whether the project sounds like it’s going to work.

On one level, it’s a dead obvious question: Can the project be done?

It is, however, an aspect that depends entirely on the evidence presented in the application that:

  1. The team (or individual) is good and experienced;
  2. The budget’s credible and appropriately linked to a methodology that has integrity; and
  3. The project itself has significant intellectual rigour and vigour.

One of the trickiest balancing acts that I find with grant applications is demonstrating innovation and creativity in your research without sacrificing feasibility.

This can sometimes boil down to a question of ‘do you have a Plan A and a Plan B?’. If we’re talking about the honest face of research, we’d have to admit that things don’t always work. The project direction that’s so assiduously planned may go awry in the first six months when the research team implodes or the data doesn’t do what you’d like it to. Research is often exploratory, which introduces doubt about what its real final outcomes might be.

If you were being completely honest, you’d have to say that the project may not work.


How I assess a funding application: Part 1 – track record

Now that our Discovery applications have been fed into the gaping maw of the Australian Research Council (ARC) competition, I thought I’d take my 2-part series of posts about assessing funding applications out for a spin. Part 1 focuses on track-records and the research team. Part 2 will address an application’s overall feasibility.

“It’s all a lottery!”

“You need to game the system or you haven’t got a hope.”

“Only those who’ve had them before will get one.”

Sea of Wisdom temple (Beijing) by Jonathan O’Donnell (on Flickr)

The urban myths circulating about grant rounds are as tenacious as those about waking up in ice-filled bathtubs and realising you’ve had your kidney harvested.

No doubt, spending so much time and investing intellectual resources in a major application makes the lack of success bite that much deeper.

Having been around the traps as a supplicant, awardee, assessor, and now advisor, I’d have to say that most funding assessment processes do end up giving money to the strongest teams and most compelling projects. This isn’t to say that the processes or choices are always perfect, or that rogue results (in good and bad ways) don’t pop up. There’s always that story of the ARC Discovery that was written over a weekend and got up.

This post is about how I assess funding applications and, in particular, the track-record components. Over my academic career, I’ve:

  • Been part of judging panels for niche academic association committees that gave out travel and small grants,
  • Been invited onto a university’s fellowship selection panel, and
  • Assessed for a bunch of international funding bodies (in Australia, Canada, and Hong Kong).

I’m not claiming that my process is necessarily best practice, but I thought it might be useful for you to gain insight into one assessor’s valuations (and, it has to be said, biases).

Each funding scheme’s selection criteria may differ in detail but the two basic elements of track-record and project idea are always there.

The role of the assessor, for me, is in gauging the quality and feasibility of the overall proposition. The fact that the ARC now gives ‘feasibility’ an overt weighting in the Discovery scheme gives rise to interesting conversation (but that’s for another post!).

What do I look for when assessing the track-records of researchers on grant applications?


Rational responses to referees

Preliminary evidence appears to show that this approach to responding to referees is – on balance – probably sub-optimal. (Photo by Tseen Khoo)

This post is co-authored by Adam Golberg of Cash for Questions (UK), and Jonathan O’Donnell and Tseen Khoo of The Research Whisperer (Australia).

It arises out of a comment that Jonathan made about understanding and responding to referees on one of Adam’s posts about what to do if your grant application is unsuccessful. This seemed like a good topic for an article of its own, so here it is, cross-posted to our respective blogs.

A quick opening note on terminology: We use ‘referee’ or ‘assessor’ to refer to academics who read and review research grant applications, then feed their comments into the final decision-making process. Terminology varies a bit between funders, and between the UK and Australia. We’re not talking about journal referees, although some of the advice that follows may also apply there.


There are funding schemes that offer applicants the opportunity to respond to referees’ comments. These responses are then considered alongside the assessors’ scores/comments by the funding panel. Some funders (including the Economic and Social Research Council [ESRC] in the UK) have a filtering process before this point, so if you are being asked to respond to referees’ comments, you should consider it a positive sign as not all applications get this far. Others, such as the Australian Research Council (ARC), offer you the chance to write a rejoinder regardless of the level of referees’ reports.

If the funding body offers you the option of a response, you should consider your response as one of the most important parts of the application process. A good response can draw the sting from criticisms, emphasise the positive comments, and enhance your chances of getting funding. A bad one can doom your application.

And if you submit no response at all? That can signal negative things about your project and research team that might live on beyond this grant round.

The first thing you might need to do when you get the referees’ comments about your grant application is kick the (imaginary) cat.* This is an important process. Embrace it.


Professionally judgemental

“Do you think it’s any good?”

If this kind of question gets you excited, you may well have an assessor’s enthusiasm that will endear you to academic journals, granting bodies, and publishers the world over.

As I’ve mentioned in previous posts, I was a journal editor for about five years. It’s a role that has numerous highs and lows.

Highs? Pulling together a really tight, quality issue. Expanding circulation and frequency. The lavish 3-course lunch put on annually by a juggernaut publisher.

Lows? Reading reams of bad papers. Fitting the editing work around an already-packed schedule. Dealing with unprofessional, always-late, or just plain nasty reviewers. It’s the latter aspect that I’ll focus on for this post.

The process of reviewing well for journals is a complex skill. Because of my own disciplinary background, I’m speaking mostly about humanities publications, but much of what I say applies across many fields.

First, what exactly am I talking about?