People seem to be talking about how the peer review funding system is in crisis – more applications, less funding, not enough reviewers, etc. Always eager to help, here’s this Research Whisperer’s Top Five ideas on how funding organizations can reduce their workloads.
My examples are based upon the Australian Research Council (ARC), as this is the system I know best. However, these ideas can be adopted by any funding agency. After all, I pinched three of them from existing funding schemes.
1. Review a set number of applications
One of the fears seems to be that a rising number of applications is forcing the quality of peer review down. The thinking goes like this: more applications means more reviews required. More reviews means more applications per reviewer (on average). Reviewers, therefore, may be spending less time on each review, or even be refusing to review applications. Without suitable reviews, the system of peer review falls over, catastrophically.
If this fear is justified, one response could be to cap the number of applications accepted. This process is known as ‘demand management’ in the UK context. The ARC knows how many admin people it has, how many people sit on its College of Experts, how many assessors it sends applications to, how many reply, and how long the average application is for each scheme. These figures could be munged together to provide an upper limit on the number of applications that the ARC would accept for each funding scheme that it runs.
If they know their capacity, the question then becomes how to ensure that no more than that number of applications is submitted each round.
There are a couple of ways that they could do this:
i) Institutional limits:
Universities and other organizations could be given a maximum number of applications. This is what a number of philanthropic funding agencies do to cope with limited administrative resources. Let the universities work out which applications go forward. We could call this the ‘blood on the floor’ scenario.
ii) Sector limits:
Groups of universities (like the Australian Technology Network) could be given a maximum number of applications. The sector groups could argue about what their appropriate allowance should be, on behalf of their members. They then divide the allocation between their members. We could call this the ‘inmates running the asylum’ scenario.
iii) Individual limits:
The ARC could reduce the number of applications per person. Many art support schemes, for example, only allow one application per individual per scheme per round.
A great way for the ARC to improve on the current situation would be to stop people from claiming more than 100% of their time on research. “How can you spend more than 100% of your time on research?” I hear you ask. At the moment, the ARC has rules about how many applications a person can submit, but you can list yourself as working 25% on 5 applications (i.e. 125% of your time), and the ARC will fund them all. A simpler method might be to say that people can only work on, or plan to work on, 1 Effective Full-Time Staffing Unit (i.e. 100%) of research at a time. So, if people list themselves at 25% on their applications, they would only be able to put in four applications at a time, or fewer if they already had successful grants. If they have a 100% fellowship, then they can only work on that fellowship.
This works across schemes, too. The ARC requires every applicant to list all their current grants and submitted applications. If they asked for the time fraction on each of these grants and applications, this would give them all the information they need to limit people to 100% research time across all funding schemes.
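The cap described above is simple arithmetic. As a rough illustration (the function names and figures below are invented for this sketch, not ARC data structures), a check across an applicant's grants and pending applications could look like this:

```python
# Hypothetical sketch of the proposed 100% EFT cap.
# Fractions are the time percentages an applicant lists
# across current grants and submitted applications.

def total_commitment(fractions):
    """Sum an applicant's listed time fractions (as percentages)."""
    return sum(fractions)

def within_cap(fractions, cap=100):
    """True if the applicant stays within 1.0 EFT (100%)."""
    return total_commitment(fractions) <= cap

# An applicant at 25% on five applications totals 125%,
# exceeding one Effective Full-Time Staffing Unit:
print(within_cap([25, 25, 25, 25, 25]))  # False
print(within_cap([25, 25, 25, 25]))      # True
```

Since applicants already list all current grants and submitted applications, adding a time fraction to each entry is the only extra data the check would need.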
2. Change the measure of success
Currently, universities measure their grant success by counting the number of successful applications.
This is a strange measure to use, since it probably results in smaller grants over time. Counting the number of successful applications encourages granting bodies to give more grants, which, within a fixed funding envelope, means smaller amounts are given out.
Stay with me here (there will be maths). The ARC has its success ratio pegged at 20% for its major grants schemes. It makes sure that 20% of applications get funded every year. In part, this is because a fall below 20% might be politically untenable.
In the face of increasing numbers of applications and a fixed funding envelope, it maintains the 20% figure by reducing the amount given to each successful applicant. Clearly the best answer is to increase the funding envelope. However, that isn’t within the purview of the funding agency. It is a political decision and the ARC believes that it won’t happen in the current political climate.
The ARC system works roughly like this: the highest ranked applications receive almost 100% of their request. Lower ranked applications receive a smaller proportion of their funding request. The lowest 80% of applications receive nothing at all.
Knowing this, the universities should be reporting on the percentage of their request that was funded. This effectively represents the quality of applications submitted. Instead of reporting the number of successful grants (which encourages the ARC to give more grants of less money), universities should skite about the quality of the applications that they submitted. Over time, this might encourage the ARC to give fewer grants of larger amounts.
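To make the two measures concrete, here is a toy comparison (the dollar figures are invented for illustration, not real ARC results). The current measure counts funded applications; the proposed one reports what fraction of the total request was actually funded:

```python
# Illustrative comparison of the two success measures.
# All figures are made up for this sketch.

applications = [
    {"requested": 500_000, "awarded": 450_000},  # highly ranked: near-full funding
    {"requested": 300_000, "awarded": 180_000},  # funded at a reduced level
    {"requested": 400_000, "awarded": 0},        # unsuccessful
]

# Current measure: count of successful applications.
successes = sum(1 for a in applications if a["awarded"] > 0)

# Proposed measure: fraction of the total request that was funded.
funded_fraction = (
    sum(a["awarded"] for a in applications)
    / sum(a["requested"] for a in applications)
)

print(successes)                  # 2
print(round(funded_fraction, 3))  # 0.525
```

Under the first measure, more (smaller) grants always looks better; under the second, a university only improves its score by submitting applications that rank highly enough to be funded close to their full request.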
3. Eliminate the track record
The Gates Foundation gives out preliminary grants via a two-page application form, and so does the Australian Centre for Health Service Innovation. They do this by concentrating on the value of the idea, rather than the track record of the researcher.
Eliminating the track record would reduce the size of most applications by about 75%, since each application often has 3-5 people, with 10-page resumes attached. Smaller page counts mean less reading, which means happier reviewers (in theory). Making the grant peer review process double-blind would also bring it in line with the journal peer review process, which would give a nice sort of symmetry to everything (neatness counts in my world).
4. Eliminate the budget
Both of the grant programs mentioned above have eliminated the budget from the application. They provide a fixed amount of funding.
Admittedly, providing a fixed amount of funding wouldn’t reduce the number of applications. In fact, it might even increase them, as it would make them easier to write. It would also make them much easier to review and compare (apples with apples and all that), since all applications would effectively be asking for the same amount.
It would also stop forcing people to create detailed budgets for projects that will almost never be fully funded. During the application process, research office staff (RO peeps) work with applicants to construct carefully costed budgets. Then, as soon as funding is announced, researchers start changing their research plans to accommodate the reduced amount of funding.
A fixed budget means that every successful grant would be fully funded. This makes planning much easier, both for the funding agency and researcher.
5. Eliminate the application entirely
Having gotten rid of the budget and the researchers’ resume, the next logical step is to eliminate the application entirely. Some schemes, like the MacArthur Foundation Fellowship program and Myer Foundation Creative Fellowships, have done this. You don’t apply. In the case of the MacArthur Foundation, they call you. For the Myer Foundation, someone else nominates you.
The ARC could do this for some senior fellowships and grants to eminent researchers. They could review a senior researcher’s past performance and award funds accordingly. They have all the information that they require, either from university performance reporting, citation services, or their own funding results.
This would not work for postgrads or early career researchers, or people coming back from a break. It would work for senior fellowships and institutional funding.
This approach would create a space where researchers could band together, putting their awarded funds to projects that they believe are useful. This is peer review at its best: senior researchers making plans to pool their funds to create the best outcomes once they know how much funding they have.
The ARC, however, already knows that this would be deeply unpopular with universities. They keep offering to calculate research infrastructure for universities based on historical data. They can show that an automatic process works just as well as the current laborious information collection, but universities don’t want it. They would prefer to do the work and take their chances.
And that’s what it comes down to sometimes: You would prefer to do the work and take the chance, because the other option reduces the illusion of control.
I agree that making the grant application and review process more efficient is important – we seem to waste a lot of time and effort allocating grant money. While I tend to agree with your 4th and 5th points, I’m much less certain about points 1-3.
1. I don’t like the idea of institutional limits to grants. It would have two poor outcomes. Firstly, the burden for grant assessment would be added to universities. And those grants that were put forward would be reviewed in two separate rounds. I don’t think that would reduce the total amount of assessment done – it would just shift how it was done and by whom, and it might actually increase the amount of assessment. Secondly, internal university politics might have a large influence on which grants were put forward and which were not, rather than the merits of a particular project relative to others in that discipline.
2. Universities quote measures of success that make them look good. There is not one universal measure of success. But plenty of universities quote the success rate already, as do individual researchers.
3. How closely does track record correlate with performance on new grants? If it is high, then it seems reasonable to retain track record to at least some extent. If it is low, then it seems a candidate for cutting.
4. I’m a big fan of eliminating the budget. It takes a lot of time to prepare, staff in university research offices look at the budgets closely, yet they are mostly irrelevant because researchers don’t get the money they ask for anyway. Preparing them is mostly a waste of time. Funding could be assigned in three classes (for example), and people simply nominate the level of funding they seek, and the funding agency decides on the level provided. Justification would only need to be cursory (e.g., I need salary for a post-doc to work on this, plus some funds for travel, equipment, etc). This reflects reality of how people think about budget requirements, and how funding is allocated. The funding agency would have strict rules on what the money could be spent on, and this expenditure would be audited to make sure it complies.
5. I think there is a case to be made for highly successful applicants to be simply granted some funds. For example, if an applicant is successful on three grants in succession, then the applicant would automatically get a grant in the next round for which they are eligible. I’ve suggested these last two options (points 4 and 5) to the ARC previously. I think they have merit for reducing wasted time of leading researchers.
Thanks for a wonderful, thoughtful reply. I pretty much agree with all your points.
For some schemes, the universities already undertake a de facto assessment process. I don’t deal with students, so I’m not familiar with how Australian Postgraduate Awards (APA) are distributed now, but it used to be that universities ranked their applications internally. Having said that, I agree that internal university ranking is not the best way.
If the Australian Research Council (ARC) were to bring in a limit, my bet is that they would go for option (iii) and further tighten the individual limits. But I’m just guessing.
I suggested deleting track record mostly because it is such a bulky part of the application. Frankly, I don’t think that it is going away any time soon – people are wedded to it. However, it can work against innovation sometimes, which is why the two schemes I quoted have ditched it.
I think it was South Africa that used to award funding purely on track record. I think that they used to have a system somewhat like a fellowship, whereby you received funding based entirely on your past performance. But that was a long time ago – I don’t know how they do it now.
While on the subject of innovative ways to allocate funds, have you seen that Deakin have partnered with Pozible to seek funding directly from the public? Given your research interests, you might like to have a look at “Discovering Papua New Guinea’s Mountain Mammals” by Euan Ritchie and Jim Thomas.
They have raised over $7,500 so far, and still have three weeks to go in their funding campaign. I hope that they can make it to the full $20,000 that they are looking for.
Thanks. APAs are awarded by unis largely by “formula” based on marks, with adjustments for other things like publications. It is somewhat automated, which means it doesn’t take long to do, but does not always mean the best researchers get scholarships. I’m not sure if there could be a better and more efficient system.
I’ve seen Euan’s efforts via Pozible. I’ve been pinging Stephen Fry in tweets to try to get him to advertise it (without success so far!).
It is definitely heartbreaking to see all the wasted time and money spent on the development of funding applications and the assessment of them. “Isn’t there a better way?” is surely an oft-asked question, thanks for this very interesting post giving some possible answers.