People seem to be talking about how the peer review funding system is in crisis – more applications, less funding, not enough reviewers, etc. Always eager to help, here’s this Research Whisperer’s Top Five ideas on how funding organizations can reduce their workloads.
My examples are based upon the Australian Research Council (ARC), as this is the system I know best. However, these ideas could be adopted by any funding agency. After all, I pinched three of them from existing funding schemes.
1. Review a set number of applications
One of the fears seems to be that a rising number of applications is forcing the quality of peer review down. The thinking goes like this: more applications means more reviews required. More reviews means more applications per reviewer (on average). Reviewers, therefore, may be spending less time on each review, or even be refusing to review applications. Without suitable reviews, the system of peer review falls over, catastrophically.
If this fear is justified, one response could be to cap the number of applications accepted. This process is known as ‘demand management’ in the UK context. The ARC knows how many admin people it has, how many people it has on the College of Experts, how many assessors it sends applications to, how many reply, how long the average application is for each scheme. These figures could be munged together to provide an upper limit of applications that the ARC would accept for each funding scheme that it runs.
If they know their capacity, the question then becomes how to ensure that only that number of applications is submitted each round.
There are a few ways that they could do this:
i) Institutional limits:
Universities and other organizations could be given a maximum number of applications. This is what a number of philanthropic funding agencies do to cope with limited administrative resources. Let the universities work out which applications go forward. We could call this the ‘blood on the floor’ scenario.
ii) Sector limits:
Groups of universities (like the Australian Technology Network) could be given a maximum number of applications. The sector groups could argue about what their appropriate allowance should be, on behalf of their members. They then divide the allocation between their members. We could call this the ‘inmates running the asylum’ scenario.
iii) Individual limits:
The ARC could reduce the number of applications per person. Many art support schemes, for example, only allow one application per individual per scheme per round.
A great way for the ARC to improve on the current situation would be to stop people from claiming more than 100% of their time on research. “How can you spend more than 100% of your time on research?” I hear you ask. At the moment, the ARC has rules about how many applications a person can submit. You can be working 25% on 5 applications (i.e. 125%), and the ARC will fund them all. A simpler method might be to say that people can only work on, or plan to work on, 1 Effective Full Time Staffing Unit (i.e. 100%) of research at a time. So, if people list themselves at 25% on their applications, they would only be able to put in four applications at a time, or fewer if they already held successful grants. If they have a 100% fellowship, then they can only work on that fellowship.
This works across schemes, too. The ARC requires every applicant to list all their current grants and submitted applications. If they asked for the time fraction on each of these grants and applications, this would give them all the information they need to limit people to 100% research time across all funding schemes.
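The cap described above amounts to a simple bookkeeping rule. Here is a minimal sketch of how it might be checked; the function name, the time-fraction inputs, and the 1.0 (i.e. 100%) cap are my own illustrative assumptions, not an actual ARC system.

```python
# Hypothetical check of a 100% research-time cap across grants and
# applications. All names and numbers are invented for illustration.

def remaining_capacity(current_grants, pending_applications, cap=1.0):
    """Return the research-time fraction still available to commit."""
    committed = sum(current_grants) + sum(pending_applications)
    return cap - committed

# A researcher at 25% on two funded grants with one 25% application pending:
grants = [0.25, 0.25]   # time fractions on active grants
pending = [0.25]        # time fractions on submitted applications
spare = remaining_capacity(grants, pending)
print(spare)  # 0.25 -> room for one more application at 25%, no more
```

Under this rule, a fifth 25% application (totalling 125%) would simply be rejected at submission time.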
2. Change the measure of success
Currently, universities measure their grant success by counting the number of successful applications.
This is a strange measure to use, since it probably results in smaller grants over time. Counting the number of successful applications encourages granting bodies to give more grants, which means smaller amounts are given out.
Stay with me here (there will be maths). The ARC has its success ratio pegged at 20% for its major grants schemes. It makes sure that 20% of applications get funded every year. In part, this is because a fall below 20% might be politically untenable.
In the face of increasing numbers of applications and a fixed funding envelope, it maintains the 20% figure by reducing the amount given to each successful applicant. Clearly the best answer is to increase the funding envelope. However, that isn’t within the purview of the funding agency. It is a political decision and the ARC believes that it won’t happen in the current political climate.
The ARC system works roughly like this: the highest ranked applications receive almost 100% of their request. Lower ranked applications receive a smaller proportion of their funding request. The lowest 80% of applications receive nothing at all.
Knowing this, the universities should be reporting on the percentage of their request that was funded. This effectively represents the quality of applications submitted. Instead of reporting the number of successful grants (which encourages the ARC to give more grants of less money), universities should skite about the quality of the applications that they submitted. Over time, this might encourage the ARC to give fewer grants of larger amounts.
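The difference between the two measures is easy to see with toy numbers. This sketch compares the current count-of-successes measure with the proposed percentage-of-request-funded measure; the dollar amounts and field names are invented for illustration.

```python
# Hypothetical comparison of two grant-success measures for one
# university's round. All figures are made up for illustration.

applications = [
    {"requested": 500_000, "awarded": 480_000},  # highly ranked, near-full
    {"requested": 400_000, "awarded": 200_000},  # lower ranked, half funded
    {"requested": 300_000, "awarded": 0},        # unsuccessful
]

# Current measure: how many applications got anything at all.
successes = sum(1 for a in applications if a["awarded"] > 0)

# Proposed measure: what share of the total request was actually funded.
funded_fraction = (sum(a["awarded"] for a in applications)
                   / sum(a["requested"] for a in applications))

print(successes)                  # 2
print(round(funded_fraction, 2))  # 0.57
```

Counting successes, this university looks like it won two grants; by the proposed measure, it secured only 57% of what its researchers said they needed, which is the number that actually reflects application quality.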
3. Eliminate the track record
The Gates Foundation gives out preliminary grants via a two page application form, and so does the Australian Centre for Health Service Innovation. They do this by concentrating on the value of the idea, rather than the track record of the researcher.
Eliminating the track record would reduce the size of most applications by about 75% since each application often has 3-5 people, with 10-page resumes attached. Smaller page counts mean less reading, which means happier reviewers (in theory). Making the grant peer review process into a double-blind one brings it in line with the journal peer review process, which would give a nice sort of symmetry to everything (neatness counts in my world).
4. Eliminate the budget
Both of the grant programs mentioned above have eliminated the budget from the application. They provide a fixed amount of funding.
Admittedly, providing a fixed amount of funding wouldn’t reduce the number of applications. In fact, it might even increase them, as it would make them easier to write. It would also make them much easier to review and compare (apples with apples and all that) since all applications are effectively asking for the same amount.
It would stop forcing people to create detailed budgets for projects that will almost never be fully funded. During the application process, RO peeps work with applicants to construct carefully costed budgets. As soon as funding is announced, researchers start changing their research plans to accommodate the reduced amount of funding.
A fixed budget means that every successful grant would be fully funded. This makes planning much easier, both for the funding agency and researcher.
5. Eliminate the application entirely
Having gotten rid of the budget and the researchers’ resume, the next logical step is to eliminate the application entirely. Some schemes, like the MacArthur Foundation Fellowship program and Myer Foundation Creative Fellowships, have done this. You don’t apply. In the case of the MacArthur Foundation, they call you. For the Myer Foundation, someone else nominates you.
The ARC could do this for some senior fellowships and grants to eminent researchers. They could review a senior researcher’s past performance and award funds accordingly. They have all the information that they require, either from university performance reporting, citation services, or their own funding results.
This would not work for postgrads or early career researchers, or people coming back from a break. It would work for senior fellowships and institutional funding.
This approach would create a space where researchers could band together, putting their awarded funds to projects that they believe are useful. This is peer review at its best: senior researchers making plans to pool their funds to create the best outcomes once they know how much funding they have.
The ARC, however, already knows that this would be deeply unpopular with universities. They keep offering to calculate research infrastructure for universities based on historical data. They can show that an automatic process works just as well as the current laborious information collection, but universities don’t want it. They would prefer to do the work and take their chances.
And that’s what it comes down to sometimes: You would prefer to do the work and take the chance, because the other option reduces the illusion of control.