Dr Muriel E Swijghuisen Reigersberg is a researcher development manager (strategy) at The University of Sydney, Australia, and previously worked at Goldsmiths, University of London, UK. At Sydney, she oversees the development of a University-wide researcher development training program in collaboration with researchers, faculty staff and professional service units.
In her spare time, Muriel maintains an academic profile in applied and medical ethnomusicology, regularly presenting at academic conferences, penning academic texts, peer reviewing and blogging. She has also offered consultancy support to specialist research institutes in arts and humanities in Slovenia and Japan. Muriel is a keen supporter of the responsible sharing of academic knowledge.
This article is a response to Tseen Khoo’s great post Nowhere to hide (29th August), in which she wonders whether vice-chancellors are capable of understanding the struggles and working conditions early career researchers face in today’s university.
I’ll unpack some of the issues Tseen raises, wearing my third-space hat: that of the research manager/researcher.
First, I should say that perhaps I am lucky. Not all vice-chancellors are the same, and my encounters with senior staff, including the odd vice-chancellor, have been strikingly positive. I say ‘strikingly’ because when I was a junior administrator and early career researcher, I never had access to the upper echelons of the university.
Now that I do more regularly, and have had some conversations with senior colleagues, I have come to understand that issues such as fixed-term contracts, metrics, ‘enforced’ mobility, and definitions of what an early career researcher is (in terms of age, etc.) are rather more complex than I’d first imagined.
So this article is as much about my personal learning as it is about the lamentable state of affairs.
The ‘easier’ challenge to address is the question of how an early career researcher is defined, in terms of age. When I was working on grant submissions, I noted that funding bodies defined ‘early career researcher’ in a variety of ways. Some said up to 3 years post-PhD, some 5 years, and nowadays some funders go as far as 10 years post-PhD; others include PhD students, or cut off early career researcher status at the point of promotion to Associate Professor. It is all a bit bamboozling and a real nightmare when it comes to eligibility checking, the pitching of researcher development events, and resourcing. Many funders have also changed their definitions in response to the economic and employability downturn post-2008, so I have some sympathy for the vice-chancellors who think all early career researchers are ‘young’. They are not entirely wrong, and ‘young’ is a relative term, after all, particularly in the academy.
Now, fixed-term contracts. Diabolical and stressful. I agree. Unlike one respondent to Tseen’s article, though, I believe vice-chancellors are not necessarily to blame. There are external factors that affect a vice-chancellor’s ability to allocate resources to salaries and employment stability, as well as internal university challenges. Universities do many things, and teaching and infrastructure budgets compete with research budgets for a hearing during the annual budget rounds. Research council grant income is never a profit-making income stream, so arguing for additional long-term investment in research salaries and activity is hard when labs need to be built and students need to be supported. Increased investment usually also comes at a price: the metrics and performance indicators required to justify local and public expenditure. More on metrics later.
Fixed-term contracts are also very much the result of the way research is funded at a national level and of the saturation of the academic job market. In the UK, for instance, the Haldane Principle and the dual support system mean that universities have access to quality-related research income via the Higher Education Funding Council for England (HEFCE), which supports research salaries and activities. This income stream is complemented by competitive research income and industry income, which is where fixed-term postdoc contracts feature most heavily. However, the number of PhD-qualified colleagues far outstrips the positions available through both streams combined, and there is still a significant stigma attached to leaving the academy. Colleagues in countries where research is funded through competitive grants alone may find that employment stability is even more fragile. This, again, is not something vice-chancellors can remedy individually and, after all, an educated workforce is desirable nationally. The question that should be asked is: should all PhD-qualified staff remain in higher education, or should some put their skills to use elsewhere?
What vice-chancellors can perhaps do is openly support colleagues who wish to leave higher education, allocate resources to facilitate this, and encourage academic culture to adapt accordingly. Like Tseen, I think the ‘deficit model’ must be done away with — swiftly, and by the academy at large. Additionally, I think aspiring academics should be informed at a very early stage what a career in the academy looks like, so that they enter it from an informed position. The responsibility for providing that detail is shared: it could lie with careers services, supervisors and mentors, with vice-chancellors offering their blessing and resources for this type of engagement.
Now, for that tricky topic: metrics and performance indicators.
My remaining 250 words are insufficient to address the complexity of the metrics debate; the variety of metrics available also seems to grow daily. Thankfully, the UK has helped me here by commissioning The Metric Tide, an independent national review of why citation metrics in particular are not a robust way of assessing and administering research excellence (whatever ‘excellence’ might look like or mean). The Higher Education Funding Council for England has also created a Forum for Responsible Research Metrics to explore which metrics, if not bibliometrics, might be used in research assessment.
This UK response is due, in part, to the fact that in the UK research assessment is tied up with the aforementioned quality-related research income through the Research Excellence Framework (REF). The REF is a hugely time-consuming and competitive exercise, precisely because money is attached. Australia has opted for a less time-intensive process, Excellence in Research for Australia (ERA), which relies heavily on bibliometrics and citation indices; quality-related income is not allocated in the same way as it is in the UK. While less onerous to administer, ERA is arguably also less accurate in identifying research excellence. What these measuring exercises have in common, though, is that they are increasingly used to ‘inform’ league tables and (internal and external) allocations of funding and resources, both nationally and internationally.
As Tseen points out, research is a global market. Researcher mobility and competition mean that league tables are seen as necessary to attract talented staff, students and so on. Some colleagues also do rather well by these tables, thanks very much. Consequently, some see no problem in playing citation games and hiring and firing according to h-indices. Nor do I think it realistic to believe that metrics will disappear in this global research market. Numbers of a kind other than citation indices might still tell us something; what exactly remains to be determined by senior colleagues.
Many vice-chancellors are all too aware of these tensions. Unless our senior leaders at multiple levels stage a joint ‘revolution’ with their peers, and are willing to risk losing successful staff to institutions that do play the citation game, not much will change. A change in culture and transparency is needed, and what this change might look like will be determined by international, national and local drivers.
Senior colleagues in positions of leadership can help determine these drivers and foster change. They will need to do this together, not just locally but nationally and internationally, supporting transparent environments in which institutions clearly state how, and which, metrics are used to report on research-related activity such as recruitment and retention. They can do so through regular communication, institutional statements, or by supporting initiatives like the San Francisco Declaration on Research Assessment (DORA) through action. This would support the career development of many researchers, including early career researchers. As I hope I have shown, supporting early career researchers and researcher development must be a joint endeavor for which everyone takes responsibility.
Meanwhile, I hope at least some vice-chancellors will help me with my ‘revolution’.