AI levels the playing field for researchers applying for grants
UNITED KINGDOM
Generative AI enables individual researchers to write grant proposals at a similar speed to big teams that employ research associates or ghostwriters. Funding agencies require researchers to state when they use AI to help write a proposal – so why have they never had a problem with other people writing grants for principal investigators?
Around two years ago generative artificial intelligence took our lives by storm. Before that, most of us had already seen, usually through the news, some of the amazing feats achieved by the latest AI systems, such as producing incredible art or beating grandmasters at Go.
It was only when ChatGPT was released to the general public that we could actually play with a generative AI tool and discover for ourselves the extraordinary things that this technology can do. The developments have not stopped since, and recently a new tool was released that can create videos from simple prompts.
Generative AI seems to be growing exponentially, and many are wondering where it will go next.
As we discover and play with new AI tools, we are also realising how potentially dangerous they can be. Will AI become sentient and launch nuclear missiles? AI experts have joined forces with governments to draft safety requirements addressing the ethical and moral implications of this new technology.
In academia, we have had to face ethical and moral questions of our own – albeit on a much smaller scale – almost from day one, when generative AI tools became available to our students.
Most of us had to redesign our assessments to guard against the misuse of generative AI, and there has been a lot of discussion about what can be considered ‘fair use’. Most would agree that if a student used ChatGPT to write an entire essay, that would not be ‘fair use’.
On the other hand, there are other ways students can use generative AI, such as fixing grammar or obtaining high-level feedback before submission. I would personally consider this ‘fair use’ – and indeed, this seems to be the perspective of many universities.
AI and research proposals
As academics, there are many areas where we can use generative AI.
I have used it to fix my grammar – as a non-native English speaker – or to get ideas as to why an experiment was not working. Another area where I have used these tools extensively is writing research proposals for funding agencies. I wrote a column about this for the journal Nature late last year.
It cited a 2023 Nature survey of 1,600 researchers which found that more than 25% use AI to help them write manuscripts and that more than 15% use AI technology to help them write grant proposals.
I will not repeat my article’s contents here but let me summarise. Application forms for funding opportunities are usually extremely long, asking for many different types of documents. In many cases they are so long that unless you have a team to help you, it is almost impossible to complete them in the required timeframe.
Tools like ChatGPT are extremely helpful because they can be that ‘team’, accelerate the writing process, and therefore enable more researchers to apply for grants.
After the Nature article was published, I received dozens of emails. Something that surprised me is that many of them told a very similar story: before ChatGPT, very few principal investigators wrote their proposals themselves; they mostly relied on their research associates (post-docs) or external companies to write them.
In other words, if you had the means, writing all those long applications was not a problem. So if principal investigators are not writing proposals themselves, why should it matter whether a proposal was written by AI or by an external company?
It seems that it does matter.
Recently a colleague shared with me an application form from a British funding agency. Page 14 of this form says: “You will confirm that (…) artificial intelligence tools have not been used in writing this application, or where they have been used, they have been clearly identified in the responses.”
This particular funder is not an exception, and many others are adding similar sentences to their guidelines. Editors from scientific journals and conferences have similar requirements.
Proposal-writing hypocrisy
I would assume that such declarations about AI have been added to guarantee the integrity of academia, to make sure that the principal investigator did ‘write’ the application – but grants and papers have been ghostwritten for a long time.
Top universities have teams of grant writers who will do almost everything for you. Principal investigators or consortia very often hire private companies to ghostwrite a proposal or paper. Big research teams employ several post-docs to write proposals.
The names of the post-docs, private bid writers or university-employed research office editors are never disclosed in these grants. And funding agencies never request this information. Some would say that if you have been successful enough to be in a position to use these resources, then it is fair that you use them.
On the other hand, the vast majority of researchers run very small teams, and they don’t have the luxury of employing post-docs to write grants or of spending thousands of pounds on a bid writer. I am one of them. I write grants during the summer, at night and on weekends.
Is it fair that I compete with applications written by ghostwriters, teams of grant writers or a few post-docs working on them full time? I know many will say yes: those applicants managed to build the team that helps them. I can see their point. I have nothing against people being successful, getting more resources and building bigger teams.
But here comes the beloved ChatGPT, the great equaliser. Anyone can use it, big or small. You no longer need a team of post-docs or grant writers to write a grant without spending a month effectively working two jobs.
It enables us, researchers with small teams, to write grants at a similar speed to bigger teams. It is interesting that so many people seem to have a problem with this, yet many of them never had a problem with the idea of research teams writing grants for principal investigators.
Why are researchers required to disclose the use of AI, but not whether they hired a bid writer? Shouldn’t any kind of aid be disclosed? If only a very small group of people had access to chatbots, would funding agencies have added these rules?
My opinion is that any kind of aid must be disclosed. But if full disclosure does not become a requirement, then, just as lecturers had to redesign their assessments, funding agencies should redesign the application process to guarantee that the principal investigator, and only that person, wrote the application.
It is time for academia to update itself to the 21st century. It is time for the whole system to become fairer, less unequal and more welcoming to any individual from any background who likes science and has good ideas.
Juan Manuel Parrilla is a lecturer in robotics at Glasgow Caledonian University in the United Kingdom. His research lies at the intersection between artificial intelligence and manufacturing. He is passionate about making academia more welcoming to everyone, and updating it to the 21st century.