I saw a graph recently (see here, source: Exponential View via Ethan Mollick) showing the research capabilities of AI models, as measured by the GPQA Diamond benchmark. At the moment OpenAI's o3 model is roughly at the level of recent PhD graduates answering questions in their own field, and the improvement in these models shows no sign of slowing down. I think the existence of these LLMs raises interesting questions about whether certain research fields will still be funded in the years to come.
I'm a final-year graduate student researching pure mathematics. ChatGPT has got to the point where I consistently use it in my research: on roughly 65% of the queries I ask, it gives a reasonable response that lets me make progress faster than I would have without it. This is a big change from last year, when the models couldn't even understand the questions. I see no reason for the research capabilities of these LLMs to stop at PhD level, and as a result I struggle to see why funding bodies will continue to fund research projects in theoretical fields when it will be many orders of magnitude cheaper and quicker to pose these research questions to AI systems.
So my, perhaps depressing, outlook is that funding in these fields will fall significantly over the next few years, and that the structure of academia in the sciences will shift towards carrying out experiments, with the results analysed by AI systems. In fields like mathematics there will therefore be fewer postdocs and permanent positions. This is one of the reasons I've decided not to apply for postdocs.