Artificial intelligence (AI) agents are no longer simply tools of science; they now act as “co-scientists,” participating in all stages of research design and analysis. Traditionally, researchers start with a well-defined question or problem, such as predicting protein structure from amino acid sequences, and then develop or apply AI tools (such as AlphaFold) to solve that specific problem. Over the past year, researchers have increasingly begun to use AI as co-scientists that participate in a broader range of scientific activities, including generating hypotheses, designing experiments, and writing papers.1,2,3,4,5 These AI co-scientists build on advances in AI agents: autonomous systems based on large language models (LLMs) that can use tools, access external databases, and search the scientific literature.
While there are promising examples of AI co-scientists designing nanobodies and generating experimentally validated hypotheses, the field is still emerging.1,2 Many fundamental questions remain open: How creative are AI agent scientists? How should human researchers collaborate with them? How capable are LLMs of reviewing scientific papers? These questions are difficult to study because journals and conferences currently prohibit AI co-authors and LLM reviewers, and researchers often conceal how they use AI.5





