AI in Science: Promising Potential, Persistent Pitfalls Highlighted at Agents4Science 2025
November 3, 2025
Experts cautioned that AI output can be competent yet uninspiring and that AI sometimes misframes research questions, underscoring that human-guided scientific judgment remains essential.
Across tasks from brainstorming to data processing, AI tools such as ChatGPT, Claude, and Google Gemini frequently hallucinated references, lost context, or produced technically correct but unengaging results.
Case studies showed that AI-generated writing can contain redundant text and hallucinations unless humans intervene; in another analysis, the AI fabricated sources, highlighting reliability concerns.
AI-led research was spotlighted at the Agents4Science 2025 online conference, illustrating both the capabilities and the ongoing flaws of AI as a contributor to science.
Over 300 submissions yielded 47 papers with an AI listed as sole first author, signaling a shift in authorship norms even as many journals still bar AI from authorship.
Experts nevertheless view AI as a potential accelerator if humans steer questions and priorities, positioning AI as an assistant or collaborator rather than a replacement for human inquiry.
Source

Firstpost • Nov 3, 2025
AI ‘scientists’ fail to impress human experts at first-of-its-kind research conference