Human Scientific Writing in the Age of Large Language Models
Writing scientific articles is not merely a means of reporting results; it is a cognitive tool that fosters structured thinking, enabling researchers to transform raw data and ideas into coherent narratives that carry both meaning and insight. This process is integral to the scientific method, and it is backed by evidence showing that writing, particularly by hand, enhances brain connectivity and memory. The call to preserve and value human-generated scientific writing therefore gains urgency in an era when large language models (LLMs) can swiftly produce entire articles or peer-review reports, bypassing the reflective process that traditionally underpins scholarly work.

Although LLMs can be useful for editing, summarizing, or brainstorming, they cannot assume intellectual accountability, nor can they replicate the creative judgment involved in scientific reasoning, and their output must be meticulously verified because of risks such as hallucination and citation errors. Moreover, writing is not just output but a form of inquiry that helps scientists identify the core message of their research and its broader implications. Outsourcing this process to machines may undermine the intellectual rigor and sense of ownership that come from formulating and articulating ideas oneself. While LLMs may assist non-native speakers or help overcome writer's block, full dependence on them could erode a researcher's critical engagement with their own work.

Ultimately, scientific writing is thinking, and removing researchers from the act of writing is tantamount to detaching them from the thinking that drives their science. Preserving human authorship is therefore not nostalgic resistance but a defense of epistemic integrity, ensuring that science remains a product of human insight, reflection, and responsibility.