This article is part of an ongoing series on how researchers can use LLMs in writing and thinking without becoming overly dependent on them.
I am a clinical doctor and a PhD student. In this blog I often introduce ways to use LLMs for collecting information, but I also use them when writing research articles. They are extremely useful, yet overusing them carries several risks.
I am not an AI specialist, but I try to understand the mechanics of LLMs to the extent possible. I therefore approach them from the perspective of a working researcher, combining my own trial-and-error experiences with discussions and advice from experts.
This series follows the research writing process, from generating ideas to drafting individual sections of a paper.
This first article discusses how LLMs should be positioned in that process.
Many researchers already use LLMs in multiple parts of the research workflow — drafting manuscript text, adjusting tone and phrasing for academic style, summarizing prior studies, and organizing ideas during writing. These roles became common around 2024, when I began introducing AI tools for researchers. In fact, many of the AI-powered research tools that emerged during this period already incorporate these functions as core features for academic writing.
This way of using LLMs is particularly valuable for non-native English speakers like me. Writing academic prose in English can be a significant barrier to publishing in international journals, but since the emergence of LLMs, this barrier has become considerably lower.
However, there is still plenty of debate around LLM use, for two main reasons. From a broader, societal perspective: does widespread LLM use actually benefit science in the long run? The number of research articles is already exploding, and LLMs risk making the situation worse by fueling AI-assisted paper mills. In fact, considering the growing burden on peer reviewers, I believe this situation is already getting worse.
From an individual perspective: LLMs can take over the very activities — synthesizing evidence, constructing logical arguments — that sharpen a researcher's thinking. Over-reliance could erode our capacity for original thought, which would be devastating for generating innovative ideas.
On top of these concerns, new tools and prompting strategies for boosting efficiency seem to appear every week. Frankly, I'm tired of seeing "10 prompts you must use for academic writing" or "Top 10 AI tools every researcher needs!" The more important question is not which tool or which prompt, but how LLMs fit into the way you think and write — and that is exactly what this series tries to address.
Writing manuscripts with LLMs, I feel these tensions firsthand, and I want to articulate and share what I have learned about how LLMs fit into day-to-day research work. So, to begin, I'd like to consider both how to use LLMs for scientific writing and how not to use them, with one guiding principle: use LLMs without outsourcing your thinking.
Why LLMs Are Not an Alternative Writer for You
LLMs are extremely useful. They are fast, knowledgeable, and never get tired. However, I honestly do not think they are well suited for a task like writing a research paper. They are ultimately just an input–output interface, and their output depends heavily on the input they receive.
Shaping this input is often described as context engineering. The input to an LLM does not consist only of prompts; it can also include memory, user preferences, and external knowledge retrieved through systems such as retrieval-augmented generation (RAG).
This concept is becoming increasingly common. Major LLM systems such as GPT, Claude, and Gemini now maintain user-specific memory and context.
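To make this concrete, here is a minimal sketch of how such a request might be assembled. Everything in it is hypothetical (the helper names, the stored memory, the retrieved passage); real products and SDKs differ in detail, but the principle is the same: the model "knows" only what gets packed into the single request it receives.

```python
# Minimal sketch of context engineering: the model sees only what we
# assemble into one request. All names and contents here are hypothetical.

def retrieve_relevant_passages(query: str) -> list[str]:
    # Stand-in for a RAG step: in practice this would query a vector
    # index of papers or notes and return the top-matching passages.
    return ["Smith et al. (2023) reported X under condition Y."]

def build_context(user_prompt: str) -> str:
    memory = "The user is writing a clinical research manuscript."  # stored memory
    preferences = "Prefer concise, formal academic English."        # user preferences
    retrieved = "\n".join(retrieve_relevant_passages(user_prompt))  # external knowledge (RAG)

    # Everything the model will "know" for this turn is concatenated here.
    return (
        f"Background: {memory}\n"
        f"Style: {preferences}\n"
        f"Retrieved evidence:\n{retrieved}\n\n"
        f"Task: {user_prompt}"
    )

print(build_context("Draft a sentence summarizing the evidence for X."))
```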
However, even with this additional context, LLMs still struggle to produce exactly what you want in a research article.
When we write a manuscript, we usually do not fully understand all the information we will ultimately need. A research article typically presents a novel argument built on previous studies and supported by evidence, including experimental results. In other words, it forms a network of knowledge, from which a single path is selected to construct a coherent storyline.
To build this argument, we must:
- choose relevant facts (from both previous studies and our own results),
- construct relationships among them (interpretation),
- and then select a narrative path that supports the central claim.
Importantly, this process is not linear. While writing the narrative, we often realize that something is wrong and rewrite it from scratch. While building the conceptual network, we may notice missing information and return to the data or literature to add new pieces of evidence.
This iterative process constantly involves information that lies outside the current context. That is why LLMs cannot function as an alternative writer for you.
Misusing LLMs Can Hinder Your Ability to Think Through Problems
“Writing is thinking.”
This phrase was used as the title of a Nature article in 2025:
https://www.nature.com/articles/s44222-025-00323-4
I strongly agree with this idea.
Overusing LLMs can gradually erode this thinking process.
The key intellectual process for researchers is the one described above: connecting pieces of knowledge to form new arguments. Sometimes this requires a conceptual leap — linking ideas that initially appear unrelated.
These connections can come from many sources: something seen at a conference, a conversation with colleagues, or a sudden insight in daily life (like the famous story of Newton’s apple, even if the story itself may be partly fictional).
This kind of implicit knowledge remains difficult for LLMs to access. LLMs operate primarily through statistical relationships between words and sentences. As a result, rare or previously unnoticed connections may be overlooked.
Another risk is that excessive reliance on LLMs can make researchers more passive in their own work. If you lean on them without actively thinking through the arguments, you may gradually lose the deeper understanding your research requires. That internalized understanding is exactly what you depend on in situations you cannot delegate, such as presentations and discussions with collaborators, and over-reliance on LLMs can noticeably weaken your performance there.
Imagine an obedient subordinate who follows instructions perfectly but never truly understands the work they are doing. They can complete tasks, but they cannot explain the reasoning behind them or develop new ideas. Relying on LLMs too heavily risks turning you into exactly that kind of subordinate, and doing so deliberately seems utterly pointless.
Then, What Should We Do with LLMs?
For the reasons described above, I prefer to use LLMs in only two specific roles.
1. Expanding on Your Own Ideas
This may seem slightly contradictory to what I said above, but the key is that I use LLMs to expand on ideas I have already generated myself. When I use LLMs for brainstorming, they frequently suggest ideas that, while not necessarily original, are ones I had not considered.
After writing a tentative version of a sentence or paragraph, I sometimes ask an LLM for alternative ways to support the argument, or for suggestions about style and tone.
If I find a suggestion useful, I adopt the idea only after rewriting it in my own words. The goal is not to copy the output but to use it as a stimulus for improving my own writing.
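To illustrate the pattern (not a template to copy), a request of this kind might look like the following sketch. The draft sentence and the wording of the prompt are invented for the example.

```python
# Hypothetical example of the "expand my own idea" pattern: the draft
# comes first, and the model is asked for alternatives and feedback,
# not for finished text to paste in.

draft = (
    "Our results suggest that marker A predicts early relapse, "
    "which may allow clinicians to intensify follow-up for high-risk patients."
)

prompt = (
    "Here is a draft sentence from my manuscript:\n"
    f"{draft}\n\n"
    "Suggest two or three alternative ways to support this argument, "
    "and comment on whether the tone fits a clinical journal. "
    "Do not rewrite the sentence for me."
)

print(prompt)  # sent to whichever LLM interface you use
```

The last instruction matters: by asking for directions rather than replacement text, the final wording stays yours.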
2. Providing an External Perspective
Another useful role is to ask the LLM to act as a particular type of reader — for example:
- a journal editor
- a peer reviewer
- a researcher in the same field
- a researcher from a different field
By asking the model to review the manuscript from these perspectives, I can check whether the storyline is understandable and whether the logic is clear. If the explanation or reasoning seems insufficient, I revise the manuscript accordingly.
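As a sketch of how this can look in practice, the personas and review questions below are examples that I would vary by manuscript; nothing here is a prescribed workflow.

```python
# Hypothetical sketch of the "external perspective" pattern: the same
# manuscript is checked through several different reader personas.

personas = [
    "a journal editor deciding whether to send this out for review",
    "a peer reviewer in the same field",
    "a researcher from an unrelated field reading the abstract cold",
]

manuscript = "...full manuscript text here..."  # placeholder

for persona in personas:
    prompt = (
        f"Act as {persona}. Read the manuscript below and answer: "
        "Is the storyline easy to follow? Where does the logic feel weak "
        "or under-explained? Quote the specific passages.\n\n"
        f"{manuscript}"
    )
    # Each prompt would be sent as a separate conversation so that
    # one persona's answers do not leak into the next.
    print(prompt[:80], "...")
```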
Of course, getting feedback from actual readers is extremely valuable, but even before that step, using an LLM alone can often reveal oversights I was not aware of.
When using any tool, it is essential to stay anchored to its fundamental purpose. LLMs are developing rapidly, even if the pace may seem slower recently. If you do not want to be at the mercy of weekly updates and new tools, it is better to focus on the underlying principles.
The goal of writing a paper is not simply to fill a manuscript with text, but to build a network of claims and supporting evidence that contributes to scientific progress. Using LLMs passively will not fulfill this goal.
Even within this framing, there is still a lot we can do. In future articles, I will continue discussing concrete ways to use LLMs based on the principles outlined here.

