Adapt or Lag Behind: Why Researchers in Traditional, Complementary, and Integrative Medicine Must Master Prompt Engineering in the Era of Artificial Intelligence
The integration of generative artificial intelligence (GenAI) chatbots into medical research offers new opportunities to enhance efficiency and knowledge synthesis, particularly in traditional, complementary, and integrative medicine (TCIM). However, optimizing GenAI chatbot-generated outputs requires the strategic design of inputs to yield better quality responses. This skill is called prompt engineering.
This editorial explores the role of prompt engineering in TCIM research. It highlights its applications in literature review and evidence synthesis, data extraction and analysis, grant/scholarly applications and manuscript preparation, and enhanced research transparency and reproducibility. By inputting precisely structured prompts, researchers can maximize the utility of GenAI chatbots for scientific rigor and contextual relevance. Despite the benefits of artificial intelligence (AI), challenges such as AI bias, misinterpretation of TCIM concepts, and the ethical considerations surrounding GenAI chatbot-assisted research necessitate careful monitoring. As AI technology advances, the timely incorporation of prompt engineering into TCIM research methodologies will enhance the accuracy and efficiency of future research and positively impact TCIM studies. TCIM researchers need to develop proficiency in prompt engineering to responsibly and effectively leverage GenAI chatbots in their work.
The rapid integration of AI into various domains of medical research has brought forth novel opportunities and challenges, and TCIM research stands to benefit greatly from AI-driven tools, particularly GenAI chatbots. These chatbots, powered by large language models, offer an efficient means of processing vast amounts of literature, generating hypotheses, summarizing findings, and assisting in manuscript preparation [1]. However, optimizing their utility requires skilled prompt engineering, which may be described as the art and science of crafting precise inputs to yield GenAI chatbot-generated outputs that are more likely to be accurate and relevant (https://www.ibm.com/think/topics/prompt-engineering).
The Importance of Prompt Engineering in TCIM Research
Prompt engineering plays a critical role in medical research by shaping how GenAI chatbots interpret and respond to queries [2]. In TCIM research, where the synthesis of diverse and often culturally nuanced knowledge is essential, one may argue that crafting effective prompts is vital [3]. The complexity of TCIM research, which encompasses various traditional healing systems, herbal medicine, mind-body practices, and integrative approaches, necessitates well-structured prompts for GenAI models to extract meaningful insights. Without careful prompt design, GenAI chatbot-generated responses risk being superficial, biased, or misaligned with the rigor required for scientific inquiry [3,4]. One of the fundamental challenges in applying GenAI chatbots in TCIM research is ensuring that outputs reflect high-quality, evidence-based information. Unlike conventional biomedical research, TCIM literature is often scattered across multiple databases, including non-indexed sources, historical texts, and region-specific publications [3]. Consequently, researchers must develop prompts that direct GenAI models to prioritize authoritative sources and critically evaluate the reliability of the retrieved information. For example, instead of a general query such as “What are the benefits of acupuncture?” a more refined prompt like, “Summarize published systematic reviews on the effectiveness of acupuncture for chronic pain,” can yield more targeted responses.
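To make the contrast between a general query and a refined prompt concrete, the sketch below composes a structured prompt from explicit components such as task, scope, and output format. The `build_prompt` helper and its parameter names are purely illustrative assumptions, not part of any chatbot's API; this is one possible way to operationalize the refinement described above, not a definitive recipe.

```python
def build_prompt(task, scope=None, sources=None, output_format=None):
    """Compose a structured prompt from explicit components.

    Each optional component narrows the model's search space: the task
    states what to do, the scope constrains the evidence base, and the
    output format tells the chatbot how to present the answer.
    """
    parts = [task]
    if scope:
        parts.append(f"Limit the scope to: {scope}.")
    if sources:
        parts.append(f"Prioritize these source types: {sources}.")
    if output_format:
        parts.append(f"Present the answer as: {output_format}.")
    return " ".join(parts)

# A vague prompt supplies only the task...
vague = build_prompt("What are the benefits of acupuncture?")

# ...whereas a refined prompt, mirroring the example in the text,
# adds scope, preferred sources, and an output format.
refined = build_prompt(
    "Summarize the effectiveness of acupuncture for chronic pain.",
    scope="published systematic reviews",
    sources="peer-reviewed, indexed journals",
    output_format="a short evidence summary noting study designs",
)
```

Keeping the components separate also makes it easy to iterate on one element (e.g., the scope) while holding the rest of the prompt constant.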
Applications of Prompt Engineering in TCIM Research
1. Literature review and evidence synthesis
Prompt engineering can facilitate the automation of systematic reviews and meta-analyses. By designing prompts that guide GenAI chatbots to extract, categorize, and synthesize data, researchers can expedite literature reviews while maintaining methodological rigor [5–8]. This approach is particularly useful for navigating heterogeneous TCIM studies, where differences in study designs, patient populations, and intervention protocols pose challenges for synthesis. For instance, a GenAI chatbot can be prompted to analyze randomized controlled trials on herbal medicine interventions for specific conditions and highlight study methodologies, sample sizes, and primary outcomes. This process allows researchers to rapidly assess patterns in the literature and identify gaps that warrant further investigation.
2. Data extraction and analysis
Beyond literature reviews, GenAI chatbots can assist in data extraction by identifying key variables, such as dosage regimens, patient demographics, and treatment durations. By using structured prompts, researchers can direct GenAI models to extract quantitative data from studies, thus enabling more efficient meta-analytical assessments [9,10]. In addition, GenAI-driven natural language processing tools can help identify trends in TCIM research, such as shifts in study designs or emerging areas of interest. A well-thought-out prompt could be: “develop a list of key items to include in a data extraction form for a systematic review that aims to identify and summarize trends in the use of Ayurvedic treatments for diabetes in clinical trials, based on these uploaded articles published between 2010 and 2024,” while concurrently uploading the full-text copies of included articles. Depending on the study methodology, researchers can enhance accuracy by uploading relevant datasets, study protocols, or bibliographic information, thus allowing the GenAI chatbot to generate more precise and context-aware insights.
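One practical way to make such extraction prompts more dependable is to ask the chatbot for machine-readable output and validate it before analysis. The sketch below is a minimal illustration under stated assumptions: the prompt wording, the `parse_extraction` helper, and the chatbot reply are all hypothetical (the reply is simulated rather than produced by a real model), and the field names are examples rather than a recommended schema.

```python
import json

# Hypothetical extraction prompt asking for structured (JSON) output,
# so that the extracted variables can be checked programmatically.
EXTRACTION_PROMPT = (
    "From the uploaded trial report, extract the herbal intervention, "
    "daily dosage, treatment duration in weeks, and sample size. "
    "Respond with a single JSON object using exactly these keys: "
    "intervention, dosage, duration_weeks, sample_size."
)

def parse_extraction(raw_response):
    """Validate that a chatbot reply contains every expected field."""
    record = json.loads(raw_response)
    required = {"intervention", "dosage", "duration_weeks", "sample_size"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"Chatbot omitted fields: {sorted(missing)}")
    return record

# Simulated chatbot reply (not real study data), used here only to
# demonstrate the validation step.
simulated = (
    '{"intervention": "ginger extract", "dosage": "500 mg", '
    '"duration_weeks": 8, "sample_size": 120}'
)
record = parse_extraction(simulated)
```

Validating the reply before it enters a meta-analytical dataset gives the researcher an explicit checkpoint for the human verification the editorial calls for.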
3. Grant writing and manuscript preparation
GenAI chatbots are increasingly used to assist in drafting research proposals, grant applications, and manuscripts [11]. In TCIM research, where terminology and contextual sensitivity are critical, prompt engineering helps tailor GenAI chatbot-generated content to meet academic and funding-body requirements. Researchers can refine GenAI chatbot-generated drafts by specifying journal style guidelines (e.g., citation format) and the inclusion of specific research frameworks. Assuming the use of GenAI chatbots is permitted by the granting agency, a researcher preparing a grant application might use the following prompt: “generate a 500-word research proposal on the effects of Tai Chi on cardiovascular health which includes a background, objectives, methodology, and expected outcomes, in a format suitable for the ABC Foundation grant.” To further refine the output, the researcher could provide the GenAI chatbot with publicly available information about the grant application, such as funding guidelines, eligibility criteria, or specific formatting requirements. Supplying this additional context helps the GenAI chatbot generate a more tailored and coherent proposal that aligns closely with the expectations of the funding body.
4. Enhancing research transparency and reproducibility
An important aspect of GenAI chatbot-assisted research is ensuring transparency and reproducibility. Prompt engineering can be leveraged to improve documentation practices by generating standardized reporting templates and facilitating the consistent presentation of study findings [12]. By refining prompts, researchers can ensure that GenAI chatbots generate outputs that align with established reporting guidelines, such as the Consolidated Standards of Reporting Trials statement for clinical trials or the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement for systematic reviews [13]. For example, a structured prompt for GenAI chatbot-generated reporting might be, “create a structured abstract summarizing a systematic review on mindfulness-based interventions for anxiety disorders, which adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 for Abstracts Checklist.” The researcher may also consider uploading a blank copy of the reporting guideline checklist or pasting in the checklist items to improve the accuracy of the GenAI chatbot-generated response.
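Pasting checklist items into a prompt can be sketched as follows. This is one possible pattern, not a prescribed method: the `checklist_prompt` helper is hypothetical, and the three items shown are placeholders for illustration only, not the actual Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 for Abstracts Checklist, which the researcher would paste in from the published guideline.

```python
# Placeholder checklist items (illustrative only; substitute the items
# from the actual reporting guideline being followed).
checklist_items = [
    "Identify the report as a systematic review.",
    "State the main objective(s) of the review.",
    "Specify the information sources and search dates.",
]

def checklist_prompt(task, items):
    """Embed numbered checklist items into a reporting prompt so the
    chatbot can address each item explicitly."""
    numbered = "\n".join(f"{i}. {item}" for i, item in enumerate(items, 1))
    return (
        f"{task}\n\nEnsure the output addresses every item in this "
        f"checklist:\n{numbered}"
    )

prompt = checklist_prompt(
    "Create a structured abstract summarizing a systematic review on "
    "mindfulness-based interventions for anxiety disorders.",
    checklist_items,
)
```

Numbering the items in the prompt also gives the researcher a simple way to audit the output: each numbered requirement can be checked off against the generated abstract.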
Challenges and Ethical Considerations
The potential of GenAI chatbots is profound; however, their application in TCIM research is not without risks. Over-reliance on GenAI chatbot-generated content without proper verification can lead to the dissemination and propagation of inaccuracies, particularly in a field where scientific validation varies across modalities [14]. In addition, the inherent biases present in GenAI models, which are trained predominantly on mainstream biomedical literature, may result in the underrepresentation of TCIM perspectives. Therefore, researchers must combine prompt engineering with critical appraisal skills to mitigate these risks and uphold research integrity. Furthermore, the potential for GenAI chatbot-generated text to introduce errors or misinterpretations when summarizing complex TCIM concepts presents another challenge. For instance, some GenAI models may conflate distinct TCIM practices or oversimplify mechanisms of action. Researchers must therefore refine prompts iteratively and cross-check GenAI chatbot-generated content with authoritative sources to ensure accuracy. Moreover, ethical considerations surrounding GenAI chatbot use in research warrant attention. Issues such as authorship attribution, data privacy, and the transparency of GenAI chatbot-generated contributions must be addressed [15,16]. Additionally, the use of GenAI chatbots may be prohibited in certain research contexts, particularly when funding bodies, academic institutions, or regulatory bodies impose restrictions on GenAI chatbot-generated content. More importantly, the use of GenAI chatbots to process sensitive patient data, particularly without proper anonymization or consent, may violate ethical guidelines or data protection regulations, e.g., the General Data Protection Regulation in Europe [17] or the Health Insurance Portability and Accountability Act in the United States [18].
In such cases, TCIM researchers must be aware of these restrictions and navigate GenAI chatbot usage in compliance with relevant policies and ethical standards to protect both research integrity and patient privacy. TCIM researchers also need to be aware of what GenAI chatbot uses are permitted by the potential journals (as well as the journals’ publishers) to which they plan to submit their work, to avoid encountering any issues once their manuscript is written up. Many journals [19] and publishers [20] have already developed guidelines on the responsible use of GenAI chatbots and emphasize the importance of human input to monitor the research process.
The Future of AI Chatbots and Prompt Engineering in TCIM Research
As GenAI chatbots become increasingly embedded in medical research workflows, it will be essential that TCIM researchers develop competency in prompt engineering. Integrating the teaching of this skill into TCIM research training programs and methodological frameworks will enhance the responsible and effective use of GenAI chatbots, ultimately advancing the quality and visibility of TCIM research. Future initiatives should focus on refining GenAI models to better accommodate the complexities of TCIM, promoting interdisciplinary collaborations between AI specialists and TCIM scholars, and establishing best practices for prompt design in evidence-based TCIM research [1]. Furthermore, ongoing improvements in GenAI chatbot capabilities, including domain-specific fine-tuning and enhanced contextual understanding, will enable more nuanced applications in TCIM research. Collaborative efforts between AI developers and TCIM researchers can help create customized GenAI models that better capture the intricacies of TCIM knowledge systems [1].
1. Training and educational resources for prompt engineering
To support TCIM researchers in acquiring prompt engineering skills, a growing number of accessible training options are available. Free online courses and tutorials from sources such as Coursera (https://www.coursera.org/), edX (https://www.edx.org/), LinkedIn Learning (https://www.linkedin.com/learning/), and Udemy (https://www.udemy.com/) offer foundational and advanced instruction in prompt engineering, often within broader AI literacy curricula. However, not all resources are of high quality, and researchers should use their discretion to ensure that the materials they select are credible, current, and relevant to their specific research needs. Organizations like OpenAI Academy (https://academy.openai.com/) and IBM (https://www.ibm.com/think/topics/prompt-engineering-guide) also provide hands-on guides and interactive documentation tailored to the use of large language models. In addition, reflecting the current AI era, academic and professional conferences on digital health and AI frequently include workshops or breakout sessions dedicated to prompt design. Incorporating such resources into TCIM research training programs will be crucial for equipping researchers with the practical competencies needed to use GenAI chatbots effectively and ethically.
Conclusion
Prompt engineering is a valuable skill for TCIM researchers who want to make better use of GenAI chatbots. Well-structured prompts can help maximize the chances that GenAI chatbot-generated content is as relevant, accurate, and aligned with scientific rigor as possible. As AI technologies continue to evolve, integrating prompt engineering into research workflows may improve efficiency and support high-quality TCIM research. Although GenAI chatbots are not a substitute for critical thinking, using them effectively through careful prompt engineering promises to be a valuable addition to the TCIM researcher’s toolkit.
Notes
Conflicts of Interest
The author has no competing interests to declare.
Author Use of AI Tools Statement
No AI tools were used in the writing of this article.
Funding
None.
Ethical Statement
This is an editorial article which does not require ethics approval.
Data Availability
There are no data or materials associated with this article.
