
Perspect Integr Med : Perspectives on Integrative Medicine

OPEN ACCESS
Editorial
Adapt or Lag Behind: Why Researchers in Traditional, Complementary, and Integrative Medicine Must Master Prompt Engineering in the Era of Artificial Intelligence
Jeremy Y. Ng1,2,3,4,*
Perspectives on Integrative Medicine 2025;4(3):127-130.
DOI: https://doi.org/10.56986/pim.2025.10.001
Published online: October 31, 2025

1Institute of General Practice and Interprofessional Care, University Hospital Tübingen, Tübingen, Germany

2Robert Bosch Center for Integrative Medicine and Health, Bosch Health Campus, Stuttgart, Germany

3Department of Health Research Methods, Evidence, and Impact, Faculty of Health Sciences, McMaster University, Hamilton, Canada

4School of Public Health, Faculty of Health, University of Technology Sydney, Sydney, Australia

*Corresponding author: Jeremy Y. Ng, Institute of General Practice and Interprofessional Care, University Hospital Tübingen, Osianderstr. 5, 72076 Tübingen, Germany, Email: ngjy2@mcmaster.ca, jeremy.ng@med.uni-tuebingen.de
• Received: April 18, 2025   • Revised: May 18, 2025   • Accepted: July 3, 2025

©2025 Jaseng Medical Foundation

This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).

The integration of generative artificial intelligence (GenAI) chatbots into medical research offers new opportunities to enhance efficiency and knowledge synthesis, particularly in traditional, complementary, and integrative medicine (TCIM). However, optimizing GenAI chatbot-generated outputs requires the strategic design of inputs to yield better quality responses. This skill is called prompt engineering.
This editorial explores the role of prompt engineering in TCIM research. It highlights applications in literature review and evidence synthesis, data extraction and analysis, grant and scholarly applications and manuscript preparation, and enhanced research transparency and reproducibility. By inputting precisely structured prompts, researchers can maximize the utility of GenAI chatbots for scientific rigor and contextual relevance. Despite the benefits of artificial intelligence (AI), challenges such as AI bias, misinterpretation of TCIM concepts, and the ethical considerations surrounding GenAI chatbot-assisted research necessitate careful monitoring. As AI technology advances, the timely incorporation of prompt engineering into TCIM research methodologies will enhance the accuracy and efficiency of future research and positively impact TCIM studies. TCIM researchers need to develop proficiency in prompt engineering to responsibly and effectively leverage GenAI chatbots in their work.
The rapid integration of AI into various domains of medical research has brought forth novel opportunities and challenges, and TCIM research potentially stands to benefit greatly from AI-driven tools, particularly GenAI chatbots. These chatbots, powered by large language models, offer an efficient means of processing vast amounts of literature, generating hypotheses, summarizing findings, and assisting in manuscript preparation [1]. However, optimizing their utility requires skilled prompt engineering, which may be described as the art and science of crafting precise inputs to yield GenAI chatbot-generated outputs that are more likely to be accurate and relevant (https://www.ibm.com/think/topics/prompt-engineering).
Prompt engineering plays a critical role in medical research by shaping how GenAI chatbots interpret and respond to queries [2]. In TCIM research, where the synthesis of diverse and often culturally nuanced knowledge is essential, one may argue that crafting effective prompts is vital [3]. The complexity of TCIM research, which encompasses various traditional healing systems, herbal medicine, mind-body practices, and integrative approaches, necessitates well-structured prompts for GenAI models to extract meaningful insights. Without careful prompt design, GenAI chatbot-generated responses risk being superficial, biased, or misaligned with the rigor required for scientific inquiry [3,4]. One of the fundamental challenges in applying GenAI chatbots in TCIM research is ensuring that outputs reflect high-quality, evidence-based information. Unlike conventional biomedical research, TCIM literature is often scattered across multiple databases, including non-indexed sources, historical texts, and region-specific publications [3]. Consequently, researchers must develop prompts that direct GenAI models to prioritize authoritative sources and critically evaluate the reliability of the retrieved information. For example, instead of a general query such as “What are the benefits of acupuncture?”, a more refined prompt such as “Summarize published systematic reviews on the effectiveness of acupuncture for chronic pain” can yield more targeted responses.
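The refinement described above can be made systematic. The sketch below (plain Python, no particular chatbot SDK assumed) shows how a structured prompt pins down the task, the evidence level, and the population that a vague query leaves open; the `build_prompt` helper is illustrative, not part of any library.

```python
def build_prompt(task: str, evidence_level: str, population: str) -> str:
    """Compose a structured prompt from explicit components."""
    return (
        f"{task} Restrict the answer to {evidence_level}. "
        f"Focus on {population}."
    )

vague = "What are the benefits of acupuncture?"
refined = build_prompt(
    task="Summarize the effectiveness of acupuncture for chronic pain.",
    evidence_level="published systematic reviews",
    population="adults with chronic pain",
)
print(refined)
```

Keeping the components explicit makes it easy to swap in a different condition or evidence level while preserving the structure that constrains the chatbot's answer.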
1. Literature review and evidence synthesis
Prompt engineering can facilitate the automation of systematic reviews and meta-analyses. By designing prompts that guide GenAI chatbots to extract, categorize, and synthesize data, researchers can expedite literature reviews while maintaining methodological rigor [5–8]. This approach is particularly useful for navigating heterogeneous TCIM studies, where differences in study designs, patient populations, and intervention protocols pose challenges for synthesis. For instance, a GenAI chatbot can be prompted to analyze randomized controlled trials on herbal medicine interventions for specific conditions, and highlight study methodologies, sample sizes, and primary outcomes. This process allows researchers to rapidly assess patterns in the literature and identify gaps that warrant further investigation.
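A hedged sketch of the RCT-analysis prompt just described. The field names and wording are assumptions for illustration; in a real review they would come from the review protocol.

```python
RCT_FIELDS = ["study methodology", "sample size", "primary outcomes"]

def rct_review_prompt(intervention: str, condition: str, fields: list[str]) -> str:
    """Build a prompt asking a chatbot to tabulate trial characteristics."""
    return (
        f"For each randomized controlled trial of {intervention} for "
        f"{condition} in the texts provided, report: {'; '.join(fields)}. "
        "If a field is not reported, state 'not reported' rather than guessing."
    )

prompt = rct_review_prompt("a herbal medicine intervention", "migraine", RCT_FIELDS)
print(prompt)
```

The closing instruction ("rather than guessing") is a small but useful guard against the chatbot fabricating values for missing fields.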
2. Data extraction and analysis
Beyond literature reviews, GenAI chatbots can assist in data extraction by identifying key variables, such as dosage regimens, patient demographics, and treatment durations. By using structured prompts, researchers can direct GenAI models to extract quantitative data from studies, thus enabling more efficient meta-analytical assessments [9,10]. In addition, GenAI-driven natural language processing tools can help identify trends in TCIM research, such as shifts in study designs or emerging areas of interest. A well-thought-out prompt could be: “Develop a list of key items to include in a data extraction form for a systematic review that aims to identify and summarize trends in the use of Ayurvedic treatments for diabetes in clinical trials, based on these uploaded articles published between 2010 and 2024,” while concurrently uploading the full-text copies of included articles. Depending on the study methodology, researchers can enhance accuracy by uploading relevant datasets, study protocols, or bibliographic information, thus allowing the GenAI chatbot to generate more precise and context-aware insights.
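An illustrative sketch of pairing the data-extraction prompt above with a machine-readable schema, so extracted values can feed directly into meta-analytic software. The schema and its field names are assumptions for this example, not a standard.

```python
import json

# Schema describing the variables the editorial mentions (dosage regimens,
# patient demographics, treatment durations); field names are illustrative.
schema = {
    "treatment_name": "string",
    "dosage_regimen": "string",
    "patient_demographics": {"sample_size": "integer", "mean_age_years": "number"},
    "treatment_duration_weeks": "number",
}

extraction_prompt = (
    "From each uploaded clinical trial of Ayurvedic treatments for diabetes "
    "published between 2010 and 2024, extract the fields below and return one "
    "JSON object per study. Use null for any field that is not reported.\n"
    + json.dumps(schema, indent=2)
)
print(extraction_prompt)
```

Requesting JSON rather than free text makes the chatbot's output easier to validate and to load into analysis tools, though every extracted value still needs human verification against the source article.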
3. Grant writing and manuscript preparation
GenAI chatbots are increasingly used to assist in drafting research proposals, grant applications, and manuscripts [11]. In TCIM research, where terminology and contextual sensitivity are critical, prompt engineering helps tailor GenAI chatbot-generated content to meet academic study and funding requirements. Researchers can refine GenAI chatbot-generated drafts by specifying journal style guidelines (e.g., citation format) and the inclusion of specific research frameworks. Assuming the use of GenAI chatbots is permitted by the granting agency, a researcher preparing a grant application might use the following prompt: “Generate a 500-word research proposal on the effects of Tai Chi on cardiovascular health which includes a background, objectives, methodology, and expected outcomes, in a format suitable for the ABC Foundation grant.” To further refine the output, the researcher could provide the GenAI chatbot with publicly available information about the grant application, such as funding guidelines, eligibility criteria, or specific formatting requirements. Supplying this additional context helps the GenAI chatbot generate a more tailored and coherent proposal which aligns closely with the expectations of the funding body.
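A sketch of folding publicly available funder context into the proposal prompt above. “ABC Foundation” follows the editorial's own placeholder; the guidelines string would be pasted from the funder's documents, and the helper function is hypothetical.

```python
def grant_prompt(topic: str, word_limit: int, sections: list[str],
                 funder: str, guidelines: str) -> str:
    """Compose a proposal-drafting prompt with explicit structure and context."""
    return (
        f"Generate a {word_limit}-word research proposal on {topic}, "
        f"structured as: {', '.join(sections)}, in a format suitable for "
        f"the {funder} grant. Follow these funder guidelines:\n{guidelines}"
    )

proposal_prompt = grant_prompt(
    topic="the effects of Tai Chi on cardiovascular health",
    word_limit=500,
    sections=["background", "objectives", "methodology", "expected outcomes"],
    funder="ABC Foundation",
    guidelines="[paste the funder's publicly available formatting rules here]",
)
print(proposal_prompt)
```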
4. Enhancing research transparency and reproducibility
An important aspect of GenAI chatbot-assisted research is ensuring transparency and reproducibility. Prompt engineering can be leveraged to improve documentation practices by generating standardized reporting templates and facilitating the consistent presentation of study findings [12]. By refining prompts, researchers can ensure that GenAI chatbots generate outputs that align with established reporting guidelines, such as the Consolidated Standards of Reporting Trials statement for clinical trials or the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement for systematic reviews [13]. For example, a structured prompt for GenAI chatbot-generated reporting might be, “create a structured abstract summarizing a systematic review on mindfulness-based interventions for anxiety disorders, which adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 for Abstracts Checklist.” The researcher may also consider uploading a blank copy of the reporting guideline checklist or pasting in the items in the checklist to improve the accuracy of the GenAI chatbot-generated response.
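A sketch of the checklist-anchored prompt just described. Only the first items of the PRISMA 2020 for Abstracts checklist are paraphrased here; in practice, as the editorial suggests, the researcher would paste the full published checklist verbatim.

```python
PRISMA_ABSTRACT_ITEMS = [
    "Title: identify the report as a systematic review",
    "Objectives: provide an explicit statement of the main objective(s)",
    "Eligibility criteria: specify the inclusion and exclusion criteria",
    # ...remaining items pasted from the published checklist...
]

checklist_prompt = (
    "Create a structured abstract summarizing a systematic review on "
    "mindfulness-based interventions for anxiety disorders. Address every "
    "item of the checklist below:\n- " + "\n- ".join(PRISMA_ABSTRACT_ITEMS)
)
print(checklist_prompt)
```

Enumerating the checklist inside the prompt, rather than naming the guideline alone, gives the chatbot the exact items to satisfy and gives the researcher a line-by-line basis for verifying the output.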
The potential of GenAI chatbots is profound; however, their application in TCIM research is not without risks. Over-reliance on GenAI chatbot-generated content without proper verification can lead to the dissemination and propagation of inaccuracies, particularly in a field where scientific validation varies across modalities [14]. In addition, the inherent biases present in GenAI models, which are trained predominantly on mainstream biomedical literature, may result in the underrepresentation of TCIM perspectives. Therefore, researchers must combine prompt engineering with critical appraisal skills to mitigate these risks and uphold research integrity. Furthermore, the potential for GenAI chatbot-generated text to introduce errors or misinterpretations when summarizing complex TCIM concepts presents another challenge. For instance, some GenAI models may conflate distinct TCIM practices or oversimplify mechanisms of action. Researchers must therefore refine prompts iteratively and cross-check GenAI chatbot-generated content with authoritative sources to ensure accuracy. Moreover, ethical considerations surrounding GenAI chatbot use in research warrant attention. Issues such as authorship attribution, data privacy, and the transparency of GenAI chatbot-generated contributions must be addressed [15,16]. Additionally, the use of GenAI chatbots may be prohibited in certain research contexts, particularly when funding bodies, academic institutions, or regulatory bodies impose restrictions on GenAI chatbot-generated content. More importantly, the use of GenAI chatbots to process sensitive patient data, particularly without proper anonymization or consent, may violate ethical guidelines or data protection regulations (e.g., the General Data Protection Regulation in Europe [17] or the Health Insurance Portability and Accountability Act in the United States [18]).
In such cases, TCIM researchers must be aware of these restrictions and use GenAI chatbots in compliance with relevant policies and ethical standards to protect both research integrity and patient privacy. TCIM researchers also need to know which GenAI chatbot uses are permitted by the journals (as well as the journals’ publishers) to which they plan to submit their work, to avoid issues once the manuscript is written. Many journals [19] and publishers [20] have already developed guidelines on the responsible use of GenAI chatbots, emphasizing the importance of human oversight throughout the research process.
As GenAI chatbots become increasingly embedded in medical research workflows, it will be increasingly important that TCIM researchers develop competency in prompt engineering. Integrating the teaching of this skill into TCIM research training programs and methodological frameworks will enhance the responsible and effective use of GenAI chatbots, ultimately advancing the quality and visibility of TCIM research. Future initiatives should focus on refining GenAI models to better accommodate the complexities of TCIM, promoting interdisciplinary collaborations between AI specialists and TCIM scholars, and establishing best practices for prompt design in evidence-based TCIM research [1]. Furthermore, ongoing improvements in GenAI chatbot capabilities, including domain-specific fine-tuning and enhanced contextual understanding, will enable more nuanced applications in TCIM research. Collaborative efforts between AI developers and TCIM researchers can help create customized GenAI models that better capture the intricacies of TCIM knowledge systems [1].
1. Training and educational resources for prompt engineering
To support TCIM researchers in acquiring prompt engineering skills, a growing number of accessible training options are available. Free online courses and tutorials from sources such as Coursera (https://www.coursera.org/), edX (https://www.edx.org/), LinkedIn Learning (https://www.linkedin.com/learning/), and Udemy (https://www.udemy.com/) offer foundational and advanced instruction in prompt engineering, often within broader AI literacy curricula. However, not all resources are of high quality, and researchers should use their discretion to ensure that the materials they select are credible, current, and relevant to their specific research needs. Organizations like OpenAI Academy (https://academy.openai.com/) and IBM (https://www.ibm.com/think/topics/prompt-engineering-guide) also provide hands-on guides and interactive documentation tailored to the use of large language models. In addition, reflecting the current AI era, academic and professional conferences on digital health and AI frequently include workshops or breakout sessions dedicated to prompt design. Incorporating such resources into TCIM research training programs will be crucial for equipping researchers with the practical competencies needed to use GenAI chatbots effectively and ethically.
Prompt engineering is a valuable skill for TCIM researchers who want to make better use of GenAI chatbots. Well-structured prompts can help maximize the chances that GenAI chatbot-generated content is as relevant, accurate, and aligned with scientific rigor as possible. As AI technologies continue to evolve, integrating prompt engineering into research workflows may improve efficiency and support high-quality TCIM research. Although GenAI chatbots are not a substitute for critical thinking, using them effectively through careful prompt engineering promises to be a valuable addition to the TCIM researcher’s toolkit.

Conflicts of Interest

The author has no competing interests to declare.

Author Use of AI Tools Statement

No AI tools were used in the writing of this article.

Funding

None.

Ethical Statement

This is an editorial article which does not require ethics approval.

There are no data or materials associated with this article.
  • [1] Ng JY, Cramer H, Lee MS, Moher D. Traditional, complementary, and integrative medicine and artificial intelligence: novel opportunities in healthcare. Integr Med Res 2024;13(1):101024.
  • [2] Wang J, Shi E, Yu S, Wu Z, Ma C, Dai H, et al. Prompt engineering for healthcare: methodologies and applications [Preprint]. arXiv:2304.14670; 2023 Apr 28. Available from: https://doi.org/10.48550/arXiv.2304.14670
  • [3] Ng JY. Prompt engineering for generative artificial intelligence chatbots in health research: a practical guide for traditional, complementary, and integrative medicine researchers. Integr Med Res 2025;14(4):101222.
  • [4] Xue J, Wang YC, Wei C, Liu X, Woo J, Kuo CC. Bias and fairness in chatbots: an overview. APSIPA Trans Signal Inf Process 2024;13(2):e102.
  • [5] Ng JY, Dhawan T, Dogadova E, Taghi-Zada Z, Vacca A, Wieland LS, et al. Operational definition of complementary, alternative, and integrative medicine derived from a systematic search. BMC Complement Med Ther 2022;22:104.
  • [6] Nordmann K, Sauter S, Stein M, Aigner J, Redlich MC, Schaller M, et al. Evaluating the performance of artificial intelligence in supporting evidence synthesis: a blinded comparison between chatbots and humans. BMC Med Res Methodol 2025;25:150.
  • [7] Li M, Sun J, Tan X. Evaluating the effectiveness of large language models in abstract screening: a comparative analysis. Syst Rev 2024;13:219.
  • [8] Colangelo MT, Guizzardi S, Meleti M, Calciolari E, Galli C. How to write effective prompts for screening biomedical literature using large language models. BioMedInformatics 2025;5(1):15.
  • [9] Ge L, Agrawal R, Singer M, Kannapiran P, De Castro Molina JA, Teow KL, et al. Leveraging artificial intelligence to enhance systematic reviews in health research: advanced tools and challenges. Syst Rev 2024;13:269.
  • [10] Marshall IJ, Wallace BC. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Syst Rev 2019;8:163.
  • [11] Ng JY, Maduranayagam SG, Suthakar N, Li A, Lokker C, Iorio A, et al. Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: an international cross-sectional survey. Lancet Digit Health 2025;7(1):e94–102.
  • [12] Fukataki Y, Wakako H, Naoki N, Ito YM. Developing artificial intelligence tools for institutional review board pre-review: a pilot study on ChatGPT’s accuracy and reproducibility. PLoS Digit Health 2025;4(6):e0000695.
  • [13] Wrightson JG, Blazey P, Moher D, Khan KM, Ardern CL. GPT for RCTs? Using AI to determine adherence to clinical trial reporting guidelines. BMJ Open 2025;15(3):e088735.
  • [14] Jeyaraman M, Ramasubramanian S, Balaji S, Jeyaraman N, Nallakumarasamy A, Sharma S. ChatGPT in action: harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research. World J Methodol 2023;13(4):170.
  • [15] Kasani PH, Cho KH, Jang JW, Yun CH. Influence of artificial intelligence and chatbots on research integrity and publication ethics. Sci Ed 2024;11(1):12–25.
  • [16] Moffatt B, Hall A. Is AI my co-author? The ethics of using artificial intelligence in scientific publishing. Account Res 2024;1–7. Epub 2024 Aug 7.
  • [17] Lagioia F, Sartor G. The impact of the General Data Protection Regulation on artificial intelligence. European Parliament, Directorate-General for Parliamentary Research Services. Publications Office, 2020.
  • [18] Li J. Security implications of AI chatbots in health care. J Med Internet Res 2023;25:e47551.
  • [19] Lund BD, Naheem KT. Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals. Learn Publ 2023;36(2):1582.
  • [20] Bhavsar D, Duffy L, Jo H, Lokker C, Haynes RB, Iorio A, et al. Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit. Res Integr Peer Rev 2025;10(1):1.
