A pioneering faculty training program for artificial intelligence in Ukrainian medical education
Authors: Oleksandr Bulbuk¹*, Liubomyr Havryshchuk², Roman Lisovsky³, Olena Bulbuk⁴
Affiliations: ¹ Vice Rector, Ivano-Frankivsk National Medical University, Ivano-Frankivsk, Ukraine
² Department of Chemistry, Pharmaceutical Analysis and Postgraduate Education, Ivano-Frankivsk National Medical University, Ivano-Frankivsk, Ukraine
³ Department of Medical Informatics, Medical and Biological Physics, Ivano-Frankivsk National Medical University, Ivano-Frankivsk, Ukraine
⁴ Department of Stomatology of ESIPE, Ivano-Frankivsk National Medical University, Ivano-Frankivsk, Ukraine
Corresponding author: Oleksandr Bulbuk, obulbuk@ifnmu.edu.ua
Abstract
Background
The global higher education landscape is undergoing a profound transition due to the convergence of digital technologies and pedagogical theory. In medical education, large language models now demonstrate clinical reasoning capabilities comparable to those of medical students, yet faculty training remains highly inconsistent globally. In Ukraine, this challenge is compounded by wartime pressures and a high rate of «bottom-up» AI adoption (84%) among students, which is often plagued by ethical uncertainty and a lack of structured guidance. There is an urgent need to shift the educator’s role from a «transmitter of absolute truths» to a «mediator of learning» within the emerging clinician-AI-patient triad.
Methods
We implemented a multi-component training program at Ivano-Frankivsk National Medical University during the 2024–2025 academic year. The program included practical workshops on AI fundamentals and medical education-specific tools, individual consultations for discipline-specific AI adaptation, the development of methodological materials, and the creation of a community of practice. We tracked quantitative metrics, including the number of trained faculty, program coverage, and the development of resources, alongside qualitative assessments of AI integration into teaching practices.
Results
Between September 2024 and March 2025, 211 faculty members (29% of the total 732 scientific and pedagogical staff members) completed the foundational training, with participation exceeding 90% in departments such as Anatomy and Physiology. Demographic analysis revealed strong cross-generational engagement, with 34% of participants being over 50 years old. To address the resulting regulatory and ethical gaps, Ivano-Frankivsk National Medical University formally adopted a comprehensive «Policy on the Use of Artificial Intelligence Systems» in May 2025, providing a binding legal and ethical framework for AI integration. Key outputs included Ukraine’s first comprehensive methodological guide for AI in medical education, a library of 142 documented use cases, and the deployment of two custom AI assistants that achieved a user satisfaction rating of 4.3 out of 5 across 3,200 interactions. Faculty reported successful integration of AI into lecture synthesis, case-based learning, and research supervision.
Conclusions
This study demonstrates the feasibility of large-scale faculty AI literacy initiatives even in resource-constrained and socially disrupted contexts. Systematic training, coupled with a formal institutional policy, facilitates the fundamental transformation of medical education required for the 21st century, ensuring educators can effectively guide students in AI-augmented healthcare practice. Investment in faculty digital competencies is essential for maintaining institutional competitiveness and academic rigor in the face of rapidly advancing machine intelligence.
Keywords:
artificial intelligence
medical education
faculty development
Ukraine
digital literacy
prompt engineering
institutional policy
large language models
pedagogy
machine learning
Background
The global higher education landscape is currently undergoing a period of profound transition, marked by the rapid convergence of digital technologies and pedagogical theories. Within the sphere of medical education, this transformation is particularly acute, as the maturation of artificial intelligence (AI) necessitates a comprehensive reevaluation of the competencies required of both academic faculty and their students. AI represents one of the most disruptive innovations in healthcare, attracting intense focus from physicians, researchers, and educational leaders due to its ability to manage massive quantities of unstructured data and solve complex clinical problems [1–3]. This integration is no longer a distant forecast but a present-day reality, with recent data indicating that 2 in 3 physicians are already using health AI in their clinical practice, marking a 78% increase from 2023 [4, 5].
This technological shift demands a fundamental reconceptualization of how medical knowledge is taught, assessed, and applied [6, 7]. Large language models (LLMs) such as GPT-4 have demonstrated remarkable capabilities in clinical reasoning, differential diagnosis, and data interpretation [5, 8]. Evidence shows that LLMs can perform at levels comparable to or exceeding medical students and residents on standardized medical examinations, such as the USMLE [6–9]. However, the integration of AI into medical training remains highly inconsistent [10]. A 2024 scoping review revealed a critical gap: while 71.8% of German medical schools offered AI-related courses, the vast majority remained elective or extracurricular, and 85% of Canadian medical students reported no formal AI education within their core curriculum [11, 12]. Furthermore, a Stanford Medicine survey found that 44% of physicians and 23% of medical students believed their education had not adequately prepared them for emerging healthcare technologies [13].
Ukraine’s medical education system faces additional, unique pressures that have simultaneously accelerated digital adoption and tightened resource constraints [14]. The dual impact of the COVID-19 pandemic and the full-scale military aggression since 2022 compelled a massive shift toward online and blended learning models, requiring academic faculty to attain unprecedented digital literacy in a condensed period. National strategies identify digital transformation as an operational goal essential for ensuring the international competitiveness of Ukrainian graduates. Despite these strategic aims, a significant gap exists between policy and binding regulation. To date, no official national regulations have defined the legal boundaries of AI use in academic research or clinical training in Ukraine. The «White Paper» issued by the Ministry of Digital Transformation provides general guidance but lacks enforceable rules, leaving medical universities to navigate ethical complexities independently [8]. To address this regulatory gap, Ivano-Frankivsk National Medical University formally adopted a comprehensive «Policy on the Use of Artificial Intelligence Systems» in May 2025 [15]. This policy establishes a binding legal and ethical framework for integrating AI in educational, scientific, and administrative activities, aligning with national recommendations from the Ministry of Digital Transformation and the Ministry of Education and Science of Ukraine.
The urgency of formal training is underscored by the high rate of «bottom-up» adoption among Ukrainian learners. Research at Bogomolets National Medical University indicates that 84% of medical PhD students and interns utilize AI tools for academic purposes. However, this adoption is plagued by ethical uncertainty and a lack of structured guidance: 51% of participants in the same study admitted to cheating on tests previously, and 36% of students viewed the use of AI itself as a form of misconduct [8, 16]. There is a critical danger that students may stop thinking critically or fall victim to «automation bias», accepting AI outputs without question [5]. Furthermore, the risk of AI «hallucinations» poses a direct threat to patient safety if clinical decisions are based on unverified AI outputs [3, 4, 17].
Addressing this bottleneck requires a shift in the educator’s role from a «transmitter of absolute truths» to a «mediator of learning» and knowledge curator [18, 19]. Systematic faculty training is essential for creating institutional capacity for AI-integrated curriculum development and establishing robust ethical frameworks. Academic faculty must be prepared to navigate the shift from the traditional clinician-patient dyad to the far more complex, ethically and emotionally demanding, clinician-AI-patient triad [20]. By treating AI training as a «humanitarian category» that encompasses cultural reproduction and professional identity, rather than a purely technical skill, institutions can foster unique human abilities, such as empathy and ethical judgment, which complement machine capabilities [21].
Against this backdrop, Ivano-Frankivsk National Medical University (IFNMU) launched Ukraine’s first comprehensive, systematic program to train medical faculty in AI technologies. This initiative was designed as a foundational element of institutional digital transformation rather than an isolated intervention. The program aimed to build faculty digital literacy, create capacity for AI-integrated assignments, and demonstrate a scalable model for other institutions seeking to maintain academic rigor and ethical standards in an increasingly AI-driven healthcare environment. This article describes the development, implementation, and outcomes of this pioneering program, offering practical insights for the future of medical pedagogy in Ukraine and beyond.
Methods
Program Design and Development
The faculty training program at Ivano-Frankivsk National Medical University was developed through a collaborative, multi-disciplinary process involving university administration, information technology specialists, educational methodologists, and faculty representatives from both clinical and theoretical departments. The development cycle occurred in three distinct phases during the spring and summer of 2024 to ensure institutional readiness for the 2024–2025 academic year. Phase 1 focused on a comprehensive needs assessment conducted across all university departments to determine baseline digital literacy, identify perceived barriers to adoption, and catalog discipline-specific requirements. This phase involved structured focus group discussions with twenty-four department heads to align the program with institutional priorities and address faculty concerns about the potential replacement of traditional teaching roles by automation.
Phase 2 involved designing a modular curriculum based on the findings of the assessment. The content was scaffolded into three competency levels: foundational AI literacy for all staff, intermediate instructional skills for active users, and advanced capabilities for faculty leading AI-integrated course development. Materials were developed in Ukrainian, with relevant English terminology integrated to reflect the global technological landscape. Phase 3 consisted of pilot testing with twenty volunteer faculty members in August 2024 (Fig. 1). Feedback from this pilot led to significant modifications, specifically an increased emphasis on hands-on practice over theoretical content and the development of troubleshooting resources for specialized medical applications.
Instructional Components and Implementation
The final program structure consisted of four interconnected components designed to support academic faculty through various stages of adoption. Component 1 consisted of structured, practical training workshops, each lasting 3 to 4 hours. These in-person sessions covered AI fundamentals, large language models (LLMs), and specific tools for generating questions, clinical case studies, and literature reviews. Component 2 offered individual consultations, lasting 60–90 minutes, available both in person and via video conference. These sessions allowed facilitators to address discipline-specific challenges, such as developing AI-integrated assignments or designing assessment strategies that mitigate academic dishonesty.
Component 3 focused on the collaborative development of methodological materials, including a comprehensive institutional guide approved by the Academic Council, twelve discipline-specific prompt engineering guides, and eighteen assessment rubrics. Component 4 established a community of practice for ongoing peer support, utilizing dedicated communication channels and monthly meetups to share successful use cases and innovative applications. The program employed a voluntary but highly encouraged recruitment strategy, involving outreach from department heads and testimonials from early adopters to foster a culture of sustainable transformation rather than mandated compliance.
Technological Ecosystem and Tools
The training program integrated a diverse range of AI platforms, categorized by their primary functional application in medical education and research. For general text generation and high-quality dialogue, faculty utilized OpenAI’s ChatGPT, which reached 100 million users within two months of launch, as well as Microsoft Copilot for its integration with office productivity suites. Anthropic’s Claude was emphasized for ethically guided interactions, as its development is governed by a specific «constitution» of safety principles. For real-time information retrieval, the program included xAI Grok, noted for its integration with current data from X (formerly Twitter), and Perplexity AI, which is favored in academic settings for its ability to provide direct citations for its outputs.
For scientific research and systematic reviews, academic faculty were trained on Elicit, an AI research assistant that identifies papers based on meaning rather than keywords, and on SciSpace, which extracts key theses, methods, and conclusions from academic articles. The program also introduced Connected Papers to visualize citation graphs. To organize custom knowledge bases, faculty employed NotebookLM, which analyzes personal research files to provide structured summaries.
Methodological and visual support tools included Napkin AI, which generates diagrams and infographics from raw text. For creating structured presentations, the curriculum included Gamma, which enables the generation of interactive slides from textual descriptions, and Genspark, which provides automated structural content generation. Language and communication support relied on DeepL for neural machine translation and Grammarly AI for refining the grammar, style, and tone of English-language academic manuscripts.
Prompt Engineering and Methodology
A core methodology for «prompt engineering» was established to help faculty overcome the «blank page problem» and improve the accuracy of AI outputs. Training followed a multi-stage logic refinement process:
Step 1: Formulating a simple request.
Step 2: Adding context and a specific target audience.
Step 3: Specifying the response format.
Step 4: Adding desired characteristics, such as the use of professional medical terminology.
Step 5: Refining the style for clarity and asking the AI to provide verifiable scientific references to reduce the risk of «hallucinations».
Faculty were also instructed in the use of specialized personas, such as teaching the AI to «Adopt the role of a Professor of Biology» to ensure the AI utilizes a specialized vocabulary from its database. To stimulate creativity, educational gaming via tools like Secret Prompter was used to help faculty experiment with unique generative instructions.
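The article describes the refinement process in prose only; the following is an illustrative sketch, not the program's actual training materials, showing how the five refinement steps and the persona technique compose into a single layered prompt. All function and parameter names are hypothetical.

```python
# Hypothetical helper composing a prompt through the five-step refinement
# process: simple request -> context/audience -> format -> characteristics
# -> references, plus the optional persona technique.

def build_prompt(task, context=None, audience=None, fmt=None,
                 characteristics=None, require_references=False, persona=None):
    """Assemble a layered prompt; each optional argument corresponds to
    one refinement step from the training methodology."""
    parts = []
    if persona:
        parts.append(f"Adopt the role of {persona}.")  # persona technique
    parts.append(task)                                 # Step 1: simple request
    if context:
        parts.append(f"Context: {context}")            # Step 2: context
    if audience:
        parts.append(f"Target audience: {audience}")   # Step 2: audience
    if fmt:
        parts.append(f"Respond in this format: {fmt}") # Step 3: format
    if characteristics:
        parts.append(f"Requirements: {characteristics}")  # Step 4: characteristics
    if require_references:
        parts.append("Cite verifiable scientific references "
                     "for every factual claim.")       # Step 5: reduce hallucinations
    return "\n".join(parts)

prompt = build_prompt(
    task="Create three multiple-choice questions on beta-blocker pharmacology.",
    context="A pharmacology practical session for third-year medical students.",
    audience="third-year medical students",
    fmt="numbered list, four options per question, correct answer marked",
    characteristics="use professional medical terminology",
    require_references=True,
    persona="a Professor of Pharmacology",
)
print(prompt)
```

A fully refined prompt of this kind replaces the vague single-sentence request of Step 1 with an instruction the model can satisfy in one pass, which is the practical point of the refinement exercise.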
Data Collection and Analysis
The study utilized a mixed-methods approach to evaluate program outcomes. Quantitative metrics included the tracking of faculty completion rates across departments, the number of resources developed (including a repository of 142 use cases), and student satisfaction ratings from deployed AI assistants. Qualitative data collection involved semi-structured interviews with thirty-five program participants representing diverse ranks (Assistant, Associate, and Full Professors) and departments, conducted using an interview guide developed specifically for this study (see Additional File 1). The guide covered seven main areas: background and prior experience with digital technologies, training program experience and satisfaction, implementation and integration of AI tools into teaching practice, ethical considerations and concerns, perceived impact and value of the program, recommendations for future directions, and opportunities for additional comments. The guide was developed by the research team in Ukrainian (the primary language of instruction at IFNMU) and pilot-tested with three faculty members before full implementation. Interviews lasted 30–45 minutes and were conducted either in person or via video conference, according to participant preference. All interviews were audio-recorded with permission and transcribed verbatim for analysis. Additionally, feedback was collected from 847 students across AI-enhanced courses using existing institutional evaluation mechanisms. Qualitative data were analyzed through thematic analysis conducted by a team of educational researchers to identify recurring patterns in faculty experiences and implementation challenges.
Ethical Considerations
The study was conducted as a quality improvement initiative within routine institutional activities and, as such, was classified as an educational innovation not requiring formal human subjects research ethics approval. All faculty participation was voluntary. Training explicitly addressed issues of data privacy, algorithmic bias, and the appropriate attribution of AI-assisted work. The program's ethical framework aligned with the university’s definitions of academic integrity. In accordance with the 2025 Policy, strict adherence to GDPR was mandated, including a total prohibition on uploading sensitive medical data to public AI platforms. The «human-in-the-loop» principle was established as a core requirement for all AI-assisted educational content. Participants were taught to evaluate AI-generated content critically and were required to anonymize any student data used during practice sessions.
Results
Program Participation and Institutional Reach
Between September 2024 and March 2025, the AI training initiative achieved significant institutional penetration, with 211 faculty members completing the foundational workshop component. This figure represents 29% of the total of 732 scientific and pedagogical staff members at Ivano-Frankivsk National Medical University. While the institutional average was 29%, engagement was notably higher in specific clinical and theoretical departments, with participation rates exceeding 90% in the departments of Anatomy and Physiology (92%) and showing robust involvement in Pathology (87%), Internal Medicine (85%), and Pediatrics (86%).
The depth of faculty engagement extended beyond initial attendance. Of those who completed the foundational training, 127 faculty members (60%) proceeded to individual consultations for discipline-specific guidance. Furthermore, 89 faculty members (42%) actively contributed to the development of institutional methodological materials. In addition, the established Community of Practice grew to include 156 active members who regularly exchange insights and queries via dedicated university channels. Demographic analysis dispelled initial assumptions regarding age-related resistance to digital transformation. While assistant professors comprised 42% of the participants, associate professors (36%) and full professors (22%) were heavily represented. Additionally, 34% of all participants were over the age of 50 (Fig. 2).
Resources and Digital Infrastructure Developed
The program yielded a substantial repository of institutional knowledge and student-facing infrastructure. Most significantly, the initiative produced Ukraine’s first comprehensive methodological guidelines for the use of AI in medical education, a detailed document formally approved by the IFNMU Academic Council. This document served as the foundational precursor to the official university-wide AI Policy. The resulting policy now mandates that any AI-assisted creation of educational materials must involve mandatory human editing and expert verification to ensure factual accuracy and alignment with modern medical standards.
Beyond documentation, the program facilitated the deployment of two custom AI assistants powered by large language model technology and integrated with university-specific knowledge bases. One assistant was designed to navigate prospective students through admission requirements, while the other provides study planning and resource navigation for current students. In the first three months of deployment, these tools recorded over 3,200 interactions, achieving a user satisfaction rating of 4.3 out of 5. Faculty contributors also established a Use Case Library containing 142 documented examples of AI integration, including the generation of clinical case scenarios, automated feedback for student reflective writing, and the creation of virtual patient simulations. To support technical competency, the program also disseminated twelve discipline-specific prompt engineering guides and eighteen assessment rubrics for evaluating student work produced with the aid of AI.
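The article does not describe how the assistants were implemented. One common pattern for grounding an LLM assistant in an institution-specific knowledge base is to retrieve the most relevant passage first and pass it to the model as context; the minimal sketch below illustrates only that retrieval step, with invented example data and hypothetical function names.

```python
# Minimal sketch of knowledge-base grounding (retrieval step only).
# A production system would use embedding-based semantic search; simple
# word-overlap scoring is shown here to keep the example self-contained.

def tokenize(text):
    """Lowercase word set, with common punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, knowledge_base):
    """Return the knowledge-base entry sharing the most words with the query."""
    query_words = tokenize(query)
    return max(knowledge_base, key=lambda entry: len(query_words & tokenize(entry)))

# Invented stand-in for a university-specific knowledge base.
knowledge_base = [
    "Admission to the university requires a secondary-school certificate "
    "and an entrance examination.",
    "The academic year is divided into autumn and spring semesters.",
    "The library is open to all enrolled students on weekdays.",
]

context = retrieve("What are the requirements for admission to the university?",
                   knowledge_base)
# The retrieved passage would then be prepended to the user's question in the
# prompt sent to the language model, keeping answers grounded in verified
# institutional data rather than the model's general training corpus.
print(context)
```

Grounding responses this way is what allows an assistant to answer admission and study-planning questions from verifiable institutional data, consistent with the «human-in-the-loop» verification principle described later in the article.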
Integration into Pedagogy and Research Supervision
Academic faculty reported immediate application of AI tools across various teaching modalities. In lecture-based courses, faculty utilized generative models to synthesize recent research findings and create pre-lecture reading questions. For practical sessions, AI was employed to design case-based learning (CBL) scenarios and differential diagnosis exercises. In the context of research supervision, faculty participants integrated tools such as SciSpace and Elicit to automate literature searches and synthesize key theses from academic articles. This allowed supervisors to provide more comprehensive feedback to larger cohorts of students without compromising the quality of academic oversight.
Assessment practices underwent a fundamental evolution to maintain academic integrity while acknowledging the utility of AI. Faculty implemented strategies such as:
Requiring students to submit AI interaction transcripts alongside their final assignments.
Designing tasks that require students to evaluate and correct AI-generated clinical content, thereby assessing their critical reasoning.
Incorporating oral examinations where students must defend AI-assisted work.
In accordance with Article 2.2.2 of the university policy, students are now formally required to include a mandatory citation of AI usage (naming the system and nature of use) as a separate item at the end of every written work to ensure transparency and academic integrity.
Student Reception and Outcomes
Preliminary data from 847 students across twelve courses with extensive AI integration revealed a positive shift in learner satisfaction. Students reported that AI-enhanced formats provided more rapid feedback on formative assignments and granted access to a wider variety of practice materials. Informal feedback suggested that students valued the faculty’s willingness to experiment, viewing it as a model for the continuous learning and adaptation required in modern medical practice. While some students expressed concerns about overdependence on these tools, the majority (92.8%, based on regional studies) indicated a strong willingness to learn how to integrate AI into their future clinical careers.
Faculty Experiences and Challenges
Qualitative thematic analysis of interviews with thirty-five participants revealed a progression from initial anxiety to professional enthusiasm. A pharmacology associate professor noted that AI «gave me more time to focus on student interactions by managing routine tasks». In contrast, a surgery professor emphasized that the tool «amplifies my expertise rather than replacing my judgment». However, the results also highlighted persistent challenges, most notably the rapid pace of technological change, which necessitated continuous curriculum updates. Faculty observed that generic AI performance often faltered in specialized medical domains, requiring time-intensive manual verification of factual accuracy to mitigate the risk of «hallucinations» in clinical data.
Discussion
The implementation of the AI training program at Ivano-Frankivsk National Medical University addresses a critical global bottleneck in medical education: the shortage of faculty expertise in utilizing advanced digital technologies. While international scoping reviews indicate that over 70% of German medical schools offer AI-related content, these initiatives are often elective, leaving 85% of Canadian medical students, for example, without formal AI training in their core curriculum. Our results, demonstrating a 29% participation rate among the university’s total faculty (211 out of 732 scientific and pedagogical staff members), suggest that even as a voluntary initiative, systematic training can achieve significant institutional reach and overcome the typical pattern where digital adoption clusters primarily among early adopters.
This high engagement is attributed to three primary factors: a supportive institutional leadership that allocated resources for digital infrastructure, a modular curriculum scaffolded into three competency levels, and the creation of a Community of Practice that reduced professional isolation. By treating AI training as a humanitarian and philosophical category—rather than a purely technical skill—the program allowed educators to transition from a «transmitter of absolute truths» to a «mediator of learning» and curator of knowledge. This shift is essential for navigating the evolving clinician-AI-patient triad, where machine intelligence complements rather than replaces human expertise.
The diversity of tools integrated into the IFNMU program highlights the extensive applicability of AI across various medical disciplines. Beyond general language models like ChatGPT, which reached 100 million users in two months, the program utilized specialized research tools. Elicit and SciSpace were used to automate literature reviews for faculty. The introduction of Perplexity AI was particularly significant for academic research due to its ability to provide direct citations, which mitigates the risk of AI «hallucinations».
A core methodological contribution of this study is the formalization of prompt engineering within medical pedagogy. The «blank page problem» was addressed through a structured five-step logic refinement process, which moved faculty from simple requests to complex, context-rich instructions. By using specialized personas, such as instructing the AI to «Adopt the role of a Professor of Biology», educators were able to ensure that generative models utilized the appropriate professional vocabulary from their vast databases. Educational gaming, facilitated by tools like Secret Prompter, further stimulated creativity, demonstrating that the process of learning AI collaboration can be engaging and non-threatening.
The tension between AI-driven efficiency and the preservation of critical thinking remained a central theme in faculty discussions. Research within Ukraine indicates that while 84% of medical students use AI, 36% still view it as a form of academic misconduct, and over half admit to cheating. The IFNMU model addresses this by discouraging the prohibition of AI; instead, faculty are actively experimenting with assessment designs that require students to demonstrate understanding, such as submitting AI interaction transcripts or correcting AI-generated clinical errors.
Ensuring factual accuracy in high-stakes medical contexts requires particular vigilance. The program emphasized that AI should be viewed as a «cognitive instrument» requiring human verification. This «human-in-the-loop» principle is codified in the IFNMU policy, which prohibits the total replacement of teaching expertise with automated content and mandates human control over key administrative and clinical decisions. This is reflected in the institutional deployment of two custom AI assistants, which recorded over 3,200 interactions with a 4.3 out of 5 satisfaction rating. These assistants utilize university-specific knowledge bases, ensuring that responses regarding admissions and study planning are grounded in verifiable institutional data.
The development of this program occurred against the backdrop of the ongoing conflict in Ukraine since 2022. Paradoxically, these crisis conditions may have accelerated innovation. Faculty and administrators, already adapted to the major disruptions caused by the COVID-19 pandemic and subsequent military aggression, demonstrated a reduced resistance to radical change. The existential stakes of the conflict have made the value of educational transformation more apparent, as digital competencies are seen as vital for the international competitiveness and future resilience of Ukrainian medical graduates.
Limitations and Future Directions
Several limitations must be acknowledged. This is a single-institution study, and the generalizability of the IFNMU model to other cultural or political contexts requires careful consideration. Furthermore, our outcome data are primarily short-term, focusing on the first academic year of implementation. While we documented 142 use cases, long-term follow-up is necessary to assess the sustained impact of faculty AI literacy on actual student learning outcomes, such as clinical reasoning and knowledge acquisition. Future research should also examine the effect of AI training on underrepresented or disadvantaged groups within the faculty population and investigate optimal strategies for teaching AI ethics in varied clinical environments.
Conclusions
This study documents the successful implementation of Ukraine’s first comprehensive program to train medical faculty in AI technologies. Significant institutional engagement, substantial resource development, and active integration of AI into teaching practices demonstrate the program’s effectiveness. Key success factors included a multi-level program structure that accommodated diverse needs, strong institutional leadership commitment, the creation of a supportive community of practice, an emphasis on practical applications and quick wins, and ongoing support rather than one-time training.
Persistent challenges include the rapid pace of technological change, which requires continuous learning; balancing AI efficiency with the preservation of critical thinking; ensuring the factual accuracy of AI outputs in medical contexts; and adapting generic AI tools to specialized medical applications. These challenges appear inherent to educational AI integration and require ongoing attention rather than one-time solutions.
The program demonstrates the feasibility of large-scale faculty AI literacy initiatives even in challenging contexts characterized by resource constraints and broader societal disruptions. Our experience offers a replicable model for other medical education institutions. The high scalability potential suggests broad applicability across Ukrainian and international contexts.
Investment in faculty digital competencies represents not merely the adoption of innovative technologies, but a fundamental transformation of medical education for the twenty-first century. As AI capabilities continue advancing rapidly, the gap between AI-literate and AI-illiterate faculty will widen, with significant implications for student preparation and institutional competitiveness. Systematic faculty development programs, like ours, offer a pathway to ensure that medical educators can effectively guide students in developing the critical competencies necessary for AI-augmented healthcare practice.
Future work should focus on the long-term sustainability of faculty AI competencies, the impact on student learning outcomes, and the development of practical strategies for ongoing adaptation to rapidly evolving technology. Collaboration among institutions and sharing of effective practices will accelerate progress toward AI-integrated medical education that maintains the highest standards of academic rigor and ethical responsibility.
Ethics Approval and Consent to Participate
All procedures performed in this study were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Participation in the training program was voluntary, and all faculty members were informed that anonymized program evaluation data would be used for research and quality-improvement purposes. The Ethics Committee waived the requirement for written informed consent given the minimal-risk nature of this educational quality-improvement initiative. All participants provided verbal consent to participate.
Acknowledgements
We thank all IFNMU faculty members who participated in the training program and contributed to the development of resources. We acknowledge the university’s information technology staff for technical support and the Academic Council for approving and disseminating the methodological guidelines.
Data Availability
Data supporting the findings of this study are available from the corresponding author upon reasonable request, subject to ethical and legal restrictions. The developed AI assistants contain proprietary institutional information and are not publicly available.
Author Contributions
OB conceived the study, led program development and implementation, and drafted the manuscript. LH provided institutional leadership support, contributed to program design, and drafted the manuscript. RL contributed to the development of methodological materials and data collection. OBu provided an international perspective and contributed to manuscript revision. All authors read and approved the final manuscript.
References
1.Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: Systematic review. JMIR Med Educ. 2020; 6(1):e19285. Available from: https://doi.org/10.2196/19285
2.Sriram A, Ramachandran K, Krishnamoorthy S. Artificial intelligence in medical education: Transforming learning and practice. Cureus. 2025; 17(3):e80852. Available from: https://doi.org/10.7759/cureus.80852
3.Ng FYC, Thirunavukarasu AJ, Cheng H, Tan TF, Gutierrez L, Lan Y et al. Artificial intelligence education: An evidence-based medicine approach for consumers, translators, and developers. Cell Rep Med. 2023; 4(10):101230. Available from: https://doi.org/10.1016/j.xcrm.2023.101230
4.Saroha S. Artificial intelligence in medical education: Promise, pitfalls, and practical pathways. Adv Med Educ Pract. 2025; 16:1039–46. Available from: https://doi.org/10.2147/AMEP.S523255
5.Mehta N, Mehta S, Rubenstein A, Wood SK. Not replaced, but reinvented: AI education pathways to prepare future physicians to lead healthcare transformation. Perspect Med Educ. 2025; 14(1):849–59. Available from: https://doi.org/10.5334/pme.2233
6.Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023; 2(2):e0000198. Available from: https://doi.org/10.1371/journal.pdig.0000198
7.Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW et al. Large language models encode clinical knowledge. Nature. 2023; 620(7972):172–80. Available from: https://doi.org/10.1038/s41586-023-06291-2
8.Lymar L, Kuchyn I, Bielka K, Puljak L. Academic misconduct and artificial intelligence use by medical students, interns, and PhD students in Ukraine: a cross-sectional study. BMC Med Educ. 2025; 25(1):1496. Available from: https://doi.org/10.1186/s12909-025-08100-y
9.Knopp MI, Warm EJ, Weber D, Kelleher M, Kinnear B, Schumacher DJ et al. AI-enabled medical education: Threads of change, promising futures, and risky realities across four potential future worlds. JMIR Med Educ. 2023; 9:e50373. Available from: https://doi.org/10.2196/50373
10.Ahsan Z. Integrating artificial intelligence into medical education: a narrative systematic review of current applications, challenges, and future directions. BMC Med Educ. 2025; 25(1):1187. Available from: https://doi.org/10.1186/s12909-025-07744-0
11.Kasneci E, Sessler K, Küchemann S, Bannert M, Dementieva D, Fischer F et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Differ. 2023; 103:102274. Available from: https://doi.org/10.1016/j.lindif.2023.102274
12.Lee J, Wu AS, Li D, Kulasegaram KM. Artificial intelligence in undergraduate medical education: a scoping review. Acad Med. 2021; 96(11S):S62–70. Available from: https://doi.org/10.1097/ACM.0000000000004291
13.Tolentino R, Baradaran A, Gore G, Pluye P, Abbasgholizadeh-Rahimi S. Curriculum frameworks and educational programs in AI for medical students, residents, and practicing physicians: Scoping review. JMIR Med Educ. 2024; 10:e54793. Available from: https://doi.org/10.2196/54793
14.Masters K. Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Med Teach. 2023; 45(6):574–84. Available from: https://doi.org/10.1080/0142159X.2023.2186203
15.University policies [Internet]. Ivano-Frankivsk National Medical University [cited 2026 Jan 10]. Available from: https://www.ifnmu.edu.ua/home/public-information/univerctiy_policies/
16.Kuchyn IL, Lymar LV, Bielka KY, Storozhuk KV, Kolomiiets TV. New training, new attitudes: non-clinical components in Ukrainian medical PhDs training (regarding critical thinking, academic integrity and artificial intelligence use). Wiad Lek. 2024; 77(4):665–9. Available from: https://doi.org/10.36740/WLek202404108
17.Bakhov I, Opolska N, Bogus M, Anishchenko V, Biryukova Y. Emergency distance education in the conditions of COVID-19 pandemic: Experience of Ukrainian universities. Educ Sci (Basel). 2021; 11(7):364. Available from: https://doi.org/10.3390/educsci11070364
18.Oliveira LB, Pereira CA, Lunardelli A. Faculty development in health professions: a bibliometric analysis [Internet]. Research Square. 2025. Available from: https://doi.org/10.21203/rs.3.rs-5774420/v1
19.Sharov S, Tereshchuk S, Romas L, Zemlianska A, Derkachova O, Saregar A. Accessibility and responsibility: Use of generative AI by Ukrainian students. TEM J. 2025; 3572. Available from: https://doi.org/10.18421/tem144-62
20.Grunhut J, Marques O, Wyatt ATM. Needs, challenges, and applications of artificial intelligence in medical education curriculum. JMIR Med Educ. 2022; 8(2):e35587. Available from: https://doi.org/10.2196/35587
21.Philosophy of the future in the context of scientific and pedagogical workers training and artificial intelligence application. Futurity Philosophy. 2023; 44–62. Available from: https://doi.org/10.57125/fp.2023.03.30.04