References
1. El Arab, R.A., Almoosa, Z., Alkhunaizi, M., Abuadas, F.H., Somerville, J.: Artificial intelligence in hospital infection prevention: an integrative review. Front. Public Health. 13, 1547450 (2025). https://doi.org/10.3389/fpubh.2025.1547450
2. Guha, A., Shah, V., Nahle, T., et al.: Artificial intelligence applications in cardio-oncology: a comprehensive review. Curr. Cardiol. Rep. 27(1), 56 (2025). https://doi.org/10.1007/s11886-025-02215-w
3. Păcuraru, I.-M., Chirvase, C.-S., Tiriteu, Ș.-I.: The role of artificial intelligence in personalised medicine: advancements, challenges, and future perspectives. Bus. Excell. Manag. 15(1), 59–84 (2025). https://doi.org/10.24818/beman/2025.15.1-05
4. Ratwani, R.M., Sutton, K., Galarraga, J.E.: Addressing AI algorithmic bias in health care. JAMA. 332(13), 1051–1052 (2024). https://doi.org/10.1001/jama.2024.14735
5. Velichkovska, B., Gjoreski, H., Denkovski, D., et al.: Bias in vital signs? Machine learning models can learn patients' race or ethnicity from the values of vital signs alone. BMJ Health Care Inf. 32(1), e101098 (2025). https://doi.org/10.1136/bmjhci-2024-101098
6. Hasanzadeh, F., Josephson, C.B., Waters, G., Adedinsewo, D., Azizi, Z., White, J.A.: Bias recognition and mitigation strategies in artificial intelligence healthcare applications. NPJ Digit. Med. 8(1), 154 (2025). https://doi.org/10.1038/s41746-025-01503-7
7. Cary, M.P. Jr., Grady, S.D., McMillian-Bohler, J., et al.: Building competency in artificial intelligence and bias mitigation for nurse scientists and aligned health researchers. Nurs. Outlook. 73(3), 102395 (2025). https://doi.org/10.1016/j.outlook.2024.102395
8. Gameiro, R.R., Woite, N.L., Sauer, C.M., et al.: The data artifacts glossary: a community-based repository for bias on health datasets. J. Biomed. Sci. 32(1), 14 (2025). https://doi.org/10.1186/s12929-024-01106-6
9. Ferryman, K., Cesare, N., Creary, M., Nsoesie, E.O.: Racism is an ethical issue for healthcare artificial intelligence. Cell Rep. Med. 5(6), 101617 (2024). https://doi.org/10.1016/j.xcrm.2024.101617
10. Lee, T., Puyol-Antón, E., Ruijsink, B., Aitcheson, K., Shi, M., King, A.P.: An investigation into the impact of deep learning model choice on sex and race bias in cardiac MR segmentation. In: Workshop on Clinical Image-Based Procedures. Springer (2023). https://doi.org/10.1007/978-3-031-45249-9_21
11. Thompson, H.M., Sharma, B., Bhalla, S., et al.: Bias and fairness assessment of a natural language processing opioid misuse classifier: detection and mitigation of electronic health record data disadvantages across racial subgroups. J. Am. Med. Inf. Assoc. 28(11), 2393–2403 (2021). https://doi.org/10.1093/jamia/ocab148
12. Liu, M., Ning, Y., Teixayavong, S., et al.: A scoping review and evidence gap analysis of clinical AI fairness. NPJ Digit. Med. 8(1), 360 (2025). https://doi.org/10.1038/s41746-025-01667-2
13. Correa, R., Shaan, M., Trivedi, H., et al.: A systematic review of 'fair' AI model development for image classification and prediction. J. Med. Biol. Eng. 42(6), 816–827 (2022). https://doi.org/10.1007/s40846-022-00754-z
14. de Vieira, C., Barboza, J.R., Cajueiro, F., Kimura, D.: Towards fair AI: mitigating bias in credit decisions—a systematic literature review. J. Risk Financ. Manag. 18(5), 228 (2025). https://doi.org/10.3390/jrfm18050228
15. Fields, C.T., Black, C., Thind, J.K., et al.: Governance for anti-racist AI in healthcare: integrating racism-related stress in psychiatric algorithms for Black Americans. Front. Digit. Health. 7, 1492736 (2025). https://doi.org/10.3389/fdgth.2025.1492736
16. Abulibdeh, R., Celi, L.A., Sejdić, E.: The illusion of safety: a report to the FDA on AI healthcare product approvals. PLOS Digit. Health. 4(6), e0000866 (2025). https://doi.org/10.1371/journal.pdig.0000866
17. Page, M.J., McKenzie, J.E., Bossuyt, P.M., et al.: The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 372, n71 (2021). https://doi.org/10.1136/bmj.n71
18. Allen, A., Mataraso, S., Siefkas, A., et al.: A racially unbiased, machine learning approach to prediction of mortality: algorithm development study. JMIR Public Health Surveill. 6(4), e22400 (2020). https://doi.org/10.2196/22400
19. Gupta, R., Sasaki, M., Taylor, S.L., et al.: Developing and applying the BE-FAIR equity framework to a population health predictive model: a retrospective observational cohort study. J. Gen. Intern. Med. 1–11 (2025). https://doi.org/10.1007/s11606-025-09462-1
20. Wang, H., Landers, M., Adams, R., et al.: A bias evaluation checklist for predictive models and its pilot application for 30-day hospital readmission models. J. Am. Med. Inf. Assoc. 29(8), 1323–1333 (2022). https://doi.org/10.1093/jamia/ocac065
21. Cronjé, H.T., Katsiferis, A., Elsenburg, L.K., et al.: Assessing racial bias in type 2 diabetes risk prediction algorithms. PLOS Glob. Public Health. 3(5), e0001556 (2023). https://doi.org/10.1371/journal.pgph.0001556
22. Velichkovska, B., Gjoreski, H., Denkovski, D., et al.: Vital signs as a source of racial bias. medRxiv (2022). https://doi.org/10.1101/2022.02.03.22270291
23. Velichkovska, B., Gjoreski, H., Denkovski, D., et al.: AI learns racial information from the values of vital signs. medRxiv (2023). https://doi.org/10.1101/2023.12.11.23299819
24. Khor, S., Haupt, E.C., Hahn, E.E., et al.: Racial and ethnic bias in risk prediction models for colorectal cancer recurrence when race and ethnicity are omitted as predictors. JAMA Netw. Open. 6(6), e2318495 (2023). https://doi.org/10.1001/jamanetworkopen.2023.18495
25. Pfob, A., Heil, J.: Artificial intelligence to de-escalate loco-regional breast cancer treatment. Breast. 68, 201–204 (2023). https://doi.org/10.1016/j.breast.2023.09.009
26. Bouguettaya, A., Stuart, E.M., Aboujaoude, E.: Racial bias in AI-mediated psychiatric diagnosis and treatment: a qualitative comparison of four large language models. NPJ Digit. Med. 8(1), 332 (2025). https://doi.org/10.1038/s41746-025-01512-5
27. Gulamali, F., Sawant, A.S., Liharska, L., et al.: Detecting, characterizing, and mitigating implicit and explicit racial biases in health care datasets with subgroup learnability: algorithm development and validation study. J. Med. Internet Res. 27, e71757 (2025). https://doi.org/10.2196/71757
28. Chang, T., Nuppnau, M., He, Y., et al.: Racial differences in laboratory testing as a potential mechanism for bias in AI: a matched cohort analysis in emergency department visits. PLOS Glob. Public Health. 4(10), e0003555 (2024). https://doi.org/10.1371/journal.pgph.0003555
29. Mikhaeil, J.M., Gelman, A., Greengard, P.: Hierarchical Bayesian models to mitigate systematic disparities in prediction with proxy outcomes. J. R. Stat. Soc. Ser. A Stat. Soc., qnae142 (2024). https://doi.org/10.1093/jrsssa/qnae142
30. Ladin, K., Cuddeback, J., Duru, O.K., et al.: Guidance for unbiased predictive information for healthcare decision-making and equity (GUIDE): considerations when race may be a prognostic factor. NPJ Digit. Med. 7(1), 290 (2024). https://doi.org/10.1038/s41746-024-01245-y
31. Sjoding, M.W., Valley, T.S.: Pulse oximetry and inequitable consequences of health policy. Am. J. Respir. Crit. Care Med. 207(1), 5–6 (2023). https://doi.org/10.1164/rccm.202209-1692ED
32. Cerrato, P.L., Halamka, J.D.: How AI drives innovation in cardiovascular medicine. Front. Cardiovasc. Med. 11, 1397921 (2024). https://doi.org/10.3389/fcvm.2024.1397921
33. Chen, R.J., Wang, J.J., Williamson, D.F., et al.: Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat. Biomed. Eng. 7(6), 719–742 (2023). https://doi.org/10.1038/s41551-023-01056-8
34. Huang, J., Galal, G., Etemadi, M., Vaidyanathan, M.: Evaluation and mitigation of racial bias in clinical machine learning models: scoping review. JMIR Med. Inf. 10(5), e36388 (2022). https://doi.org/10.2196/36388
35. Radingwana, T.T., Afolabi, O.A., Adeleke, O.O.: Multi-domain AI fairness in healthcare: a systematic review synthesis. Front. Digit. Health. 7, 1456789 (2025)
36. Xu, J., Xiao, Y., Wang, W.H., et al.: Algorithmic fairness in computational medicine. EBioMedicine. 84, 104250 (2022). https://doi.org/10.1016/j.ebiom.2022.104250
37. Chinta, S.V., Wang, Z., Palikhe, A., et al.: AI-driven healthcare: a review on ensuring fairness and mitigating bias. arXiv preprint arXiv:2407.19655 (2024). https://doi.org/10.48550/arXiv.2407.19655
38. Wells, G.A., Shea, B., O'Connell, D., Peterson, J., Welch, V., Losos, M., Tugwell, P.: The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. Ottawa Hospital Research Institute. http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp. Accessed 18 Jan 2026
39. Critical Appraisal Skills Programme (CASP): CASP Qualitative Checklist. CASP UK. https://casp-uk.net/casp-tools-checklists/. Accessed 18 Jan 2026