Naglaa Fadul, Mona Fahad Alaskar, Kamal Bakari Jillahi, Dalia Bassem El-Khaled
IntSys Research - J. Fut. Artif. Intell. Tech. - Journal of Future Artificial Intelligence and Technologies
Abstract:
This review examines the rapidly expanding landscape of Generative Artificial Intelligence (GenAI) in healthcare, focusing on how models such as GANs, VAEs, diffusion models, and large language models (LLMs) are being explored across medical imaging, clinical documentation, synthetic data generation, drug discovery, and decision-support workflows. Despite GenAI's growing influence, persistent challenges, including limited annotated datasets, concerns over model generalizability, privacy risks, and the opacity of generative architectures, underscore the need for careful evaluation and governance. Accordingly, this study aims to map current applications, assess methodological and ethical constraints, and identify future research opportunities. Drawing on a structured search across ScienceDirect, Scopus, and other sources, the study follows a narrative review design complemented by quantitative descriptive analysis of the literature. Applying PRISMA-guided screening and standardized data extraction, the review synthesizes evidence from 110 studies published up to October 2025. The findings indicate that the literature frequently reports improvements in imaging quality, data augmentation, molecular modeling workflows, and clinical documentation through generative approaches, particularly in technically constrained settings; however, evidence of clinically validated impact remains uneven across domains. While issues of bias, hallucination, and limited interpretability persist as significant obstacles, imaging-focused applications appear comparatively more mature than decision-support and patient-level modeling tasks. Across domains, diffusion models are commonly associated with higher visual fidelity in biomedical image generation, whereas LLMs demonstrate promise in narrative-oriented tasks but require stronger factual grounding and external verification mechanisms. Overall, the evidence suggests that GenAI's potential in healthcare is highly context-dependent and contingent upon robust validation frameworks, transparent governance, and human-in-the-loop oversight. The review concludes that responsible integration of GenAI, guided by ethical, legal, and clinical safeguards, will be essential for ensuring safe, equitable, and sustainable adoption in healthcare research, delivery, and policy.