pISSN: 3058-423X eISSN: 3058-4302
Open Access, Peer-reviewed
Jemin Kim, Jihee Boo, Chang Ook Park
10.17966/JMI.2024.29.3.85 Epub 2024 October 11
Abstract
The application of artificial intelligence (AI) in the medical mycology field represents a new era in the diagnosis and management of fungal infections. AI technologies, particularly machine learning (ML) and deep learning (DL) methods, enhance diagnostic accuracy by leveraging large datasets and complex algorithms. This review examines current applications of AI in laboratory and clinical settings for fungal diagnostics. In the laboratory, AI models analyze microscopic images from potassium hydroxide (KOH) examinations, fungal culture tests, and histopathologic slides, which improves the detection rates of fungal pathogens significantly. In the clinical setting, AI assists the diagnosis of fungal infections using medical images, exhibiting high efficacy in binary classification tasks. However, challenges include small sample sizes, class imbalances, reliance on expert-labeled data, and the black box nature of AI models. Explainable AI offers potential solutions by providing human-comprehensible insights into AI decision-making processes. In addition, human-computer collaboration can enhance diagnostic accuracy, particularly for less experienced clinicians. The development of generative AI models, e.g., large language models and multimodal AI, promises to create extensive datasets and integrate various data sources for comprehensive diagnostics. Addressing these limitations through prospective clinical validation and continuous feedback will be essential for realizing the full potential of AI in medical mycology.
Keywords
Artificial intelligence; Deep learning; Explainable AI; Fungal diagnostics
In medical science, the integration of artificial intelligence (AI) has marked a revolutionary shift, particularly in disease diagnostics and management. AI, which simulates human intelligence processes using machines, especially computer systems, encompasses learning, reasoning, and self-correction processes. Machine learning (ML) and its subset, deep learning (DL), have emerged as pivotal technologies within this broad domain1. ML algorithms leverage historical data to predict outcomes with increasing accuracy, and DL techniques enable computers to perform complex tasks by learning from examples and large datasets2.
The application of AI in clinical medicine has introduced a paradigm shift, notably in diagnostic accuracy, which now rivals that of specialist clinicians across various fields, including dermatology and ophthalmology. For example, the diagnostic prowess of AI techniques is evident in their ability to identify various conditions, e.g., diabetic retinopathy and skin cancer, frequently matching or exceeding the accuracy of human experts3,4.
In medical mycology, AI and DL technologies are set to transform how fungal infections are diagnosed and treated. Fungal infections, which can range from superficial skin conditions to invasive diseases affecting internal organs, pose significant diagnostic challenges due to the diverse morphology and genetic makeup of fungi5. The application of DL in medical mycology primarily focuses on enhancing the accuracy and efficiency of diagnosing fungal infections through advanced image analysis techniques6, which is critical in settings where rapid and accurate pathogen identification can influence clinical outcomes considerably. For example, recent advancements in AI have shown promise in automating the detection and classification of fungal pathogens in clinical and laboratory settings, making the technology a valuable tool for both direct patient care and backend laboratory analysis.
The primary objective of this review is to elucidate the current and potential applications of AI in the medical mycology field. This paper explores how AI technologies are applied in laboratory and clinical settings to diagnose fungal infections, and we examine the limitations inherent in current technologies and discuss the prospective advancements that could shape the future of fungal diagnostics.
Multiple AI-based and ML-based models have been developed to assist at various stages of laboratory diagnostics for fungal infections. These technologies are employed to analyze datasets derived from microscopic images of specimens obtained through potassium hydroxide (KOH) examinations, fungal culture tests, or histopathologic slides from human samples7-13. AI and ML can utilize four basic approaches, i.e., supervised, unsupervised, semisupervised, and reinforcement learning; however, most AI research on fungal diagnostics relies on supervised learning techniques. In this approach, researchers provide algorithms with labeled training data and a smaller dataset for validation. The algorithms are trained iteratively to learn the dynamics of the data and the relationship between the inputs and outputs, and then the corresponding models are evaluated on a test set to determine how accurately they match the gold labels14. Gold labels are frequently determined by expert interpretation of microscopic slide images or are validated through results from other confirmation tests, e.g., PCR analysis.
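To make this supervised workflow concrete, the minimal sketch below (in Python with scikit-learn) splits expert-labeled data into training, validation, and test subsets and scores the resulting model against the held-out gold labels. The feature matrix, labels, and classifier are hypothetical placeholders rather than any model from the studies cited here.

```python
# Minimal sketch of the supervised workflow described above (hypothetical data).
# Features could be, e.g., embeddings of KOH-examination images; labels are the
# expert-assigned gold labels (1 = fungal elements present, 0 = absent).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))      # placeholder image features
y = rng.integers(0, 2, size=1000)     # placeholder gold labels

# Hold out a test set, then carve a smaller validation set from the training data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)           # iterative learning from labeled examples

# Validation guides model choices; the test set estimates agreement with gold labels.
val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"Validation ROC AUC: {val_auc:.3f}, test accuracy: {test_acc:.3f}")
```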
Recent studies on DL methods for laboratory diagnosis of fungal infections are summarized in Table 1. Koo et al.8 developed a model using the YOLO-v4 network to detect hyphal structures in video images processed with KOH staining. They converted the videos into frame-by-frame images, and they annotated the fungal hyphae locations for training. The model was validated on datasets (Dataset-100, Dataset-40, and Dataset-all) comprising different optical magnification ratios. Here, two methods were employed for detection, i.e., image classification and object determination. In image classification, the presence of hyphae results in a positive outcome. In contrast, the object determination method identifies and analyzes hyphae-like objects for more detailed insights into their location and size. This method achieved ROC AUC values of 0.9987 for the Dataset-40 model and 0.9966 for the Dataset-100 model, highlighting its accuracy and reliability for the hyphae detection task.
Table 1. Recent studies on DL methods for the laboratory diagnosis of fungal infections

| Study | Objectives and key findings | Algorithm | Sample size |
| Koo et al. | To detect fungal hyphae from potassium hydroxide (KOH) examination images at 40-fold and 100-fold magnifications. ROC AUC of 0.9987 for 40X model and 0.9966 for 100X model. | Regional CNN | 3,707 images from |
| Yilmaz et al. | KOH examination to detect onychomycosis from microscopic images. These networks demonstrated superior diagnostic performance compared to dermatologists, with mean accuracy rates of | VGG16, | 457 images (fungi: 160, keratin: 297) |
| Tochigi et al. | To distinguish Aspergillus from Mucorales in GMS | Custom AI | 214 images (aspergillosis: 147, mucormycosis: 67) |
| Rahman et al. | Classified 89 different fungal genera from culture images using DL models, with DenseNet providing | DenseNet, Xception, | 1,079 images |
| Milanović et al. | For the presumptive determination of nondermatophyte molds. The software utilizes | EfficientNet-B2 | 8,138 images from |
| Zieliński et al. | Developed an ML approach using deep neural networks and bag-of-words to classify microscopic images of nine fungus species. Accuracy from | Multiple CNNs, | 180 images |
| Decroos et al. | For histopathological diagnosis of onychomycosis | CNN similar to | 727 images |

AUC: area under the curve; CNN: convolutional neural network; GMS: Grocott's methenamine silver; KOH: potassium hydroxide; PAS: periodic acid-Schiff; ROC: receiver operating characteristic; VGG: visual geometry group
Zieliński et al.13 introduced a method that utilizes deep neural networks and a bag-of-words approach to classify various fungal species from microscopic images, thereby eliminating the need for a costly biochemical identification process. Their multistep algorithm generates robust image features and classifies them using a support vector machine, which reduced diagnosis time by two to three days and lowered costs. This method focuses on morphologically similar species by refining various visual parameters, e.g., size, shape, and color, to realize improved accuracy in identification. In addition, Decroos et al.7 employed a CNN (similar to VGG-13) for the histopathological diagnosis of onychomycosis using whole slide images of PAS-stained nail clippings. This model was trained on data annotated by experts and refined through self-supervised learning to avoid overfitting, and it achieved an AUC value of 0.981. This method demonstrates noninferiority to human dermatopathologists and offers significant benefits, e.g., time savings and reduced misdiagnosis, particularly under high workloads or time pressures.
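As a rough illustration of this type of pipeline, the sketch below pairs a pretrained CNN feature extractor with a support vector machine classifier. It omits the bag-of-words pooling step used by Zieliński et al.13, and the backbone network, image paths, and species labels are hypothetical placeholders rather than the authors' actual configuration.

```python
# Sketch of a deep-features + classical-classifier pipeline in the spirit of
# Zieliński et al.: pretrained CNN as feature extractor, SVM as classifier.
# Image file names and species labels below are hypothetical placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

# Pretrained CNN with the classification head removed -> 512-dim feature vectors.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def extract_features(paths):
    """Encode each microscopic image as a fixed-length feature vector."""
    feats = []
    with torch.no_grad():
        for p in paths:
            img = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
            feats.append(backbone(img).squeeze(0).numpy())
    return feats

# Hypothetical expert-labeled training images of different fungal species.
train_paths = ["candida_01.png", "aspergillus_01.png"]    # placeholders
train_labels = ["C. albicans", "A. fumigatus"]            # placeholders

clf = SVC(kernel="rbf")
clf.fit(extract_features(train_paths), train_labels)
print(clf.predict(extract_features(["unknown_isolate.png"])))  # placeholder query
```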
In addition to laboratory tests, numerous reports have highlighted the use of AI models to diagnose fungal infections from medical images obtained in clinical settings. Studies leveraging large databases of medical images typically fall into two categories, i.e., those that focus exclusively on fungal infections15-17 and those that include a subset of fungal-related conditions within a broader range of disease presentations18-20. Most image labeling is based on the clinical assessment of specialists, and some studies have utilized diagnostic results from concurrent fungal cultures or incorporated other clinical information17,19.
Table 2 summarizes recent studies that have utilized DL methods to analyze medical images in clinical settings to aid the diagnosis of fungal infections. Han et al.15 developed an AI system to diagnose onychomycosis using a large dataset of 49,567 nail images. Their model combined ResNet-152 and VGG-19, and it achieved an AUC value of 0.98. This AI-based method outperformed most of the 42 dermatologists who participated in the study, with a sensitivity/specificity of 96.0%/94.7% for the primary dataset. The Youden index, which reflects diagnostic accuracy, was significantly higher for the AI system than for the dermatologists, demonstrating its potential in clinical diagnosis. In addition, Pangti et al.20 reported the development of a mobile health application based on DenseNet-161 to diagnose 40 common skin diseases. The algorithm was trained on datasets that included superficial fungal infections (tinea capitis, tinea cruris, tinea corporis or faciei, tinea manuum, tinea pedis, and tinea unguium), which comprised 20% of the total dataset by image count. The application was validated on 5,014 patients, achieving an overall top-1 accuracy of 75.07% and a mean AUC value of 0.90 for detecting these conditions. Tang et al.17 developed an AI model to classify pathogenic fungal genera in fungal keratitis using 3,364 in vivo confocal microscopy (IVCM) images. The model demonstrated an AUC of 0.887 and an accuracy of 81.7% for identifying Fusarium, and an AUC of 0.827 and an accuracy of 75.7% for identifying Aspergillus, showcasing the potential of DL in the clinical management of fungal keratitis.
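For reference, the Youden index used in such comparisons is simply sensitivity + specificity − 1. The short sketch below computes it from a confusion matrix; the counts are hypothetical and chosen only to approximate the reported operating point.

```python
# The Youden index (J = sensitivity + specificity - 1) used to compare the AI system
# with dermatologists; the confusion-matrix counts below are hypothetical.
def youden_index(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1, sensitivity, specificity

j, sens, spec = youden_index(tp=480, fn=20, tn=473, fp=27)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}, Youden J={j:.3f}")
# With the reported sensitivity/specificity of 96.0%/94.7%, J = 0.960 + 0.947 - 1 = 0.907.
```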
Table 2. Recent studies using DL methods to analyze medical images for diagnosing fungal infections in clinical settings

| Study | Objectives and key findings | Algorithm | Sample size |
| Han et al. | Classified onychomycosis in nail images with an | Ensemble model | 49,567 images |
| Nigat et al. | Used CNN to classify four common fungal skin | CNN-based model | 407 images |
| Liu et al. | Identified 26 common skin conditions in adult cases | Inception-v4 | 16,114 images |
| Pangti et al. | To diagnose 40 common skin diseases, including specific types of tinea (tinea capitis, tinea cruris, | DenseNet-161 | 15,418 images from |
| Muhaba et al. | To diagnose five common skin diseases (including | MobileNet-v2 | 1,880 images |
| Tang et al. | To classify pathogenic fungal genera in fungal | Inception-ResNet V2, | 3,364 images from |

AUC: area under the curve; CNN: convolutional neural network; GBM: gradient boosting machine; VGG: visual geometry group
Despite AI's promising applications in fungal diagnostics, several limitations must be acknowledged. First, sample sizes vary significantly across studies and frequently comprise fewer than 5,000 images, which results in substantial class imbalance and selection bias. Many studies utilize closed, in-house datasets, thereby hindering cross-validation and broader applicability21.
Second, various models have shown high performance in binary classification tasks against consistent backgrounds, e.g., detecting the presence of onychomycosis in the nail plate6; however, their efficacy in multilabel classification tasks or when the background environment varies is inconsistent. This variability can limit the utility of AI in more complex diagnostic scenarios, where the model must differentiate between multiple conditions or adapt to diverse imaging settings.
Third, the performance of these algorithms is heavily dependent on the quality of the provided images and the accuracy of the gold labels, which are frequently based on the assessments of a few clinical experts. This reliance is particularly problematic for tests like KOH, where interobserver variability can affect algorithm training and outcomes.
Furthermore, ML models struggle to distinguish diagnostically meaningful structures from background or artifacts, even though algorithms have been developed to facilitate the object detection of relevant fungal structures8,22. The black box nature of DL algorithms raises concerns about the transparency and interpretability of their decision-making processes, which are critical for establishing and maintaining clinical trust and justification.
Finally, most studies have been retrospective and validated in silico rather than in real-world clinical settings. Prospective validation is crucial to ensure these algorithms perform reliably in uncontrolled, high-stakes environments, address ethical concerns, and minimize selection bias23.
As AI methods continue to evolve, overcoming the black box nature of DL models has become increasingly important. Explainable AI is gaining traction, particularly in medical fields, to elucidate how neural networks arrive at their decisions. Most attempts to produce human-comprehensible explanations for decisions acquired using ML methods focus on post-hoc explainability, aiming to dissect the model's decision-making process24. Techniques such as heat maps (or saliency maps) highlight the areas of an image that are most crucial for a diagnosis. In contrast, other methods, e.g., local interpretable model-agnostic explanations (LIME) and Shapley values, perturb input examples to identify which local features most influence the model's decisions25. These approaches can help identify AI-derived image biomarkers that significantly affect the identification of specific fungal species.
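As a simple illustration of the saliency-map idea, the sketch below computes an input-gradient saliency map for a pretrained CNN. The network and image file are placeholders rather than a published fungal-diagnosis model; dedicated libraries such as Captum or SHAP provide more refined post-hoc methods.

```python
# Minimal input-gradient saliency map (a basic post-hoc explainability technique).
# The pretrained network and input image are placeholders for a fungal-diagnosis model.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1").eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

img = preprocess(Image.open("koh_slide.png").convert("RGB"))  # placeholder image
img = img.unsqueeze(0).requires_grad_(True)

# Gradient of the top predicted class score with respect to the input pixels.
scores = model(img)
scores[0, scores.argmax()].backward()

# Max absolute gradient over color channels -> one importance value per pixel.
saliency = img.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # (224, 224); large values mark regions driving the prediction
```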
Another promising development is the application of human-computer collaboration and augmented decision-making processes, which are becoming increasingly prominent in image-based AI fields. Research has demonstrated that AI assistance can significantly improve the diagnostic accuracy of nonexpert physicians for various skin conditions26. Less experienced clinicians derive the most significant benefit from AI support27, which suggests that AI could be particularly valuable in settings that lack specialized personnel for diagnosing fungal infections. However, there is a risk of over-reliance on AI, which could lead clinicians to adhere to incorrect AI diagnoses; continuous feedback and iterative testing in clinical environments are essential to mitigate this issue.
Finally, leveraging large-scale generative AI models, e.g., the Chat Generative Pretrained Transformer (ChatGPT), represents a novel approach. In addition, generative adversarial networks can create extensive datasets from small sample sizes or generate images with specific phenotypes28. Further, combining medical images, electronic health records, molecular analyses related to fungal infections29, and the existing literature through multimodal self-supervised training could enable generalist medical AI (GMAI)30 that makes decisions in a manner similar to human clinicians. The advent of GMAI promises revolutionary advancements in medical mycology by offering comprehensive and integrated diagnostic capabilities.
In conclusion, the integration of AI in medical mycology holds immense potential to revolutionize fungal diagnostics. Despite the limitations related to sample size variability, class imbalance, and the black box nature of DL models, ongoing advancements in explainable AI, human-computer collaboration, and generative AI models promise significant improvements. By addressing these challenges and harnessing the power of AI effectively and efficiently, we can enhance diagnostic accuracy, reduce misdiagnosis, and optimize clinical outcomes. Future research and prospective clinical validation will be crucial to fully realizing the transformative impact of AI in the medical mycology field.
References
1. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521:436-444
2. Goldenberg SL, Nir G, Salcudean SE. A new era: artificial intelligence and machine learning in prostate cancer. Nat Rev Urol 2019;16:391-403
3. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115-118
4. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016;316:2402-2410
5. Fang W, Wu J, Cheng M, Zhu X, Du M, Chen C, et al. Diagnosis of invasive fungal infections: challenges and recent developments. J Biomed Sci 2023;30:42
6. Lim SS, Ohn J, Mun JH. Diagnosis of onychomycosis: from conventional techniques and dermoscopy to artificial intelligence. Front Med (Lausanne) 2021;8:637216
7. Decroos F, Springenberg S, Lang T, Paepper M, Zapf A, Metze D, et al. A deep learning approach for histopathological diagnosis of onychomycosis: not inferior to analogue diagnosis by histopathologists. Acta Derm Venereol 2021;101:adv00532
8. Koo T, Kim MH, Jue MS. Automated detection of superficial fungal infections from microscopic images through a regional convolutional neural network. PLoS One 2021;16:e0256290
9. Milanović M, Otašević S, Ranđelović M, Grassi A, Cafarchia C, Mares M, et al. Multi-convolutional neural network-based diagnostic software for the presumptive determination of non-dermatophyte molds. Electronics 2024;13:594
10. Rahman MA, Clinch M, Reynolds J, Dangott B, Meza Villegas DM, Nassar A, et al. Classification of fungal genera from microscopic images using artificial intelligence. J Pathol Inform 2023;14:100314
11. Tochigi N, Sadamoto S, Oura S, Kurose Y, Miyazaki Y, Shibuya K. Artificial intelligence in the diagnosis of invasive mold infection: development of an automated histologic identification system to distinguish between Aspergillus and Mucorales. Med Mycol J 2022;63:91-97
12. Yilmaz A, Göktay F, Varol R, Gencoglan G, Uvet H. Deep convolutional neural networks for onychomycosis detection using microscopic images with KOH examination. Mycoses 2022;65:1119-1126
13. Zieliński B, Sroka-Oleksiak A, Rymarczyk D, Piekarczyk A, Brzychczy-Włoch M. Deep learning approach to describe and classify fungi microscopic images. PLoS One 2020;15:e0234806
14. Nichols JA, Herbert Chan HW, Baker MAB. Machine learning: applications of artificial intelligence to imaging and diagnosis. Biophys Rev 2019;11:111-118
15. Han SS, Park GH, Lim W, Kim MS, Na JI, Park I, et al. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: automatic construction of onychomycosis datasets by region-based convolutional deep neural network. PLoS One 2018;13:e0191493
16. Nigat TD, Sitote TM, Gedefaw BM. Fungal skin disease classification using the convolutional neural network. J Healthc Eng 2023;2023:6370416
17. Tang N, Huang G, Lei D, Jiang L, Chen Q, He W, et al. An artificial intelligence approach to classify pathogenic fungal genera of fungal keratitis using corneal confocal microscopy images. Int Ophthalmol 2023;43:2203-2214
18. Liu Y, Jain A, Eng C, Way DH, Lee K, Bui P, et al. A deep learning system for differential diagnosis of skin diseases. Nat Med 2020;26:900-908
19. Muhaba KA, Dese K, Aga TM, Zewdu FT, Simegn GL. Automatic skin disease diagnosis using deep learning from clinical image and patient information. Skin Health Dis 2021;2:e81
20. Pangti R, Mathur J, Chouhan V, Kumar S, Rajput L, Shah S, et al. A machine learning-based, decision support, mobile phone application for diagnosis of common dermatological diseases. J Eur Acad Dermatol Venereol 2021;35:536-545
21. Puri P, Comfere N, Drage LA, Shamim H, Bezalel SA, Pittelkow MR, et al. Deep learning for dermatologists: Part II. Current applications. J Am Acad Dermatol 2022;87:1352-1360
22. Jansen P, Creosteanu A, Matyas V, Dilling A, Pina A, Saggini A, et al. Deep learning assisted diagnosis of onychomycosis on whole-slide images. J Fungi (Basel) 2022;8:912
23. Evans EL, Whicher D. What should oversight of clinical decision support systems look like? AMA J Ethics 2018;20:E857-863
24. Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health 2021;3:e745-e750
25. Van der Velden BHM, Kuijf HJ, Gilhuijs KGA, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022;79:102470
26. Tschandl P, Rinner C, Apalla Z, Argenziano G, Codella N, Halpern A, et al. Human-computer collaboration for skin cancer recognition. Nat Med 2020;26:1229-1234
27. Kim J, Lee C, Choi S, Sung DI, Seo J, Lee YN, et al. Augmented decision-making in wound care: evaluating the clinical utility of a deep-learning model for pressure injury staging. Int J Med Inform 2023;180:105266
28. Kim K, Cho K, Jang R, Kyung S, Lee S, Ham S, et al. Updated primer on generative artificial intelligence and large language models in medical imaging for medical professionals. Korean J Radiol 2024;25:224-242
29. Kim SM, Park CO. Immune response to fungal pathogens. J Mycol Infect 2020;25:1-9
30. Moor M, Banerjee O, Abad ZSH, Krumholz HM, Leskovec J, Topol EJ, et al. Foundation models for generalist medical artificial intelligence. Nature 2023;616:259-265