Multimodal Deep Learning for Chelonia mydas Conservation Genomics: Architectures, Fusion Approaches, and Research Directions

Faizah Aplop, Afshan Naseem

1Institute of Oceanography and Environment (INOS), Universiti Malaysia Terengganu, 21030 Kuala Nerus, Terengganu, Malaysia

faizah_aplop@umt.edu.my


Abstract

Modern deep learning (DL) approaches have enhanced predictive capability in conservation genomics beyond what is achievable with conventional statistical models. For Chelonia mydas (green sea turtle), DL is promising for genomic-enabled prediction of conservation-relevant traits (e.g., disease susceptibility, growth, hatchling success), but conventional unimodal approaches often underuse the rich ecological and health context surrounding genomic data. Multimodal deep learning (MMDL) addresses this limitation by integrating multiple information sources, such as genomic sequences and variants, environmental drivers (sea-surface temperature, salinity, storm exposure), and phenotypic or health indicators, to increase predictive capacity. In this review, we introduce the core concepts of MMDL, outline widely used neural architectures (MLPs, CNNs, temporal RNNs/Transformers, autoencoders), and describe strategies for fusing heterogeneous modalities (early, intermediate, and late fusion). We also summarize practical computational resources for implementing MMDL in turtle genomics. Finally, we survey applications and cross-domain evidence relevant to C. mydas, providing a meta-level view of when and why MMDL performs well and of the capacity of these techniques to handle multifaceted conservation challenges. In general, because it can capture cross-modal relationships, multimodal deep learning achieves higher prediction accuracy than single-modality deep learning and conventional machine learning approaches, albeit with increased computational demands. In C. mydas, effective implementation relies
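To make the fusion terminology concrete, the sketch below illustrates intermediate (feature-level) fusion: each modality is passed through its own encoder, the resulting embeddings are concatenated, and a joint head produces a single prediction. This is a minimal, dependency-free illustration with toy fixed-weight linear "encoders"; all weights, dimensions, and feature names are hypothetical and chosen only to show the data flow, not taken from any C. mydas study.

```python
def linear(x, w):
    """Apply a weight matrix w (one row per output unit) to vector x."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def intermediate_fusion(genomic, environment, phenotype):
    """Encode each modality separately, then fuse at the feature level."""
    # Per-modality encoders project raw features to 2-D embeddings
    # (toy fixed weights; a real model would learn these).
    w_gen = [[0.5, -0.1, 0.2], [0.3, 0.4, -0.2]]   # 3 genomic features -> 2
    w_env = [[0.1, 0.6], [-0.3, 0.2]]              # 2 environmental -> 2
    w_phe = [[0.7], [-0.4]]                        # 1 phenotypic -> 2
    # Intermediate fusion: concatenate the three 2-D embeddings.
    z = (linear(genomic, w_gen)
         + linear(environment, w_env)
         + linear(phenotype, w_phe))
    # Joint head maps the fused 6-D representation to one trait score.
    w_head = [[0.2, -0.1, 0.3, 0.1, -0.2, 0.4]]
    return linear(z, w_head)[0]

# Illustrative inputs: genomic variants, environmental drivers, phenotype.
score = intermediate_fusion([1.0, 0.0, 2.0], [0.5, 1.5], [2.0])
```

By contrast, early fusion would concatenate the raw inputs before any encoder, and late fusion would train a separate predictor per modality and combine their outputs; the intermediate variant shown here lets each encoder match its modality while still allowing cross-modal interactions in the joint head.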

