LUMEN: Low-light Unified Multi-stage Enhancement Network to Improve RetinaFace-Based Face Detection
(Rivaldi Ramadani, De Rosal Ignatius Moses Setiadi, Imanuel Harkespan, Kristoko Dwi Hartomo, Christian Arthur)
DOI: 10.62411/faith.3048-3719-123 | Volume 2, Issue 4 | Citations: 5 | 25-Dec-2025
Last updated: 29-Jan-2026
Abstract:
Face detection in low-light conditions remains challenging due to underexposure, noise, and unstable contrast, which significantly degrade the performance of convolutional-based models. Conventional enhancement techniques, such as Histogram Equalization, Contrast-Limited Adaptive Histogram Equalization (CLAHE), and Low-Light Image Enhancement (LIME), often improve brightness but introduce visual artifacts and lack robustness for face detection. This study proposes the Low-light Unified Multi-stage Enhancement Network (LUMEN), a lightweight multi-stage image enhancement pipeline designed as a preprocessing module to improve RetinaFace-based face detection under low-light conditions. LUMEN integrates Multiscale Retinex with Color Restoration, adaptive gamma correction, CLAHE-based local contrast enhancement, controlled image fusion, and Non-Local Means denoising to jointly stabilize illumination, preserve texture, and maintain visual naturalness. The method is evaluated on a low-light subset of the Human Faces Object Detection Dataset using RetinaFace. Detection performance is assessed using detection rate and confidence score, while visual quality is evaluated using no-reference metrics such as the Natural Image Quality Evaluator (NIQE) and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). Experimental results show that LUMEN achieves a face detection rate of 91% with a high confidence score (0.9545), representing an improvement of 33 percentage points over raw low-light images (58%). While its detection performance is comparable to that of CLAHE, the second-best method, LUMEN delivers superior perceptual quality, achieving the lowest BRISQUE score, which indicates better texture preservation and perceptual stability, along with a more stable visual appearance, reduced noise amplification, and fewer contrast artifacts in low-light facial regions.
Ablation studies confirm that Retinex and CLAHE are the most critical components for detection robustness, while gamma correction, fusion, and denoising mainly contribute to visual naturalness. These results demonstrate that LUMEN provides an effective and practical preprocessing solution for low-light face detection.
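Two of the stages named in the abstract, adaptive gamma correction and controlled fusion, can be illustrated with a minimal NumPy sketch. The target-mean heuristic and the fusion weights below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def adaptive_gamma(img, target_mean=0.5):
    # Choose gamma so the global mean brightness moves toward target_mean.
    # This target-mean heuristic is an illustrative stand-in for the
    # paper's adaptive rule.
    mean = float(img.mean())
    gamma = np.log(target_mean) / np.log(max(mean, 1e-6))
    return np.clip(img ** gamma, 0.0, 1.0)

def fuse(stages, weights):
    # Controlled fusion: a convex combination of enhancement stages,
    # clipped so the result stays in the valid [0, 1] range.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.clip(sum(wi * s for wi, s in zip(w, stages)), 0.0, 1.0)

dark = np.full((4, 4), 0.1)        # underexposed toy "image" in [0, 1]
bright = adaptive_gamma(dark)      # gamma-corrected stage
fused = fuse([dark, bright], [0.3, 0.7])
```

In the full pipeline the fusion would combine the Retinex, gamma, and CLAHE outputs; the sketch uses two stages only to keep the convex-combination idea visible.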
Integrating Quantum, Deep, and Classic Features with Attention-Guided AdaBoost for Medical Risk Prediction
(Muh Galuh Surya Putra Kusuma, De Rosal Ignatius Moses Setiadi, Wise Herowati, T. Sutojo, Prajanto Wahyu Adi, Pushan Kumar Dutta, Minh T. Nguyen)
DOI: 10.62411/jcta.14873 | Volume 3, Issue 2 | Citations: 49 | 11-Oct-2025
Abstract:
Chronic diseases such as chronic kidney disease (CKD), diabetes, and heart disease remain major causes of mortality worldwide, highlighting the need for accurate and interpretable diagnostic models. However, conventional machine learning methods often face challenges of limited generalization, feature redundancy, and class imbalance in medical datasets. This study proposes an integrated classification framework that unifies three complementary feature paradigms: classical tabular attributes, deep latent features extracted through an unsupervised Long Short-Term Memory (LSTM) encoder, and quantum-inspired features derived from a five-qubit circuit implemented in PennyLane. These heterogeneous features are fused using a feature-wise attention mechanism combined with an AdaBoost classifier to dynamically weight feature contributions and enhance decision boundaries. Experiments were conducted on three benchmark medical datasets—CKD, early-stage diabetes, and heart disease—under both balanced and imbalanced configurations using stratified five-fold cross-validation. All preprocessing and feature extraction steps were carefully isolated within each fold to ensure fair evaluation. The proposed hybrid model consistently outperformed conventional and ensemble baselines, achieving peak accuracies of 99.75% (CKD), 96.73% (diabetes), and 91.40% (heart disease) with corresponding ROC AUCs up to 1.00. Ablation analyses confirmed that attention-based fusion substantially improved both accuracy and recall, particularly under imbalanced conditions, while SMOTE contributed minimally once feature-level optimization was applied. Overall, the attention-guided AdaBoost framework provides a robust and interpretable approach for clinical risk prediction, demonstrating that integrating diverse quantum, deep, and classical representations can significantly enhance feature discriminability and model reliability in structured medical data.
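The feature-wise attention fusion described above can be sketched in NumPy: the three feature groups are concatenated and each feature is reweighted by a softmax over relevance scores. The scores here are a toy stand-in for the learned attention weights, and the rescaling choice is an assumption for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(classical, deep, quantum, scores):
    # Concatenate the three feature groups, then reweight every feature
    # by a softmax over per-feature relevance scores (a stand-in for the
    # learned attention described in the abstract).
    feats = np.concatenate([classical, deep, quantum])
    w = softmax(np.asarray(scores, dtype=float))
    return feats * w * feats.size  # rescale so the weights average to 1

# 3 classical, 2 deep, 5 quantum-inspired features; uniform scores
fused = attention_fuse(np.ones(3), np.ones(2), np.ones(5), scores=np.zeros(10))
```

The fused vector would then feed the AdaBoost classifier; with uniform scores the fusion reduces to plain concatenation, which makes the attention's effect easy to ablate.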
A Probabilistic Feature-Augmented GRU-Attention Model for Chronic Disease Prediction on Imbalanced Data
(Muhammad Nabil Aisy, Sari Ayu Wulandari, De Rosal Ignatius Moses Setiadi)
DOI: 10.62411/faith.3048-3719-100 | Volume 2, Issue 2 | Citations: 46 | 28-Jul-2025
Abstract:
This study proposes a lightweight hybrid model that integrates probabilistic feature augmentation, Gated Recurrent Units (GRU), and Multi-Head Attention to enhance chronic disease prediction on imbalanced clinical tabular data. The research addresses the challenge of low recall and poor minority-class detection in conventional models, aiming to improve predictive robustness and interpretability. The proposed model leverages logistic regression to generate probability-based feature augmentations, which are combined with sequential dependencies learned by GRU and refined through attention mechanisms. Evaluated on three benchmark medical datasets, Breast Cancer, Heart Disease, and Hepatitis C, the model achieves a maximum F1-score of 0.951, a recall of 0.944, and an AUC of 0.976, outperforming traditional machine learning baselines and single-path deep learning models. The attention module enhances interpretability by highlighting relevant features, supporting clinical insights. These findings confirm that probabilistic augmentation and attention-guided deep architectures can significantly improve prediction performance on imbalanced medical data. The results support the study’s objective to design an accurate, interpretable, and clinically relevant prediction model.
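The probabilistic augmentation step can be sketched as appending a logistic-regression probability as an extra feature column. The weights below are placeholders for coefficients that would be fit on the training fold; this is a minimal illustration, not the paper's full GRU-attention model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def augment_with_probability(X, w, b):
    # Append a logistic-regression class probability as one extra
    # feature column. w and b stand in for coefficients fit on the
    # training fold only, so no information leaks from the test fold.
    p = sigmoid(X @ w + b)
    return np.hstack([X, p[:, None]])

X = np.array([[0.0, 0.0], [2.0, 2.0]])
Xa = augment_with_probability(X, w=np.array([1.0, 1.0]), b=0.0)
```

The augmented matrix, now one column wider, is what the GRU and attention layers would consume downstream.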
Towards intelligent post-quantum security: a machine learning approach to FrodoKEM, Falcon, and SIKE
(Muhamad Akrom, De Rosal Ignatius Moses Setiadi)
DOI: 10.62411/jimat.v2i1.12865 | Volume 2, Issue 1 | Citations: 0 | 14-Jun-2025
Abstract:
The rapid advancement of quantum computing poses a substantial threat to classical cryptographic systems, accelerating the global shift toward post-quantum cryptography (PQC). Despite their theoretical robustness, practical deployment of PQC algorithms remains hindered by challenges such as computational overhead, side-channel vulnerabilities, and poor adaptability to dynamic environments. This study integrates machine learning (ML) techniques to enhance three representative PQC algorithms: FrodoKEM, Falcon, and Supersingular Isogeny Key Encapsulation (SIKE). ML is employed for four key purposes: performance optimization through Bayesian and evolutionary parameter tuning; real-time side-channel leakage detection using deep learning models; dynamic algorithm switching based on runtime conditions using reinforcement learning; and cryptographic forensics through anomaly detection on vulnerable implementations. Experimental results demonstrate up to 23.6% reduction in key generation time, over 96% accuracy in side-channel detection, and significant gains in adaptability and leakage resilience. ML models also identified predictive patterns of cryptographic fragility in the now-broken SIKE protocol. These findings confirm that machine learning augments performance and security and enables intelligent and adaptive cryptographic infrastructures for the post-quantum era.
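The dynamic algorithm switching mentioned above can be caricatured as a bandit-style policy: explore occasionally, otherwise pick the scheme with the best observed reward. The reward values and the epsilon-greedy rule are illustrative assumptions, far simpler than the reinforcement-learning switcher the paper describes.

```python
import random

def choose_algorithm(q_values, epsilon=0.1, rng=None):
    # Epsilon-greedy selection among PQC schemes by estimated reward:
    # with probability epsilon explore a random scheme, otherwise pick
    # the best-performing one so far.
    rng = rng or random.Random(0)
    names = list(q_values)
    if rng.random() < epsilon:
        return rng.choice(names)
    return max(names, key=q_values.get)

# Toy reward estimates (e.g., throughput under current runtime conditions)
q = {"FrodoKEM": 0.7, "Falcon": 0.9, "SIKE": 0.2}
pick = choose_algorithm(q, epsilon=0.0)  # greedy: highest estimated reward
```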
Integrating Hybrid Statistical and Unsupervised LSTM-Guided Feature Extraction for Breast Cancer Detection
(De Rosal Ignatius Moses Setiadi, Arnold Adimabua Ojugo, Octara Pribadi, Etika Kartikadarma, Bimo Haryo Setyoko, Suyud Widiono, Robet Robet, Tabitha Chukwudi Aghaunor, Eferhire Valentine Ugbotu)
DOI: 10.62411/jcta.12698 | Volume 2, Issue 4 | Citations: 0 | 05-May-2025
Abstract:
Breast cancer is the most prevalent cancer among women worldwide, requiring early and accurate diagnosis to reduce mortality. This study proposes a hybrid classification pipeline that integrates Hybrid Statistical Feature Selection (HSFS) with unsupervised LSTM-guided feature extraction for breast cancer detection using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. Initially, 20 features were selected using HSFS based on Mutual Information, Chi-square, and Pearson Correlation. To address class imbalance, the training set was balanced using the Synthetic Minority Over-sampling Technique (SMOTE). Subsequently, an LSTM encoder extracted non-linear latent features from the selected features. A fusion strategy was applied by concatenating the statistical and latent features, followed by re-selection of the top 30 features. The final classification was performed using a Support Vector Machine (SVM) with RBF kernel and evaluated using 5-fold cross-validation and a held-out test set. Experimental results showed that the proposed method achieved an average training accuracy of 98.13%, F1-score of 98.13%, and AUC-ROC of 99.55%. On the held-out test set, the model reached an accuracy of 99.30%, precision of 100%, and F1-score of 99.05%, with an AUC-ROC of 0.9973. The proposed pipeline demonstrates improved generalization and interpretability compared to existing methods such as LightGBM-PSO, DHH-GRU, and ensemble deep networks. These results highlight the effectiveness of combining statistical selection and LSTM-based latent feature encoding in a balanced classification framework.
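One ingredient of the HSFS stage, Pearson-correlation ranking, can be sketched in NumPy. The toy data and the top-k cutoff are illustrative; the paper's HSFS additionally combines Mutual Information and Chi-square ranks before selecting features.

```python
import numpy as np

def pearson_scores(X, y):
    # Absolute Pearson correlation of each feature column with the label.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = Xc.T @ yc
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.abs(num / den)

def select_top_k(X, y, k):
    # Rank features by score and keep the k best (one HSFS ingredient).
    idx = np.argsort(pearson_scores(X, y))[::-1][:k]
    return np.sort(idx)

X = np.array([[1., 5., 0.], [2., 3., 0.1], [3., 1., -0.1], [4., 2., 0.05]])
y = np.array([0., 0., 1., 1.])
keep = select_top_k(X, y, k=2)  # indices of the retained feature columns
```

The retained columns would then pass to SMOTE balancing and the LSTM encoder described in the abstract.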
Quantum Key Distribution-Assisted Image Encryption Using 7D and 2D Hyperchaotic Systems
(Zahrah Asri Nur Fauzyah, Aceng Sambas, Prajanto Wahyu Adi, De Rosal Ignatius Moses Setiadi)
DOI: 10.62411/faith.3048-3719-93 | Volume 2, Issue 1 | Citations: 39 | 22-Apr-2025
Abstract:
Secure image transmission is increasingly vital in the digital era, especially against emerging quantum threats. This study proposes a hybrid image encryption scheme that integrates Quantum Key Distribution (QKD) using the BB84 protocol with a combination of 7-dimensional (7D) and 2-dimensional (2D) hyperchaotic systems to achieve robust security. The BB84 protocol facilitates quantum-assisted key exchange, ensuring resistance to eavesdropping, while the hyperchaotic systems provide high entropy and complex randomness, utilized in a layered permutation-substitution encryption framework. The initial seeds for chaotic sequences are derived using a SHA-512 hash of both the input image and quantum-generated key, ensuring uniqueness and sensitivity. Experimental validation was conducted using several benchmark images. The information entropy values of the ciphered images reached up to 7.9993, indicating excellent randomness. Differential analysis showed high resistance to small perturbations, with NPCR exceeding 99.61% and UACI averaging around 33.47%, which meet standard security thresholds. Histogram and chi-square tests confirmed the uniform pixel distribution, with chi-square values below 280, satisfying the randomness criterion for 8-bit images. Furthermore, correlation coefficients of adjacent pixels dropped to near zero, evidencing effective decorrelation. The encryption scheme also demonstrated robustness to data loss, as shown by the successful decryption of partially corrupted cipher images. Robustness testing under partial data loss (200×200-pixel blocks) also demonstrated visual recoverability and algorithm resilience. Overall, the proposed BB84-assisted dual-hyperchaotic encryption scheme offers a secure and computationally effective solution for protecting sensitive image data, making it suitable for post-quantum secure communications.
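The NPCR and UACI figures quoted above follow standard definitions, which a short NumPy sketch makes concrete: NPCR counts how many pixels differ between two cipher images, and UACI averages the normalized intensity difference. The 2x2 arrays are toy inputs, not images from the paper.

```python
import numpy as np

def npcr_uaci(c1, c2):
    # NPCR: percentage of pixel positions that differ between two
    # cipher images. UACI: mean absolute intensity difference,
    # normalized by the 8-bit peak value 255.
    c1 = c1.astype(float)
    c2 = c2.astype(float)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci

a = np.array([[0, 255], [128, 64]], dtype=np.uint8)
b = np.array([[255, 0], [128, 192]], dtype=np.uint8)
npcr, uaci = npcr_uaci(a, b)
```

For a strong cipher, flipping one bit of the plaintext should drive NPCR above roughly 99.6% and UACI toward 33%, the thresholds the abstract reports meeting.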
AI-Powered Steganography: Advances in Image, Linguistic, and 3D Mesh Data Hiding – A Survey
(De Rosal Ignatius Moses Setiadi, Sudipta Kr Ghosal, Aditya Kumar Sahu)
DOI: 10.62411/faith.3048-3719-76 | Volume 2, Issue 1 | Citations: 97 | 04-Apr-2025
Abstract:
The rapid evolution of artificial intelligence (AI) has significantly transformed the field of steganography, extending its scope beyond conventional image-based techniques to novel domains such as linguistic and 3D mesh data hiding. This review presents a concise, accessible, and critical examination of recent AI-powered steganography methods, focusing on three distinct modalities: image, linguistic, and 3D mesh. Unlike most surveys, which focus on a single modality, this work spans multiple modalities, identifies their unique challenges, and discusses how AI has reshaped embedding mechanisms, evaluation strategies, and security concerns. In image-based steganography, deep models such as GANs and Transformers have improved imperceptibility and extraction accuracy, but face limitations in computational efficiency and extraction consistency. Linguistic steganography, previously hindered by semantic fragility, has been revitalized by large language models (LLMs), enabling context-aware and reversible embedding, though still constrained by metric standardization and synchronization issues. Meanwhile, 3D mesh steganography remains dominated by non-AI methods, offering fertile ground for innovation through geometric deep learning. This review also provides a comparative summary of design principles, performance metrics, and modality-specific trade-offs. The analysis reveals a shift in evaluation paradigms, from numeric fidelity (e.g., PSNR, SSIM) to semantic and perceptual metrics (e.g., LPIPS, BERTScore, Hausdorff Distance). Looking ahead, future directions include cross-modal integration, domain adaptation, lightweight AI models, and the development of unified benchmarks. By presenting recent advances and critical perspectives across underexplored domains, this survey aims to inspire early-stage researchers and practitioners to explore new frontiers of steganography in the AI era.
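PSNR, the numeric-fidelity baseline the survey contrasts with perceptual metrics, is easy to state exactly; the cover/stego pair below is a toy example of two least-significant-bit flips, not data from any surveyed method.

```python
import numpy as np

def psnr(original, stego, peak=255.0):
    # Peak signal-to-noise ratio: the classic numeric-fidelity metric
    # for stego images. Higher values mean the embedding is less
    # visible at the pixel level.
    mse = np.mean((original.astype(float) - stego.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

cover = np.zeros((2, 2), dtype=np.uint8)
stego = np.array([[1, 0], [0, 1]], dtype=np.uint8)  # two LSB flips
value = psnr(cover, stego)
```

Metrics like LPIPS or BERTScore replace this pixel-wise MSE with learned perceptual or semantic distances, which is the paradigm shift the survey highlights.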
Aspect-Based Sentiment Analysis on E-commerce Reviews using BiGRU and Bi-Directional Attention Flow
(De Rosal Ignatius Moses Setiadi, Warto Warto, Ahmad Rofiqul Muslikh, Kristiawan Nugroho, Achmad Nuruddin Safriandono)
DOI: 10.62411/jcta.12376 | Volume 2, Issue 4 | Citations: 0 | 01-Apr-2025
Abstract:
Aspect-Based Sentiment Analysis (ABSA) is vital in capturing customer opinions on specific e-commerce products and service attributes. This study proposes a hybrid deep learning model integrating Bi-Directional Gated Recurrent Units (BiGRU) and Bi-Directional Attention Flow (BiDAF) to perform aspect-level sentiment classification. BiGRU captures sequential dependencies, while BiDAF enhances attention by focusing on sentiment-relevant segments. The model is trained on an Amazon review dataset with preprocessing steps, including emoji handling, slang normalization, and lemmatization. It achieves a peak training accuracy of 99.78% at epoch 138 with early stopping. The model delivers a strong performance on the Amazon test set across four key aspects: price, quality, service, and delivery, with F1 scores ranging from 0.90 to 0.92. The model was also evaluated on the SemEval 2014 ABSA dataset to assess generalizability. Results on the restaurant domain achieved an F1-score of 88.78% and 83.66% on the laptop domain, outperforming several state-of-the-art baselines. These findings confirm the effectiveness of the BiGRU-BiDAF architecture in modeling aspect-specific sentiment across diverse domains.
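One of the preprocessing steps, slang normalization, is typically dictionary-based and can be sketched in a few lines. The lexicon below is a toy assumption, not the mapping used in the paper.

```python
def normalize_slang(text, slang_map=None):
    # Dictionary-based slang normalization: replace each known slang
    # token with its canonical form, leaving other tokens untouched.
    # The default lexicon is a toy example.
    slang_map = slang_map or {"gr8": "great", "thx": "thanks", "u": "you"}
    return " ".join(slang_map.get(tok, tok) for tok in text.split())

cleaned = normalize_slang("thx the delivery was gr8")
```

After normalization (and emoji handling and lemmatization), the cleaned tokens feed the BiGRU-BiDAF model.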
Feature Fusion with Albumentation for Enhancing Monkeypox Detection Using Deep Learning Models
(Nizar Rafi Pratama, De Rosal Ignatius Moses Setiadi, Imanuel Harkespan, Arnold Adimabua Ojugo)
DOI: 10.62411/jcta.12255 | Volume 2, Issue 3 | Citations: 0 | 21-Feb-2025
Abstract:
Monkeypox is a zoonotic disease caused by Orthopoxvirus, presenting clinical challenges due to its visual similarity to other dermatological conditions. Early and accurate detection is crucial to prevent further transmission, yet conventional diagnostic methods are often resource-intensive and time-consuming. This study proposes a deep learning-based classification model by integrating Xception and InceptionV3 using feature fusion to enhance performance in classifying Monkeypox skin lesions. Given the limited availability of annotated medical images, data augmentation was applied using Albumentation to improve model generalization. The proposed model was trained and evaluated on the Monkeypox Skin Lesion Dataset (MSLD), achieving 85.96% accuracy, 86.47% precision, 85.25% recall, 78.43% specificity, and an AUC score of 0.8931, outperforming existing methods. Notably, data augmentation significantly improved recall from 81.23% to 85.25%, demonstrating its effectiveness in enhancing sensitivity to positive cases. Ablation studies further validated that augmentation increased overall accuracy from 82.02% to 85.96%, emphasizing its role in improving model robustness. Comparative analysis with other models confirmed the superiority of our approach. This research enhances automated Monkeypox detection, offering a robust and efficient tool for low-resource clinical settings. The findings reinforce the potential of feature fusion and augmentation in improving deep learning-based medical image classification, facilitating more reliable and accessible disease identification.
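The augmentation idea can be shown with a minimal NumPy stand-in for the Albumentations pipeline: generate flipped variants of each image to enlarge a small training set. Which transforms the paper actually composed is not specified here, so the flips are illustrative only.

```python
import numpy as np

def augment_flips(img):
    # Minimal stand-in for an augmentation pipeline: return the image
    # plus its horizontal and vertical flips. Real pipelines add random
    # crops, color jitter, etc., applied on the fly during training.
    return [img, img[:, ::-1], img[::-1, :]]

img = np.arange(6).reshape(2, 3)   # toy 2x3 "image"
batch = augment_flips(img)
```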
High-Performance Face Spoofing Detection using Feature Fusion of FaceNet and Tuned DenseNet201
(Leygian Reyhan Zuama, De Rosal Ignatius Moses Setiadi, Ajib Susanto, Stefanus Santosa, Hong-Seng Gan, Arnold Adimabua Ojugo)
DOI: 10.62411/faith.3048-3719-62 | Volume 1, Issue 4 | Citations: 76 | 12-Feb-2025
Abstract:
Face spoofing detection is critical for biometric security systems to prevent unauthorized access. This study proposes a deep learning-based approach integrating FaceNet and DenseNet201 to enhance face spoofing detection performance. FaceNet generates identity-based embeddings, ensuring robust facial feature representation, while DenseNet201 extracts complementary texture-based features. These features are fused using the Concatenate function to form a more comprehensive representation for improved classification. The proposed method is evaluated on two widely used face spoofing datasets, NUAA Photograph Imposter and LCC-FASD, achieving 100% accuracy on NUAA and 99% on LCC-FASD. Ablation studies reveal that data augmentation does not always enhance performance, particularly on high-complexity datasets such as LCC-FASD, where augmentation increases the False Rejection Rate (FRR). Conversely, DenseNet201 benefits more from augmentation, while the proposed method performs best without augmentation. Comparative analysis with previous studies further confirms the superiority of the proposed approach in reducing error rates, particularly Half Total Error Rate (HTER), False Acceptance Rate (FAR), and FRR. These findings indicate that combining identity-based embeddings and texture-based feature extraction significantly improves spoofing detection and enhances model robustness across different attack scenarios. This study advances biometric security by introducing an efficient feature fusion strategy that strengthens deep learning-based spoof detection. Future research may explore further optimization strategies and evaluate the approach on more diverse datasets to enhance generalization.
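The Concatenate-based fusion the abstract describes amounts to joining the two feature vectors end to end before classification. The vector sizes below (128 for a FaceNet-style embedding, 1920 for DenseNet201-style pooled features) are typical defaults, assumed here for illustration.

```python
import numpy as np

def fuse_features(embedding, texture):
    # Concatenate an identity embedding (FaceNet-style) with a
    # texture feature vector (DenseNet-style) into one representation
    # that a downstream classifier consumes.
    return np.concatenate([np.ravel(embedding), np.ravel(texture)])

identity = np.random.default_rng(0).normal(size=128)    # toy embedding
texture = np.random.default_rng(1).normal(size=1920)    # toy pooled features
fused = fuse_features(identity, texture)
```

Concatenation preserves both feature spaces intact, letting the classifier weight identity and texture cues independently, which is the complementarity the study exploits.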