Depression Classification in University Students Using a Machine Learning Approach Based on Multi-Layer Perceptron
DOI: https://doi.org/10.57152/predatecs.v3i2.2107

Keywords: Classification, Hold-Out Validation, Mental Health, Multi-Layer Perceptron, Student Depression

Abstract
Depression among university students is a critical mental health concern, often exacerbated by academic pressure and social adaptation. While prior studies have used Multi-Layer Perceptron (MLP) models to achieve up to 78% accuracy, the effectiveness of these systems remains highly sensitive to architectural design and optimization strategy. To address this gap, this study systematically evaluates the performance of modern MLP architectural variants (DenseNet, ResMLP, and ResNet) paired with the SGD, Adam, and RMSprop optimizers. Using a dataset of 1,025 student records, the methodology integrates Chi-Square feature selection and Min-Max normalization, followed by an 80:20 Hold-Out validation split. Results demonstrate that the ResNet-RMSprop combination yields a superior accuracy of 83.86%, significantly outperforming traditional MLP benchmarks. By identifying the optimal combination of deep learning structures and optimization algorithms, this research provides a more robust and precise technical foundation for AI-driven early detection systems in academic settings.
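The preprocessing pipeline described above (Chi-Square feature scoring, Min-Max normalization, and an 80:20 hold-out split) can be sketched as follows. This is a minimal illustrative sketch in pure Python, not the authors' implementation; the function names and toy data are hypothetical, and a real pipeline would typically use library routines (e.g. scikit-learn's `chi2`, `MinMaxScaler`, and `train_test_split`).

```python
import random

def min_max_scale(values):
    """Min-Max normalization: map a numeric feature onto [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def chi_square(feature, label):
    """Chi-square statistic of a binary feature against a binary class label.

    Higher scores indicate stronger feature-label dependence, which is the
    basis for ranking features in Chi-Square feature selection.
    """
    n = len(feature)
    # Observed 2x2 contingency counts
    obs = {(f, l): 0 for f in (0, 1) for l in (0, 1)}
    for f, l in zip(feature, label):
        obs[(f, l)] += 1
    chi2 = 0.0
    for f in (0, 1):
        for l in (0, 1):
            row = sum(obs[(f, x)] for x in (0, 1))   # marginal of feature value f
            col = sum(obs[(x, l)] for x in (0, 1))   # marginal of label value l
            expected = row * col / n
            if expected > 0:
                chi2 += (obs[(f, l)] - expected) ** 2 / expected
    return chi2

def hold_out_split(rows, test_ratio=0.2, seed=42):
    """Shuffled hold-out split, 80:20 by default."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(len(rows) * (1 - test_ratio))
    return [rows[i] for i in idx[:cut]], [rows[i] for i in idx[cut:]]

# Toy usage on synthetic records (illustration only)
scaled = min_max_scale([55, 70, 90, 60])
score = chi_square([1, 1, 0, 0, 1, 0, 1, 0], [1, 1, 0, 0, 1, 0, 0, 1])
train, test = hold_out_split(list(range(10)), test_ratio=0.2)
```

Features would be ranked by their chi-square scores and the top-ranked subset fed, after scaling, into the MLP variants under comparison.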
License
Copyright (c) 2026 Fatimah Azzahra, Muhammad Rafiq Pohan, Ainul Mardhiah Binti Mohammed Rafiq, Imran Hazim Bin Abdullah Salim, Azwa Nurnisya Binti Ayub, Nuralya Medina Binti Mohammad Nizam

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Copyright © by Author; Published by Institut Riset dan Publikasi Indonesia (IRPI)