Handwriting Image Classification for Automated Diagnosis of Learning Disabilities: A Review on Deep Learning Models and Future Directions

Full description

Bibliographic Details
Published in: 19th International Joint Symposium on Artificial Intelligence and Natural Language Processing, iSAI-NLP 2024
Main Authors: Sukiman S.A.; Husin N.A.; Hamdan H.; Murad M.A.A.
Format: Conference paper
Language: English
Published: Institute of Electrical and Electronics Engineers Inc. 2024
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85216562911&doi=10.1109%2fiSAI-NLP64410.2024.10799245&partnerID=40&md5=94ad7c5e822a95e47c2061ced489ffc5
Description
Summary: This study reviews deep learning models used in handwriting image classification for the automated diagnosis of learning disabilities. To address handwriting diversity and misclassification challenges, two models are highlighted: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). Literature was retrieved from major databases including IEEE Xplore, Scopus, Web of Science (WoS), and Google Scholar, with studies on Parkinson's disease, tremor patients, and machine learning excluded. CNNs represent a more mature architecture built on convolutions, pooling, and activation functions, while ViTs emerge as a promising alternative through their multi-head attention architecture. The review also compares the accuracy of both models, specifies the sources of handwriting images, and provides future directions relevant to the research field. © 2024 IEEE.
ISSN:
DOI:10.1109/iSAI-NLP64410.2024.10799245
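
Illustrative sketch (not part of the record): the summary contrasts the CNN building blocks (convolutions, pooling, activation functions) with the ViT's patch-embedding and multi-head attention design. The following minimal PyTorch sketch shows one toy instance of each family for a hypothetical binary handwriting-image classifier; the layer sizes, the 224x224 grayscale input, and the two-class head are illustrative assumptions, not models taken from the reviewed studies.

# Minimal sketch of the two architecture families compared in the review,
# assuming PyTorch; sizes and labels are illustrative placeholders only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Convolution + pooling + activation functions, the blocks the review attributes to CNNs."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local stroke patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global pooling to a fixed-size vector
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

class TinyViT(nn.Module):
    """Patch embedding + multi-head self-attention, the mechanism the review attributes to ViTs."""
    def __init__(self, num_classes: int = 2, patch: int = 16, dim: int = 64, img_size: int = 224):
        super().__init__()
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)  # non-overlapping patches
        num_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))           # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)           # multi-head attention blocks
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.patchify(x).flatten(2).transpose(1, 2) + self.pos     # (batch, patches, dim)
        return self.classifier(self.encoder(tokens).mean(dim=1))            # mean-pool tokens, classify

if __name__ == "__main__":
    dummy = torch.randn(4, 1, 224, 224)  # four fake grayscale handwriting scans
    print(TinyCNN()(dummy).shape, TinyViT()(dummy).shape)  # both -> torch.Size([4, 2])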