Feature Substitution Using Latent Dirichlet Allocation for Text Classification
Text classification plays a pivotal role in natural language processing, enabling applications such as product categorization, sentiment analysis, spam detection, and document organization. Traditional methods, including bag-of-words and TF-IDF, often lead to high-dimensional feature spaces, increasing computational complexity and susceptibility to overfitting. This study introduces a novel feature substitution technique using Latent Dirichlet Allocation (FS-LDA), which enhances text representation by replacing non-overlapping high-probability topic words. FS-LDA reduces dimensionality while retaining essential semantic features, improving classification accuracy and efficiency. Experimental evaluations on five e-commerce datasets and an SMS spam dataset showed that FS-LDA, combined with Hidden Markov Models (HMMs), achieved up to 95% classification accuracy in binary tasks and significant improvements in macro and weighted F1-scores for multiclass tasks. The innovation lies in FS-LDA's ability to integrate dimensionality reduction with feature substitution, and its predictive advantage is demonstrated by consistent performance gains across diverse datasets. Future work will explore its application to other classification models and domains, such as social media analysis and medical document categorization, to further validate its scalability and robustness. © 2025 Science and Information Organization. All rights reserved.
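The abstract describes FS-LDA only at a high level. The sketch below is a minimal illustration of the general idea, not the authors' implementation: it assumes gensim's LdaModel, a toy corpus, a TOPN cutoff for each topic's high-probability words, and a simple rule in which each word that is a high-probability term of exactly one topic (i.e., non-overlapping across topics) is replaced by that topic's most probable word; all of these choices are assumptions.

```python
# Minimal illustrative sketch of LDA-based feature substitution.
# Assumptions: gensim's LdaModel, a toy corpus, TOPN high-probability words per
# topic, and each topic's single most probable word used as the substitute.
from gensim import corpora, models

# Toy documents standing in for e-commerce / SMS text (hypothetical data).
docs = [
    "cheap wireless mouse with usb receiver",
    "ergonomic wireless keyboard and mouse combo",
    "win a free prize now click this link",
    "urgent claim your free cash prize today",
]
tokenized = [d.split() for d in docs]

# Fit a small LDA model on the bag-of-words corpus.
dictionary = corpora.Dictionary(tokenized)
bows = [dictionary.doc2bow(toks) for toks in tokenized]
lda = models.LdaModel(bows, num_topics=2, id2word=dictionary,
                      passes=20, random_state=0)

# Collect each topic's high-probability words, then keep only words that belong
# to exactly one topic's list ("non-overlapping" words, per the abstract).
TOPN = 5
word_topics = {}
for t in range(lda.num_topics):
    for w, _ in lda.show_topic(t, topn=TOPN):   # (word, probability) pairs
        word_topics.setdefault(w, set()).add(t)
substitutable = {w: next(iter(ts)) for w, ts in word_topics.items() if len(ts) == 1}

# Representative substitute per topic: its most probable word (an assumption).
topic_label = {t: lda.show_topic(t, topn=1)[0][0] for t in range(lda.num_topics)}

def substitute(tokens):
    """Replace non-overlapping high-probability topic words with their topic's
    representative word; leave all other tokens unchanged."""
    return [topic_label[substitutable[tok]] if tok in substitutable else tok
            for tok in tokens]

for toks in tokenized:
    print(" ".join(toks), "->", " ".join(substitute(toks)))
```

In the paper the substituted token sequences are then fed to a Hidden Markov Model classifier; that step is omitted here, and any sequence classifier could be plugged in for experimentation.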
Published in: | International Journal of Advanced Computer Science and Applications, Vol. 16, Iss. 1 |
---|---|
Main Author: | Mathivanan N.M.N.; Janor R.M.; Razak S.A.; Md. Ghani N.A. |
Format: | Article |
Language: | English |
Published: | Science and Information Organization, 2025 |
ISSN: | 2158-107X |
DOI: | 10.14569/IJACSA.2025.01601105 |
Access: | All Open Access; Gold Open Access |
Online Access: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85216865449&doi=10.14569%2fIJACSA.2025.01601105&partnerID=40&md5=abf0789c4fb07f118c7d1b6f7437a0d3 |