Recognition of Radar-Based Deaf Sign Language Using Convolution Neural Network
The difficulties in communication between deaf and hearing people through sign language can be overcome by applying deep learning to gesture signal recognition. The use of a Convolutional Neural Network (CNN) to distinguish radar-based gesture signals of deaf sign language has not previously been investigated. This paper describes the recognition of deaf sign language gestures using radar and a CNN. Six deaf sign language gestures were acquired from hearing subjects using a radar system and processed. A Short-Time Fourier Transform was applied to extract the gesture features, and classification was performed with a CNN. The performance of the CNN was examined using two types of input: segmented and non-segmented spectrograms. Recognition accuracy was higher (92.31%) with the non-segmented spectrograms than with the segmented ones. Radar-based deaf sign language can therefore be recognised accurately using a CNN without segmentation.
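The abstract outlines a two-stage pipeline: a Short-Time Fourier Transform converts each radar gesture return into a spectrogram, and a CNN classifies the spectrogram into one of the six gestures. The sketch below is a minimal illustration of that idea only; the sampling rate, STFT window settings, network layout, and the helper names `radar_to_spectrogram` and `build_cnn` are assumptions for illustration and are not taken from the paper, which does not state its implementation details here.

```python
# Illustrative sketch (assumed parameters throughout): STFT spectrogram of a
# radar return, then a small CNN classifier over six gesture classes.
import numpy as np
from scipy.signal import stft
import tensorflow as tf

FS = 2_000          # assumed sampling rate of the radar return (Hz)
N_CLASSES = 6       # six deaf sign language gestures, as in the abstract

def radar_to_spectrogram(signal, fs=FS, nperseg=128, noverlap=96):
    """Convert a 1-D radar return into a log-magnitude spectrogram."""
    _, _, zxx = stft(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    spec = 20.0 * np.log10(np.abs(zxx) + 1e-8)   # dB scale for dynamic range
    return spec[..., np.newaxis]                  # add channel axis for the CNN

def build_cnn(input_shape, n_classes=N_CLASSES):
    """A small, hypothetical CNN over spectrogram 'images'."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

if __name__ == "__main__":
    # Synthetic stand-in for one recorded gesture return (1 s of noise).
    dummy_return = np.random.randn(FS)
    spec = radar_to_spectrogram(dummy_return)
    model = build_cnn(spec.shape)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(...) would then be run on labelled spectrograms of the six gestures.
    print(spec.shape, model.output_shape)
```

In this sketch, the non-segmented input that the abstract reports performing best corresponds to passing the full `spec` array to the CNN without splitting it along the time axis.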
Published in: | International Journal of Integrated Engineering |
---|---|
Main Authors: | Malik M.D.H.D.; Mansor W.; Rashid N.E.A.; Rahman M.Z.U. |
Format: | Article |
Language: | English |
Published: | Penerbit UTHM, 2023 |
Online Access: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85170289448&doi=10.30880%2fijie.2023.15.03.012&partnerID=40&md5=d0f8ac3d6f1c1f0dd4921c42d60d3248 |
id | 2-s2.0-85170289448 |
---|---|
author | Malik M.D.H.D.; Mansor W.; Rashid N.E.A.; Rahman M.Z.U. |
title | Recognition of Radar-Based Deaf Sign Language Using Convolution Neural Network |
publishDate | 2023 |
container_title | International Journal of Integrated Engineering |
container_volume | 15 |
container_issue | 3 |
doi_str_mv | 10.30880/ijie.2023.15.03.012 |
url | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85170289448&doi=10.30880%2fijie.2023.15.03.012&partnerID=40&md5=d0f8ac3d6f1c1f0dd4921c42d60d3248 |
publisher | Penerbit UTHM |
issn | 2229-838X |
language | English |
format | Article |
accesstype | All Open Access; Bronze Open Access |
record_format | scopus |
collection | Scopus |
_version_ | 1812871797529378816 |