Summary: | Silent speech interfaces (SSIs), which let users issue commands without uttering any audible sound, are becoming more popular. This technique is useful for people with neurological speech disorders and for environments where a speech-based system would be impractical, e.g., a noisy factory or a quiet library. However, state-of-the-art SSI solutions are mostly based on vision cameras or skin-mounted sensors; cameras raise privacy concerns, and skin sensors are impractical for many applications. In this paper, we therefore propose a radar-based SSI that is contactless and privacy-preserving. To this end, we construct 2-dimensional images of mouth movements from radar echoes as profiles of silent commands, and we propose a deep learning-based convolutional neural network (CNN) to recognize silent commands from these 2D images. Our evaluation indicates that the proposed SSI classifies four commands with up to 89% accuracy. © 2022 IEEE.
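The abstract's pipeline (2D radar-echo images fed to a CNN that outputs one of four command classes) could be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual architecture: the input resolution (64x64, single channel), layer sizes, and class count of 4 are assumptions for the sketch.

```python
# Hypothetical sketch of a small CNN classifying 2D radar "mouth-movement"
# images into 4 silent-command classes. Input size 64x64x1 is an assumption.
import torch
import torch.nn as nn

class SilentCommandCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel radar image
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),                  # logits, one per command
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SilentCommandCNN()
batch = torch.randn(8, 1, 64, 64)  # synthetic stand-in for radar-echo images
logits = model(batch)              # shape: (8, 4); argmax gives the command
```

In practice the network would be trained with cross-entropy loss on labeled command recordings, and the predicted command for an image is `logits.argmax(dim=1)`.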
|