Human Action Recognition (HAR) using Image Processing on Deep Learning

The advancement of artificial intelligence (AI) has brought many benefits to human society as a whole. By integrating AI technology into daily activities, we can gain access to knowledge we could previously only imagine. In human action recognition (HAR), the objective is to process photos and videos to discern whether a human is present, map and classify the subject, and finally determine the action being carried out. Achieving this requires several steps and a careful approach, supported by extensive research, troubleshooting and experimentation. The AI architecture must learn from a collected dataset in order to identify actions properly. HAR is implemented in Python on a real-time webcam feed. The MediaPipe Pose Detection library detects human anatomy in the input through joint key-points; the MediaPipe algorithm extracts features along the x, y and z axes together with a visibility score (four variables per key-point), and the extracted data is used to train and test a CNN-LSTM classifier model. The output, an RGB skeleton overlay and an action label on the detected subject (standing, waving, walking or sitting), yielded good results. © 2023 IEEE.
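
The abstract outlines a pipeline of per-frame MediaPipe Pose key-point extraction (x, y, z and visibility for each landmark) followed by a CNN-LSTM classifier over the four listed actions. Below is a minimal Python sketch of such a pipeline; the sequence length, layer sizes, window handling and label order are assumptions for illustration, not the authors' implementation.

# Minimal sketch (not the paper's code): buffer MediaPipe Pose key-points
# (x, y, z, visibility for each of 33 landmarks) into short sequences and
# classify them with a small CNN-LSTM. SEQ_LEN and layer sizes are assumed.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

ACTIONS = ["standing", "waving", "walking", "sitting"]  # labels from the paper
SEQ_LEN = 30                  # frames per classified window (assumption)
N_FEATURES = 33 * 4           # 33 landmarks x (x, y, z, visibility)

def build_cnn_lstm(num_classes=len(ACTIONS)):
    # Small CNN-LSTM over key-point sequences; architecture details assumed.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

def extract_keypoints(results):
    # Flatten the detected pose landmarks into a (132,) feature vector.
    if results.pose_landmarks is None:
        return np.zeros(N_FEATURES, dtype=np.float32)
    return np.array(
        [[lm.x, lm.y, lm.z, lm.visibility]
         for lm in results.pose_landmarks.landmark],
        dtype=np.float32,
    ).flatten()

def run_webcam(model):
    mp_pose = mp.solutions.pose
    mp_draw = mp.solutions.drawing_utils
    window = []
    cap = cv2.VideoCapture(0)
    with mp_pose.Pose() as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # RGB skeleton overlay on the detected subject
                mp_draw.draw_landmarks(frame, results.pose_landmarks,
                                       mp_pose.POSE_CONNECTIONS)
            window.append(extract_keypoints(results))
            window = window[-SEQ_LEN:]
            if len(window) == SEQ_LEN:
                probs = model.predict(np.expand_dims(window, axis=0), verbose=0)[0]
                cv2.putText(frame, ACTIONS[int(np.argmax(probs))], (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            cv2.imshow("HAR", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    model = build_cnn_lstm()  # in practice, load trained weights before inference
    run_webcam(model)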


Bibliographic Details
Published in: Proceedings - 13th IEEE International Conference on Control System, Computing and Engineering, ICCSCE 2023
Main Author: Ismail A.P.; Azahar M.A.B.; Tahir N.M.; Daud K.; Kasim N.M.
Format: Conference paper
Language: English
Published: Institute of Electrical and Electronics Engineers Inc., 2023
DOI: 10.1109/ICCSCE58721.2023.10237158
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85172935205&doi=10.1109%2fICCSCE58721.2023.10237158&partnerID=40&md5=4191d3c69d7c66d97b4f98ee9314231e