Mobile Augmented Reality Based on Multimodal Inputs for Experiential Learning
The power of mobile devices has been harnessed as a platform for augmented reality (AR), opening opportunities to combine multiple inputs, known as multimodal inputs. There has been little research into multimodal inputs that combine emotion, speech and markers in a mobile AR learning environment. This study proposes a framework for a mobile AR learning system that combines three multimodal inputs, namely emotion, image-based marker and speech, in order to determine how such a combination can enhance the learning experience. The proposed framework integrates the multimodal inputs using a decision tree and is developed into a four-phase learning system based on Kolb's experiential learning model. To evaluate the system, 38 primary-school students were divided into two groups for a vocabulary-learning experiment. Quantitative findings showed better results for learning effectiveness, mental load, engagement, competency and challenge when the three inputs, speech, marker and emotion, were combined. The proposed multimodal framework can therefore serve as a guideline for developing multimodal AR learning applications by integrating multimodal inputs with Kolb's experiential learning model.
Published in: IEEE Access
Main Author:
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers Inc., 2022
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85135745670&doi=10.1109%2fACCESS.2022.3193498&partnerID=40&md5=75d2db5c5adbc0eba989c360db22a459
id |
2-s2.0-85135745670 |
author |
Hashim N.C.; Majid N.A.A.; Arshad H.; Hashim H.; Abdi Alkareem Alyasseri Z. |
title |
Mobile Augmented Reality Based on Multimodal Inputs for Experiential Learning |
publishDate |
2022 |
container_title |
IEEE Access |
container_volume |
10 |
container_issue |
|
doi_str_mv |
10.1109/ACCESS.2022.3193498 |
url |
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85135745670&doi=10.1109%2fACCESS.2022.3193498&partnerID=40&md5=75d2db5c5adbc0eba989c360db22a459 |
description |
The power of mobile devices has been harnessed as a platform for augmented reality (AR), opening opportunities to combine multiple inputs, known as multimodal inputs. There has been little research into multimodal inputs that combine emotion, speech and markers in a mobile AR learning environment. This study proposes a framework for a mobile AR learning system that combines three multimodal inputs, namely emotion, image-based marker and speech, in order to determine how such a combination can enhance the learning experience. The proposed framework integrates the multimodal inputs using a decision tree and is developed into a four-phase learning system based on Kolb's experiential learning model. To evaluate the system, 38 primary-school students were divided into two groups for a vocabulary-learning experiment. Quantitative findings showed better results for learning effectiveness, mental load, engagement, competency and challenge when the three inputs, speech, marker and emotion, were combined. The proposed multimodal framework can therefore serve as a guideline for developing multimodal AR learning applications by integrating multimodal inputs with Kolb's experiential learning model. © 2022 IEEE.
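The description above says the framework integrates emotion, marker and speech inputs via a decision tree to drive the AR learning flow. As a rough illustration only — this is not the paper's implementation, and all names, inputs and actions below are hypothetical — such rule-based routing of three modalities might be sketched as:

```python
# Hypothetical sketch: decision-rule routing of three multimodal inputs
# (emotion, image-based marker, speech) to an AR learning action, loosely
# mirroring the decision-tree integration the abstract describes.
from dataclasses import dataclass
from typing import Optional


@dataclass
class MultimodalInput:
    emotion: str                 # e.g. "happy", "confused" (from face analysis)
    marker_id: Optional[str]     # detected image-based marker, if any
    speech: Optional[str]        # recognized spoken word, if any


def decide_action(m: MultimodalInput) -> str:
    """Walk simple decision rules and return the next AR action."""
    if m.marker_id is None:
        return "prompt_scan_marker"        # no marker yet: ask learner to scan one
    if m.speech and m.speech.lower() == m.marker_id:
        return "show_reward_animation"     # spoken word matches the marker's word
    if m.emotion == "confused":
        return "replay_audio_hint"         # learner seems to struggle: give a hint
    return "show_3d_model"                 # default: render the AR content
```

A real system would replace these hand-written rules with a trained decision tree and feed it live emotion-recognition, marker-tracking and speech-recognition outputs; the sketch only shows the routing idea.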
publisher |
Institute of Electrical and Electronics Engineers Inc. |
issn |
21693536 |
language |
English |
format |
Article |
accesstype |
All Open Access; Gold Open Access |
record_format |
scopus |
collection |
Scopus |