A Smart Embedded System to Map Video Feeds of Human Actions through the Virtual Characters


Sana Mushtaq
Syed Aun Irtaza

Abstract

Virtual reality (VR) has made significant advances in recent years and has the potential to transform the way we interact with digital content. One area of research that has gained attention is the integration of facial expression recognition (FER) technology. FER involves detecting and analyzing human facial expressions, which can provide important information about a user's emotional state and level of engagement with VR content. Integrating FER into VR can lead to more immersive and engaging experiences, since the technology allows more natural interaction between users and virtual environments, and it holds promise for improving our understanding of human emotions and behavior in virtual settings. In this paper, we present a system that recognizes facial expressions in video feeds and maps them onto virtual characters. We propose a model based on EfficientNetB7, designed to be lightweight and efficient and therefore well suited for deployment on mobile and embedded devices with limited computational resources. We trained and evaluated the model on the AffectNet dataset with multiple optimizers; it achieved its highest accuracy of 83.5% with RMSProp, outperforming the Adam and SGD optimizers. Finally, we used the model to detect human emotions in real time.
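As a rough illustration (not taken from the paper), the following Python/TensorFlow sketch shows how such a pipeline could be assembled: an EfficientNetB7 backbone compiled with RMSProp, followed by a webcam loop for real-time prediction. The 8-category class list, 224x224 input resolution, learning rate, and dropout-plus-softmax head are all assumptions, and the fine-tuning loop on AffectNet is omitted; the paper does not specify these details.

import numpy as np
import tensorflow as tf
import cv2

NUM_CLASSES = 8  # assumption: the 8 AffectNet expression categories
EMOTIONS = ["neutral", "happy", "sad", "surprise",
            "fear", "disgust", "anger", "contempt"]

def build_fer_model(input_shape=(224, 224, 3)):
    # EfficientNetB7 backbone with ImageNet weights; the classification
    # head below (dropout + softmax) is an assumed design, not the paper's.
    base = tf.keras.applications.EfficientNetB7(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    x = tf.keras.layers.Dropout(0.3)(base.output)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    # RMSProp gave the paper's best accuracy; the learning rate is assumed.
    model.compile(
        optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"])
    return model

model = build_fer_model()
# ... fine-tune on AffectNet here before running inference ...

# Real-time inference from a webcam feed (press 'q' to quit).
# Keras EfficientNet models expect raw [0, 255] RGB pixels, so only
# resizing and BGR-to-RGB conversion are needed before prediction.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    probs = model.predict(np.expand_dims(rgb, 0), verbose=0)[0]
    label = EMOTIONS[int(np.argmax(probs))]
    cv2.putText(frame, label, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("FER", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

For embedded deployment, a model built this way would typically be exported to a compact runtime format (e.g., TensorFlow Lite) so inference fits within the device's memory and compute budget.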

Article Details

How to Cite
Mushtaq, S., & Irtaza, S. (2025). A Smart Embedded System to Map Video Feeds of Human Actions through the Virtual Characters. Technical Journal, 29(04), 33-38. Retrieved from https://tj.uettaxila.edu.pk/index.php/technical-journal/article/view/2229
Section
COMPUTER SCIENCE
Author Biography

Syed Aun Irtaza

Associate Professor, Computer Science Department, UET Taxila