Perveen, Nazil and Mohan, C. K. and Chen, Yen Wei
(2022)
Expression Modeling Using Dynamic Kernels for Quantitative Assessment of Facial Paralysis.
In: 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2020, 27–29 February 2020, Valletta.
Full text not available from this repository.
Abstract
Facial paralysis is a syndrome that causes difficulty in the movement of facial muscles on one or both sides of the face. In this paper, a quantitative assessment of facial paralysis is proposed that uses dynamic kernels to detect facial paralysis and its various effect levels on a person's face by modeling different facial expressions. Initially, the movements of facial muscles are captured locally by spatio-temporal features for each video. Using the spatio-temporal features extracted from all the videos, a large Gaussian mixture model (GMM) is trained to learn the dynamics of facial muscles globally. To handle these local and global features in variable-length patterns such as videos, we propose a dynamic kernel modeling approach. Dynamic kernels are generally known for handling variable-length data patterns such as speech and videos, either by mapping them into fixed-length data patterns or by creating new kernels, e.g., by selecting discriminative sets of representations obtained from GMM statistics. In the proposed work, we explore three kinds of dynamic kernels, namely explicit mapping kernels, probability-based kernels, and intermediate matching kernels, for the modeling of facial expressions. These kernels are then used as feature vectors for classification with a support vector machine (SVM) to detect the severity levels of facial paralysis. The efficacy of the proposed dynamic kernel modeling approach for the quantitative assessment of facial paralysis is demonstrated on a self-collected facial paralysis video dataset of 39 facially paralyzed patients of different severity levels. The dataset contains patients of different age groups and genders; further, the videos are recorded from seven different view angles to make the proposed model robust to subject and view variations. © 2022, Springer Nature Switzerland AG.
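The pipeline described in the abstract — local spatio-temporal features per video, a global GMM, an explicit-mapping dynamic kernel that converts each variable-length video into a fixed-length vector, and an SVM on top — can be illustrated with a minimal sketch. This is not the authors' implementation; the feature dimensions, component counts, and the simple posterior-weighted supervector mapping are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for spatio-temporal descriptors: each "video" is a
# variable-length sequence of D-dimensional local features.
D, K = 8, 4  # feature dimension and number of GMM components (illustrative)
videos = [rng.normal(size=(rng.integers(20, 40), D)) for _ in range(12)]
labels = np.array([i % 2 for i in range(12)])  # toy severity labels

# 1) Train one global GMM on features pooled from all videos.
gmm = GaussianMixture(n_components=K, covariance_type='diag',
                      random_state=0).fit(np.vstack(videos))

# 2) Explicit mapping: turn each variable-length video into a fixed-length
#    vector of posterior-weighted first-order statistics (a GMM supervector).
def supervector(X):
    post = gmm.predict_proba(X)            # (T, K) component responsibilities
    n_k = post.sum(axis=0) + 1e-8          # soft counts per component
    means = post.T @ X / n_k[:, None]      # (K, D) posterior-weighted means
    return (means - gmm.means_).ravel()    # centered and flattened to K*D

Phi = np.array([supervector(v) for v in videos])   # (n_videos, K*D)

# 3) A linear kernel on these fixed-length vectors feeds the SVM classifier.
clf = SVC(kernel='linear').fit(Phi, labels)
print(Phi.shape)  # → (12, 32)
```

The other two kernel families mentioned in the abstract (probability-based and intermediate matching kernels) would replace step 2 with a different similarity computed from the same GMM statistics.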
IITH Creators: Mohan, C. K. (ORCiD: https://orcid.org/0000-0002-7316-0836)
Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Dynamic kernels; Expression modeling; Facial paralysis; Gaussian mixture model; Spatial and temporal features; Yanagihara grading scales
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: LibTrainee 2021
Date Deposited: 25 Jul 2022 09:10
Last Modified: 25 Jul 2022 09:10
URI: http://raiithold.iith.ac.in/id/eprint/9910
Publisher URL: http://doi.org/10.1007/978-3-030-94893-1_17