CUSTOMIZABLE FACIAL GESTURE RECOGNITION FOR IMPROVED ASSISTIVE TECHNOLOGY

Digital devices have become an essential part of modern life, yet they remain much harder to use for individuals with motor disabilities. Assistive technology based on facial gestures could enable people with upper-limb motor disabilities to interact with electronic interfaces effectively and efficiently. Previous studies proposed solutions that classify a fixed set of predefined facial gestures. In this study, we build a customizable facial gesture recognition system using the Prototypical Network, an effective approach to the few-shot learning problem. Our second contribution is the insight that, since facial gestures are recognized from tracked landmarks, a training set can be synthesized with a graphics engine. We show that our model, trained only on synthetic faces, performs reasonably well on realistic faces.
