FME 2023: Facial Micro-Expression Workshop 2023
Ottawa, Canada, October 28 - November 3, 2023

Conference website: https://megc2023.github.io/workshop.html
Submission link: https://easychair.org/conferences/?conf=fme2023
Submission deadline: July 31, 2023
Facial micro-expressions (MEs) are involuntary movements of the face that occur spontaneously when a person experiences an emotion but attempts to suppress the facial expression, typically in high-stakes environments. MEs are very short, generally lasting no more than 500 milliseconds (ms), and this brevity is the telltale sign that distinguishes them from normal facial expressions. Computational analysis and automation of tasks on MEs are emerging areas in face research, with strong interest appearing as recently as 2014. The availability of a few spontaneously induced facial micro-expression datasets has provided the impetus for further advances on the computational side. Because the elicitation and annotation of MEs are challenging, the number of labeled ME samples is limited: so far there are only around 1162 (video) samples across seven public spontaneous databases. Moreover, it is difficult to standardize ME labeling across different annotators. To tackle these problems, we expect that recent advances in pattern recognition, for instance self-supervised learning, one-shot learning, and synthetic data generation, can help improve ME spotting and recognition performance despite the relatively small number of samples.
Furthermore, micro-expression analysis (MEA) faces many challenges. First, the mechanism by which micro-expressions are generated is still not precisely understood, and the value of micro-expressions in lie detection is not sufficiently clear. Second, micro-expression samples with high ecological validity are difficult to induce, and data labeling is time-consuming and labor-intensive, which leads to small sample sizes and imbalanced distributions in MEA tasks. With the development of imaging devices, MEA is no longer limited to traditional RGB video; new data and research trends point towards combining facial data captured from multiple and varied sensors, e.g., depth and thermal cameras, so that different features can be fused for MEA. MEA is also an interdisciplinary field that lends itself to multi-modal research. First, multi-modal data, such as depth information and physiological signals, can improve micro-expression analysis performance. Second, multi-modal micro-expression analysis enables more in-depth research on face and emotion analysis.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. Please note: the submission deadline is 11:59 p.m. Anywhere on Earth (AoE) on the stated deadline date.
- Submission Deadline: 21st July 2023
- Notification: 8th August 2023
- Camera-ready: 12th August 2023
- Paper Format and Review:
- FME2023 workshop papers will go through a double-blind review process.
- Paper format and page limit: The template is the same as the one used for the main conference (ACMMM23) track. Submitted papers (.pdf format) must use the ACM Article Template (https://www.acm.org/publications/proceedings-template), as used by regular ACMMM submissions. Please use the template in traditional double-column format to prepare your submission. For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf (\documentclass[sigconf,anonymous]{acmart}) template. The page limit is 8 pages.
We aim to solicit original works that address a variety of challenges in facial expression research, including but not limited to:
- Facial expression (both micro- and macro-expression) detection/spotting
- Facial expression recognition
- Multi-modal micro-expression analysis, combining modalities such as depth information, heart-rate signals, etc.
- FME feature representation and computational analysis
- Unified FME spot-and-recognise schemes
- Deep learning techniques for FME detection and recognition
- New objective classes for FME analysis
- New FME datasets
- Facial expression data synthesis
- Psychology of FME research
- Facial Action Unit (AU) detection and recognition
- Emotion recognition using AUs
- FME Applications
This workshop explores the intelligent analysis of personal emotions through facial expressions, with particular emphasis on micro-expression analysis as a means of studying hidden emotions. In addition, multi-modal approaches and novel data generation techniques are especially encouraged.
Committees
Organising Committee
- Adrian K. Davison, Manchester Metropolitan University
- Jingting Li, Institute of Psychology, Chinese Academy of Sciences
- Moi Hoon Yap, Manchester Metropolitan University
- John See, Heriot-Watt University Malaysia
- Xiaobai Li, University of Oulu
- Wen-Huang Cheng, National Taiwan University
- Xiaopeng Hong, Harbin Institute of Technology
- Su-Jing Wang, Institute of Psychology, Chinese Academy of Sciences
Program Committee
- TBD
Publication
FME 2023 proceedings will be published by the Association for Computing Machinery.
Contact
All questions about submissions should be emailed to:
Adrian Davison - A.Davison@mmu.ac.uk or
Jingting Li - lijt@psych.ac.cn