Akvelon wanted to develop software that analyzes an individual’s attitude in real time, detecting whether they are showing positive, neutral, or negative facial expressions. This software could be useful for clients who would like to gauge the sentiment of potential employees during interviews.
The model was built using transfer learning, expanding on a pretrained model initially designed to detect facial expressions. The original model was trained on the FER2013+ dataset. The final layer of the neural network was replaced with one more appropriate for the problem, and the final four layers were retrained. The dataset used for tuning consisted of 49,566 images of individual human faces with various expressions.
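The transfer-learning setup described above can be sketched in PyTorch. This is a minimal illustration, not Akvelon's actual code: the tiny CNN below is a hypothetical stand-in for the pretrained expression model (the real backbone is an SE-Net trained on FER2013+), and the layer sizes and learning rate are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the pretrained facial-expression backbone.
# The final Linear layer plays the role of the original classification head.
backbone = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 12 * 12, 64), nn.ReLU(),
    nn.Linear(64, 8),  # original head: 8 FER2013+ emotion classes
)

# Transfer-learning step: swap the final layer for a 3-class head
# (positive / neutral / negative), as the write-up describes.
backbone[-1] = nn.Linear(64, 3)

# Freeze the whole network, then unfreeze only the last few modules,
# approximating "retraining the final four layers".
for param in backbone.parameters():
    param.requires_grad = False
for layer in list(backbone)[-4:]:
    for param in layer.parameters():
        param.requires_grad = True

# Only the unfrozen parameters are handed to the optimizer for fine-tuning.
optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4
)

# A 48x48 grayscale face crop (the FER2013+ image size) yields 3 class scores.
scores = backbone(torch.randn(1, 1, 48, 48))
```

Freezing the early layers preserves the general facial features the pretrained model already learned, while the retrained layers adapt the classifier to the three-way sentiment labels.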
Technologies used: Deep Learning, Convolutional Neural Networks, Python, PyTorch
Try the free demo here: //interview-assistant.centralus.cloudapp.azure.com/
Akvelon successfully built the product, which offers an accuracy of over 84%. The product was a popular demo at Akvelon’s booth at the North American AI & Big Data Expo 2019.
Pretrained model source:
- Albanie, Samuel, Nagrani, Arsha, Vedaldi, Andrea, and Zisserman, Andrew. “Emotion Recognition in Speech using Cross-Modal Transfer in the Wild.” ACM Multimedia, 2018.
- Hu, Jie, Shen, Li, Albanie, Samuel, Sun, Gang, and Wu, Enhua. “Squeeze-and-Excitation Networks.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.