Scientists Develop New Software That Can “Read” Facial Micro-Expressions To Successfully Interpret Emotions


Researchers in machine emotional intelligence at the SEI Emerging Technology Center (SEI ETC) are developing a prototype software tool that could revolutionize the way security interviews and interrogations are conducted.

The research team is developing software that can identify and ‘read’ micro-expressions – involuntary facial movements that are said to reveal true emotions.

These movements, which last only a fraction of a second, can occur in any region of the face, are universal across races and cultures, and are virtually impossible to fake or suppress.

Given these properties, software that can recognize micro-expressions could greatly enhance our ability to predict and react to potentially dangerous situations.

The foundations of micro-expression recognition were laid by Dr. Paul Ekman, who first linked these subtle, involuntary movements to deception in the 1960s.

Working with clinical patients who would often hide strong negative feelings during interviews, Dr. Ekman sought to determine whether he could spot deception in depressed patients via their micro-expressions, in order to prevent suicide.

His pioneering work later formed the basis of training programs that teach humans to detect micro-expressions, allowing interviewers to read faces after specialized, continuous training.

But can computers be used to reveal the emotions hidden in these subtle and involuntary facial movements?

The answer is that they already are. A software tool like Affectiva, for instance, can successfully recognize and identify emotions based on macro-expressions such as an exaggerated frown or a smile. The problem is that macro-expressions can easily be faked.

Other existing approaches to ‘reading’ micro-expressions rely on painstaking, time-consuming hand-crafted features that search pre-defined areas of the face for facial action units.
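To make the hand-crafted approach concrete, here is a minimal sketch of the general idea: crop fixed, pre-defined face regions and compute a simple fixed feature (an intensity histogram here) for each. The region coordinates and the histogram feature are illustrative assumptions, not the actual pipelines the article alludes to.

```python
import numpy as np

# Pre-defined face regions as (row_start, row_end, col_start, col_end)
# on a 112x112 grayscale face crop. These coordinates are hypothetical.
REGIONS = {
    "left_brow":  (10, 30, 10, 50),
    "right_brow": (10, 30, 60, 100),
    "mouth":      (70, 100, 30, 80),
}

def region_features(face, bins=8):
    """Concatenate a normalized intensity histogram from each region."""
    feats = []
    for r0, r1, c0, c1 in REGIONS.values():
        patch = face[r0:r1, c0:c1]
        hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
        feats.append(hist / hist.sum())  # each region's histogram sums to 1
    return np.concatenate(feats)

# Toy "face": random grayscale pixels standing in for a real image
face = np.random.default_rng(0).integers(0, 256, size=(112, 112))
vec = region_features(face)
print(vec.shape)  # 3 regions x 8 bins = (24,)
```

The key limitation the SEI ETC team points to is visible in the sketch: only pixels inside the hard-coded regions ever contribute to the feature vector, so movements elsewhere on the face are invisible to the detector.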

Mr. Satya Venneti, a lead researcher on the new SEI ETC project, says their prototype software addresses these limitations by using machine-learning features that ‘treat the whole face as a canvas’.

He said the biggest problem they faced was finding a dataset of spontaneous micro-expressions with accurately labeled data to establish the ground truth, as very few existing databases capture subjects’ suppressed reactions, and even fewer capture those reactions at a consistent quality.

The database they selected for the project was the Chinese Academy of Sciences Micro-Expression (CASME) database, compiled as part of a study in which participants were shown a series of videos and asked to show no emotion.

It includes five emotional classes: surprise, happiness, repression, disgust, and ‘other’.

According to Mr. Venneti, their approach uses two convolutional neural networks (CNNs): a ‘spatial’ CNN that has been pre-trained on ImageNet, and a ‘temporal’ CNN for analyzing how the face changes over time.
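One common way to combine two such streams is late fusion: each CNN produces a score per emotion class, the scores are converted to probabilities, and the two streams' probabilities are averaged. The sketch below illustrates that fusion step only; averaging is an assumption for illustration, not necessarily the team's actual method, and the toy logits stand in for real CNN outputs.

```python
import numpy as np

# Five emotion classes, following the CASME labels mentioned in the article.
CLASSES = ["surprise", "happiness", "repression", "disgust", "other"]

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return e / e.sum()

def fuse_streams(spatial_logits, temporal_logits):
    """Average per-class probabilities from the two streams (late fusion)."""
    probs = (softmax(spatial_logits) + softmax(temporal_logits)) / 2.0
    return CLASSES[int(np.argmax(probs))], probs

# Toy logits standing in for the two CNNs' outputs on one clip
spatial = np.array([0.2, 2.5, 0.1, 0.3, 0.4])   # spatial stream favors "happiness"
temporal = np.array([0.1, 1.8, 0.2, 0.5, 0.3])  # temporal stream agrees
label, probs = fuse_streams(spatial, temporal)
print(label)  # → happiness
```

Because the streams see different evidence (appearance in a single frame versus motion across frames), fusing them lets agreement between the two reinforce a prediction while disagreement tempers it.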

He says the approach recognizes micro-expressions with 67.7% accuracy, or, as he put it, “on par with the state of the art”.

Mr. Venneti concedes that the software, still a work in progress, has a long way to go before it becomes applicable in real-world settings, but he is adamant that further advances in this field could revolutionize interrogations, security-checkpoint encounters, and media and video analysis.
