Fujitsu and its US division, in collaboration with Carnegie Mellon University's School of Computer Science, have developed AI-based facial expression recognition technology.
"The technology can accurately detect barely perceptible emotional changes, such as a nervous smile or embarrassment. Fujitsu expects the new technology to find use in a variety of applications, including safety," the company announced in a press release.
As the company notes, technologies for detecting changes in facial expression and "reading" human emotions have mainly been developed to catch obvious changes, such as a wide smile or wide-open eyes. To read human faces effectively, it is important to capture the subtle changes associated with states such as confusion, surprise, and stress, Fujitsu believes.
To achieve this, the company's developers used so-called Action Units (AUs), each of which corresponds to the movement of a specific facial muscle. There are approximately 30 AU types, one per facial-muscle movement. For example, if the AI observes two AUs at the same time, "cheek raised" and "lip corner lifted", it may conclude that the person is happy.
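The idea of combining detected AUs into an emotion label can be sketched as a simple rule lookup. This is an illustrative toy, not Fujitsu's actual system: the AU codes follow the standard Facial Action Coding System numbering (AU6 "cheek raiser", AU12 "lip corner puller"), but the rule table and function names are invented for this example.

```python
# Toy mapping from combinations of detected Action Units to an emotion.
# AU numbering follows FACS; the rule set itself is a simplified example.
EMOTION_RULES = {
    frozenset({"AU6", "AU12"}): "happiness",      # raised cheek + lifted lip corner
    frozenset({"AU1", "AU2", "AU5"}): "surprise",  # raised brows + upper lid
    frozenset({"AU4", "AU7"}): "anger",            # lowered brow + tightened lid
}

def infer_emotion(detected_aus):
    """Return an emotion whose full AU pattern appears among the detections."""
    detected = frozenset(detected_aus)
    for pattern, emotion in EMOTION_RULES.items():
        if pattern <= detected:  # every AU in the pattern was observed
            return emotion
    return "neutral"

print(infer_emotion({"AU6", "AU12"}))  # -> happiness
```

A real detector would output per-AU intensities rather than a binary set, but the lookup step stays conceptually the same.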
"By integrating these AUs into its technology, Fujitsu has been able to detect even slight changes in facial expression," the company said.
To detect AUs with high accuracy, standard deep learning methods require large amounts of data. In real situations, however, cameras usually capture people at different angles and distances, making it difficult to build truly large training databases.
"The problem with today's technology is that the AI has to learn from huge datasets for every AU. It must know how to recognize a particular AU from all possible angles and positions. But we don't do that," a Fujitsu spokesperson said in a comment to ZDNet.
To train the AI, Fujitsu developed an adaptation technology that normalizes each facial image. For example, when a person is photographed at an angle, the technology adjusts the image to make it look more like a frontal view. Photos are also rotated, enlarged, or reduced. This makes it possible to train the AI with relatively little data.
According to Fujitsu, the new technology achieves high accuracy in recognizing facial expressions, 81%, even with limited training data.
Microsoft has a similar technology, as Unirobotica writes. However, its AI tool is capable of recognizing only eight basic states: anger, contempt, fear, disgust, happiness, sadness, surprise, or a neutral facial expression. The accuracy of Microsoft's tool in detecting emotions was 60%.