Startup Gets Computers to Read Faces, Seeks Purpose Beyond Ads


David Talbot
MIT Technology Review
October 28, 2013

Face Code: Using images captured from simple webcams, Affectiva’s software tracks the movement of muscles in the lips, eyebrows, and other parts of the face to determine a person’s emotional state. / TechnologyReview.com.

Last year more than 1,000 people in four countries sat down and watched 115 television ads, such as one featuring anthropomorphized M&M candies boogying in a bar. All the while, webcams pointed at their faces and streamed images of their expressions to a server in Waltham, Massachusetts.

In Waltham, an algorithm developed by a startup company called Affectiva performed what is known as facial coding: it tracked the panelists’ raised eyebrows, furrowed brows, smirks, half-smirks, frowns, and smiles. When this face data was later merged with real-world sales data, it turned out that the facial measurements could be used to predict with 75 percent accuracy whether sales of the advertised products would increase, decrease, or stay the same after the commercials aired. By comparison, surveys of panelists’ feelings about the ads could predict the products’ sales with 70 percent accuracy.
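
To make the workflow concrete, here is a minimal, hypothetical sketch in Python of the kind of pipeline described above: facial-expression measurements are summarized per ad, then a three-class classifier predicts whether sales rise, fall, or stay flat. Nothing in it reflects Affectiva's actual code, features, or data; the feature set, the model choice (a scikit-learn random forest), and the synthetic numbers are all assumptions made purely for illustration.

    # Hypothetical sketch only; not Affectiva's method or data.
    # Each of the 115 ads is summarized by a few facial-action features
    # (e.g., smile, brow raise, brow furrow, smirk intensity) averaged
    # across the webcam panel, and labeled by what sales did afterward:
    # 0 = decreased, 1 = stayed the same, 2 = increased.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_ads, n_features = 115, 4

    # Synthetic stand-in data in place of real facial-coding measurements.
    X = rng.random((n_ads, n_features))
    y = rng.integers(0, 3, size=n_ads)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)

    # On real facial-coding features this accuracy is what the article
    # reports as 75 percent; on random synthetic data it will sit near
    # chance (about 33 percent for three classes).
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
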

Although the improvement was statistically modest, it marked a milestone in the field of affective computing. People are notoriously bad at articulating how they feel; it is now clear that machines can not only read some of those feelings but also go a step further and predict the statistical likelihood of later behavior.
