If we hope to make truly social robots that people feel comfortable talking to, there are a lot of hurdles we still have to clear. Language is just one of them. Face-to-face human interaction is a complex ritual involving not just spoken words but body language, facial expression and countless other nonverbal cues, many of them essentially subconscious.
Consider eye contact, for example. We don't incessantly stare at other people when we speak to them, and we don't expect them to stare back. Unless they're Kylo Ren.
Programming companion robots to understand this kind of social protocol is tricky but crucial, according to researcher Sean Andrist, a Ph.D. student at the University of Wisconsin’s Department of Computer Sciences.