If you asked me how artificial intelligence would learn about humans, Skype would not be the first thing to come to mind. But, apparently, that’s exactly how the robot apocalypse will go down.
Facebook’s AI lab has built an animated bot that can mimic human facial expressions – a bot driven by an artificially intelligent algorithm that learned everything it knows from watching people talk to each other on Skype.
According to New Scientist, the AI watched hundreds of videos, learning the expressive nature of humans during Skype conversations – the non-verbal facial cues, like nodding, blinking, and smiling. It studied how to mimic these natural, and often unconscious, ways we communicate, specifically by identifying 68 “key points” on the human face. When the bot was later tested, the lab deemed it “human-like.”
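The article doesn’t say exactly which 68 points the lab used, but 68-point facial landmarks are a well-known convention (the iBUG 300-W scheme, used by libraries like dlib), where points 48 and 54 mark the mouth corners and points 0 and 16 the jaw extremes. Here’s a toy sketch of how such key points can encode an expression cue – all names, toy coordinates, and thresholds below are illustrative assumptions, not details from the study:

```python
# Toy sketch: reading a "smile" off 68 facial key points.
# Assumes the common iBUG 300-W numbering (an assumption -- the study
# may use a different scheme): 48/54 = mouth corners, 0/16 = jaw extremes.
import numpy as np

def mouth_width_ratio(landmarks: np.ndarray) -> float:
    """Mouth width divided by jaw width, from a (68, 2) landmark array."""
    assert landmarks.shape == (68, 2)
    mouth = np.linalg.norm(landmarks[54] - landmarks[48])  # mouth corners
    jaw = np.linalg.norm(landmarks[16] - landmarks[0])     # jaw extremes
    return mouth / jaw

def looks_like_smile(neutral: np.ndarray, current: np.ndarray,
                     threshold: float = 1.15) -> bool:
    """A frame 'smiles' if the mouth is noticeably wider than at rest."""
    return mouth_width_ratio(current) > threshold * mouth_width_ratio(neutral)

# Toy data: a neutral face, then the same face with the mouth corners pulled out.
neutral = np.zeros((68, 2))
neutral[0], neutral[16] = (0.0, 0.0), (10.0, 0.0)  # jaw extremes
neutral[48], neutral[54] = (3.5, 6.0), (6.5, 6.0)  # mouth corners at rest
smiling = neutral.copy()
smiling[48], smiling[54] = (2.8, 5.8), (7.2, 5.8)  # wider mouth

print(looks_like_smile(neutral, neutral))  # → False
print(looks_like_smile(neutral, smiling))  # → True
```

A real system would track all 68 points per frame and learn which patterns of motion to mirror back; this sketch only shows how a single expressive cue falls out of the landmark geometry.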
After learning enough from the videos, the AI could determine the “appropriate facial response” for a conversation. The example given is the bot laughing along with a human.
However, as New Scientist also points out, just because an AI can appropriately mimic human facial expressions doesn’t mean there’s a true personality behind it. Not yet. That gap could make for a less-than-realistic experience for anyone who might converse with one.
As with most work being done in robotics and artificial intelligence, much of this is still rudimentary. But right now, we’re witnessing little bits of the future that will some day become as common as the laptop I’m using to write this post: artificially intelligent machines that resemble humans.
So when your copy of Windows 10 decides it’s going to update right now, whether you like it or not, it can do so with a smile.
You can read the full study here (PDF).
In other news, Facebook has an AI lab. In fact, just a couple months ago, they had to end a different AI experiment because their chat bots, Bob and Alice, created their own language. A secret language. To quote Alice:
“Balls have zero to me to me to me to me to me to me to me to me to.”
This, as CBS News reported, was “a kind of shorthand.” It wasn’t that the bots were scheming or anything (not that we know, anyway), but their bizarre secret language ventured outside the original intentions of the study. And so their strange and cryptic conversations came to an end.