Neuroengineers at Columbia University have successfully converted human neural activity into synthesized speech. Led by Dr. Nima Mesgarani, the team trained a deep-learning algorithm to interpret and synthesize the neural patterns of test subjects. In doing so, they were able to “reconstruct the words a person hears with unprecedented clarity.”
First: I just want to make a quick note here that the world did not end after this week’s Super Blood Wolf Moon. I gave it a few days just to be sure. It is what it is. But you know what else hasn’t blown up Earth? CERN’s Large Hadron Collider.
We’re only a week into 2019, and already we’ve seen two pretty amazing events in space exploration: New Horizons’ visit to Ultima Thule, and China’s landing on the far side of the Moon.
In 1984, The Terminator promised me a robot apocalypse circa 2029. And yet, we haven’t quite reached our inevitable obliteration via some Skynet-esque artificial intelligence.
“What are your commands?” asks the spherical robot as it floats through space. Inch by little inch, we’re getting closer to the futures dreamt up in science fiction. In this video, ESA astronaut Alexander Gerst introduces us to Cimon, a robot created to assist astronauts aboard the International Space Station.
China’s Xinhua News Agency, working with the Chinese search engine Sogou.com, has revealed what it calls the “world’s first artificial intelligence news anchor.” Its appearance, facial expressions, and voice were modeled after the real news anchor Zhang Zhao, using machine learning algorithms that essentially “taught” it to recreate his likeness. The final product, what you see …
There was a time when you might have looked at Boston Dynamics’ Atlas robot and thought, hey, I could outrun that. It was a lumbering hunk of metal, after all. Sure, it could open doors and go for quiet walks in the freshly fallen snow, even lift 10-pound boxes. But in a robot apocalypse scenario? …