Google’s director of engineering, Ray Kurzweil, claims that artificial intelligence will irrevocably “change the nature of humanity itself” by the year 2029.
How will we get there? Metro breaks it down for us, outlining the steps we can expect to take on our way to electronic doom. Right now, they say, we’re surrounded by what’s called “narrow AI” – systems that are smart enough to handle one specific, well-defined task, and nothing more.
Our D.C. office building got a security robot. It drowned itself.
We were promised flying cars, instead we got suicidal robots. pic.twitter.com/rGLTAWZMjn
— Bilal Farooqui (@bilalfarooqui) July 17, 2017
In the next several years, we may see “general AI” – artificial intelligence with human-like ability across many tasks, not just one. At this point, some AI or another will pass the Turing Test. This is a test, devised by Alan Turing in 1950, meant to gauge whether a machine can behave indistinguishably from a human in conversation. Fool the human, win the prize.
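The setup Turing described can be sketched in a few lines of code. This is a hypothetical toy, not a real evaluation harness: the respondents, their canned replies, and the judge are all invented for illustration, and the point is only the shape of the protocol – a judge talks to a hidden human and a hidden machine, then guesses which is which.

```python
import random

def human_respondent(prompt: str) -> str:
    return "Honestly, I'd have to think about that one."

def machine_respondent(prompt: str) -> str:
    # A convincing machine mimics human hedging; here it gives
    # the exact same reply, so it is literally indistinguishable.
    return "Honestly, I'd have to think about that one."

def naive_judge(reply_a: str, reply_b: str) -> str:
    # Faced with indistinguishable replies, the judge can only guess.
    return random.choice(["a", "b"])

def run_trials(n: int = 1000) -> float:
    """Fraction of trials in which the machine fools the judge."""
    fooled = 0
    for _ in range(n):
        # Randomly seat the machine as respondent "a" or "b".
        machine_seat = random.choice(["a", "b"])
        prompt = "What do you think about on a long drive?"
        replies = {
            seat: machine_respondent(prompt) if seat == machine_seat
                  else human_respondent(prompt)
            for seat in ("a", "b")
        }
        guess = naive_judge(replies["a"], replies["b"])
        # The machine "passes" this trial if the judge picks the wrong seat.
        if guess != machine_seat:
            fooled += 1
    return fooled / n

if __name__ == "__main__":
    print(f"Machine fooled the judge in {run_trials():.0%} of trials")
```

With identical replies the judge is reduced to chance, so the machine fools them roughly half the time – which is exactly the bar Turing proposed: a machine passes when the judge can do no better than guessing.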
Then, finally, we’ll hit “artificial super-intelligence.” This is the kind of AI we like to dream about but, I suspect, wouldn’t want to hang out with. It’ll be much, much smarter than us. And who knows what else…
Kurzweil has predicted that, by 2045, computers will be something like a “billion times more powerful than all of the human brains on Earth” (The Guardian has a good write-up on Kurzweil from a few years ago, if you’re interested).
We’ll reach the singularity, and whether we want it or not, humanity will be forever changed. We’ll likely become a part of these machines, augmenting ourselves and even transferring our consciousnesses.
According to Metro, Softbank CEO Masayoshi Son doesn’t agree – he thinks it will happen in 2047. And Nvidia’s head of AI developer relations, Alison Lowndes, believes the singularity will happen in her lifetime.
I don’t know how I feel about all that. On one hand, we could join the robots, and become cyborg abominations with thoughts stored in the cloud – if they let us. Or the AI could just destroy us, or leave us behind altogether.