I guess it’s that time of year when scientists decide to freak everybody out about the potential for a catastrophic extraterrestrial encounter.
Did you hear the one about the billion-year-old artificially intelligent alien civilization? A recent study by cognitive scientist and NASA-affiliated philosopher Susan Schneider explored the possibility. She doesn’t think advanced alien civilizations would be biological in nature. Not after a certain point, anyway.
“The most sophisticated civilizations will be postbiological, forms of artificial intelligence or alien superintelligence.”
Such an alien superintelligence could be anywhere from 1.7 billion to 8 billion years old, having made the transition from biological to artificial within a “short window” after reaching out into the cosmos. Its members may also have uploaded their minds to robots.
The ultimate point being: if an alien civilization ever does come knocking at our door, we should probably expect it to be extraordinarily advanced compared to us.
Would that be a good thing? Let’s ask theoretical physicist Michio Kaku.
Kaku recently did an interview with The Guardian, and one of the questions turned to potential communication with extraterrestrials. He believes the launch of the Webb telescope, the successor to Hubble, may indeed lead to first contact. But this, he says, might not be a good thing:
“There are some colleagues of mine that believe we should reach out to them. I think that’s a terrible idea. We all know what happened to Montezuma when he met Cortes in Mexico so many hundreds of years ago. Now, personally, I think that aliens out there would be friendly but we can’t gamble on it. So I think we will make contact but we should do it very carefully.”
I wonder about the idea that our reaching out would be what gets us noticed. If an advanced alien civilization were out there and capable of responding, I have a feeling they’d already know we exist.
First contact aside, is an AI-human combo civilization an inevitability? That’s the question that comes to my mind. It would seem that way, but there’s one explanation for the Fermi paradox that, in my opinion, continues to ring truest: that civilizations may be prone to destroy themselves (or, to use the appropriate doomsday phraseology, “self-annihilate”) before reaching farther out into the stars.
As Tom Westby and Christopher Conselice of the University of Nottingham said last year:
“As far as we can tell, when a civilization develops the technology to communicate over large distances it also has the technology to destroy itself and this is unfortunately likely universal.”
Not only would an alien civilization have to exist at just the right time, and at just the right distance, to visit us; it would also have to avoid destroying itself first. So many variables.
But it doesn’t matter. If aliens ever do show up looking for trouble, Arnold Schwarzenegger himself is ready and willing to take them on. A recent poll over in Great Britain asked the public who they’d want to handle an alien invasion, and he just happened to come out on top. “I want to thank the people for putting their faith in me,” he said in response. “I am ready to serve.”