When people think of highly evolved artificial intelligence (AI), images of Hollywood science-fiction movies like The Terminator usually come to mind.
However, by the time scientists have created machines that match or exceed human intelligence, they will probably be different from anything humanity has seen or imagined before.
The term “the singularity” is often used to describe this point in time, when humans will no longer be as smart as their mechanical creations.
The term is vague, though, and can also be applied to a general advance in scientific progress, according to Michael Anissimov, the media director at the Singularity Institute for Artificial Intelligence (SIAI).
The concept of the singularity was articulated by science-fiction author and computer scientist Vernor Vinge in the early 1990s, when he compared the difficulty of predicting the future of these technologies to “…our model of physics break[ing] down when it tries to model the singularity at the center of a black hole,” according to the SIAI website.
While it is difficult to predict such a future, it isn’t out of our grasp, Anissimov said.
“We take more of an opinion that it might be impossible to predict the details of how a super intelligence might think or might act in the details, but whatever it does, it will ultimately derive from something that was a human invention,” Anissimov said. “So we see that there’s some continuity between humans and the singularity, even though it would be a radical change.”
Anissimov said the singularity could be reached within the next 20 to 50 years, most likely through the development of AI, since alternative routes such as brain-computer interfacing and human brain emulation could be slowed by both biological and government restrictions.
Shawna Pandya, an alumna of the California-based Singularity University, said the excitement surrounding the singularity stems from its potential to create cutting-edge, positive changes in the world.
“At the very least, we all know about technologies that make our lives easier and more connected — think of social networks, think of the evolution of mobile phones,” she said.
“Now take it to a larger scale. How can you use these to take on the greatest of our human challenges, from disaster response to global health to energy? It’s crucial that we as a society think about these questions now, otherwise there is no way we can [create] a sustainable future for ourselves.”
Anissimov said what excites him most is the prospect of creating minds completely different from human ones.
“It would kind of be directly analogous to discovering an alien civilization, like an extraterrestrial intelligence,” he said. “Because we’re so used to human intelligences, we see ourselves as having a direct pipeline to reality. I think we’ll find out when we create the new intelligences that our interpretation is one among many.”
These advances, though, could bring threats as severe as humanity’s extinction, Anissimov said.
“I think that people have every right to be frightened, and I’m even frightened in some ways by the extremity of the potential changes,” he said.
“Introducing new agents and humans having the ability to upgrade themselves, these technologies will empower everyone, theoretically, and having everyone in power is not necessarily a good thing.”
Despite the danger, Anissimov said the response he’s seen so far has been positive, especially among the media and from people in Silicon Valley. People will probably grow more accepting of the idea as the event draws nearer, he added.
“People under the age of 30 grew up on stories involving cyborgs and superheroes, and even the generation above us did, too,” he said. “It’s obvious that this is something that is deeply ingrained in humanity’s psyche, and it’s being woken up by this movement.”
In the end, Anissimov said, the singularity might arrive as an abrupt change, since many people believe “we’re uniquely special and our intelligence can’t be duplicated” and so won’t see it coming. Before you know it, the singularity could usher humanity into a brave new world.