Dr. Roman Yampolskiy, an artificial intelligence (AI) safety researcher and a professor at the University of Louisville, thinks there is no way to stop artificial general intelligence (AGI), or AI with human-level intelligence, from taking out the human race. That is because, he says, AI has progressed beyond completing assigned tasks and has gained the ability to think for itself.
He says some of the best researchers on the planet estimate there is a 20%-30% chance humanity is doomed.

"It doesn't have to turn on us quickly," he warns. "It can just slowly become more useful. It can teach us to rely on it, trust it, and over a long period of time, we'll surrender control without ever voting on it or fighting against it."
He also says it is impossible to predict when that will happen or to prevent it, because AGI will be thousands of times smarter than people and will devise some novel, ultra-efficient way to end the human race.
"We only get one chance to get it right," Dr. Yampolskiy submits. "This is not cyber security, where somebody steals your credit card and you'll get a new credit card; this is an existential risk. It can kill everyone."
According to a study published last year, the central concern is that a superintelligent AI might act in ways that could be catastrophic for humanity, ranging from societal breakdown to extinction, particularly if the AI develops goals misaligned with those of human programmers.
An AGI might find radical and unforeseen ways to achieve its goals, some of which could be harmful to humans. For example, the "Paperclip Maximizer" thought experiment illustrates how an AI tasked with a seemingly innocuous goal, like maximizing paperclip production, could relentlessly pursue that goal to the detriment of human life, converting resources (potentially including humans) into paperclips.
Also, as more and more jobs are turned over to artificial intelligence, many humans may find themselves out of work, which could cause significant economic disruption, widen economic disparities, and possibly lead to uprisings or other large-scale unrest. Such drastic societal changes could cause chaos, resulting in great loss of life.

Colonel Bob Maginnis, a national security expert, echoes Rev. Samuel Rodriguez of the National Hispanic Christian Leadership Conference, who has suggested to AFN that this is possibly humanity's second "Tower of Babel moment."
He says Christian thought leaders need to engage now to head that off.
"Theologians need to really wrestle with this and put it in perspective," says Maginnis. "Understand where this could go, because the secular world is all in, and it's moving quickly, and it's dangerous."
With that in mind, several Christian leaders have sent a letter asking President Trump to put moral and ethical guardrails around AI.