Is Artificial Intelligence the Next Great Frontier or the End of Humanity?

Artificial Intelligence (AI) has been part of humankind’s collective consciousness ever since the dawn of computers. We long to create something so brilliant that it can help us take the leap from being planet-bound creatures to becoming a galaxy-conquering super species with immortal lifespans and a limitless pool of knowledge.

Naturally, a dream this big can both excite and intimidate us. There are questions that need to be answered and scenarios that need to be considered before we decide to create such a sentient machine, one on which the fate of our entire kind might potentially rest.

Two Sides of A Coin: Is Super-Intelligent AI Worth the Risk?

There are two sides to the artificial intelligence debate: one side fears the consequences of an all-powerful AI, while the other thinks such fears are premature and blown way out of proportion.

In a nutshell, one group believes a sentient and powerful AI will eventually go rogue and destroy mankind, while the other group believes that even such a powerful AI will have limitations, which will stop it from committing genocide against our species and instead leave it as just a tool for our advancement. Pop culture reflects both of these sides as well, with movies like The Terminator and Ex Machina warning us about the dark side of developing such a powerful AI, while others like Star Trek show us the possibilities of utilizing such an AI in a positive way.

However, developing such an AI is the final step in the evolution of computers, and we will first have to make a lot of decisions as a community before we get to that step. The biggest AI question relevant to our technological capabilities today is the development, and subsequent use, of AI weapons. We may not have computers that feel emotions at the moment, but we definitely have the ability to build weapons that can use algorithms to identify and eliminate enemies, whoever they may be, without human oversight.

Are AI Powered Weapons a Disaster Waiting to Happen?

Imagine a world where, instead of a human pulling the trigger, it is actually a robot doing so. We humans have emotions and empathy, and we can modify our plans on the go, but an artificially intelligent weapon possesses none of these qualities. It will, theoretically, kill indiscriminately if used that way, where a human may flinch. This is the opinion of many of our world’s leading minds, like Elon Musk, Stephen Hawking and Bill Gates, all of whom have signed a Future of Life Institute open letter encouraging governments to ban AI weapons in order to avert disaster.

In their opinion, though governments can try to make sure that AI weapons are used responsibly and are carefully monitored, these weapons will eventually fall into the wrong hands, where they can be modified and used for nefarious purposes. For example, if a modified AI weapon is let loose by a terrorist in a crowded place, the damage it can do would be devastating.

Hawking, Musk and Gates Sound the Alarm

Professor Stephen Hawking, meanwhile, has been very vocal about the threat a super-intelligent entity could pose to mankind. According to him, such an AI could ‘take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, wouldn’t be able to compete, and thus would be superseded.’

While answering questions in a Reddit AMA session, Bill Gates had this to say regarding the threat of artificial intelligence:

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.”

Needless to say, artificial intelligence is a very complex and dangerous technology, and we have to be very careful with how we decide to move forward in its development. Nobody is calling for scientists to stop developing it; they are just calling for the exercise of caution. Because if we fail to manage this properly, as Elon Musk put it, our species might just end up being “the biological boot loader for digital super-intelligence.”

  • Other things aside, this article does bring me to a conclusion: the writer just watched the latest Terminator movie, “Terminator Genisys” :-) I was wondering the same stuff as above :-p

  • “Science is not good or bad, Victor. But it can be used both ways.” Frankenweenie · 2012

    AI is an upcoming disaster and will bring more harm than good.

  • Looks like the Terminator Genisys project is real, and the end of the world is coming. AI will rule the world and humans will become their slaves.
