Google's DeepMind AI Learns to Be Aggressive in Stressful Situations

With this kind of AI development, we could be at the beginning of something similar to the Terminator series.

Last year at Cambridge University, Stephen Hawking made the bold claim that the creation of artificial intelligence (AI) will be “either the best, or the worst thing, ever to happen to humanity”.

Last year, he also joined Elon Musk and hundreds of other experts in signing an open letter asking governments to ban autonomous weapons that might one day turn against humans.

Elon Musk himself is famous for his many speeches about the dangers of AI research and for his involvement in the field.

Artificial intelligence via huawei.com

The Leverhulme Centre for the Future of Intelligence at the University of Cambridge recently received more than US$12 million in grants to run research projects exploring the future potential of artificial intelligence.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans.

What would this mean for humanity?

Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”.

Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together.

At present, however, we have barely begun to consider its ramifications, good or bad.

We’ve all seen the Terminator series, and the apocalyptic nightmare that the self-aware AI system, Skynet, wrought upon humanity.

Results from recent behavior tests of Google’s new DeepMind AI system are making it clear just how careful we need to be when building the robots of the future.

The brilliant minds behind DeepMind have already built AI that learned to mimic human voices, and now they bring us a new contender.

AI via alphacoders.com

Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it’s about to lose, it opts for “highly aggressive” strategies to ensure that it comes out on top.

DeepMind was then tested on a series of games, one of them called Wolfpack. This time, three AI agents were left to act freely: two of them played as wolves, and one as the prey.

Unlike Gathering, a game where the AI agents compete against each other, this game actively encouraged co-operation, because if both wolves were near the prey when it was captured, they both received a reward.
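To make that reward rule concrete, here is a minimal sketch in Python. The grid movement, capture radius, and reward values are illustrative assumptions for this post, not details taken from DeepMind’s actual study.

```python
# Toy sketch of the Wolfpack reward rule described above.
# All numbers (capture radius, reward values) are illustrative
# assumptions, not values from DeepMind's study.

from dataclasses import dataclass

CAPTURE_RADIUS = 2   # how close a wolf must be to count as "near" the prey
TEAM_REWARD = 1.0    # reward paid to BOTH wolves on a joint capture
LONE_REWARD = 0.25   # smaller reward when only one wolf is near

@dataclass
class Agent:
    x: int
    y: int

def distance(a: Agent, b: Agent) -> int:
    """Chebyshev distance on a grid: steps needed with 8-way movement."""
    return max(abs(a.x - b.x), abs(a.y - b.y))

def capture_rewards(wolf_a: Agent, wolf_b: Agent, prey: Agent) -> tuple[float, float]:
    """Return (reward_a, reward_b) at the moment the prey is captured.

    The co-operative twist: if both wolves are within CAPTURE_RADIUS of
    the prey, both receive the full team reward, so helping your partner
    pays off even if you did not make the capture yourself.
    """
    a_near = distance(wolf_a, prey) <= CAPTURE_RADIUS
    b_near = distance(wolf_b, prey) <= CAPTURE_RADIUS
    if a_near and b_near:
        return TEAM_REWARD, TEAM_REWARD
    if a_near:
        return LONE_REWARD, 0.0
    if b_near:
        return 0.0, LONE_REWARD
    return 0.0, 0.0

# Example: both wolves close in on the prey, so both are rewarded.
print(capture_rewards(Agent(3, 3), Agent(4, 5), Agent(4, 4)))  # (1.0, 1.0)
```

Because the team reward outweighs the lone reward, an agent that simply learns to maximise its own score ends up behaving co-operatively, which is exactly what the DeepMind agents discovered.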

Results?

DeepMind agents learned from Gathering that aggression and selfishness netted them the most favorable result in that particular environment, but they learned from Wolfpack that co-operation can also be the key to greater individual success.

And while these are just simple little computer games, the message is clear.

If AI systems are put in charge of competing interests in real-life situations, it could mean all-out war if their objectives are not balanced against the overall goal of benefiting us humans above all else.

It’s still early days for real AIs, and the team at Google has yet to publish its study in a peer-reviewed paper, but the initial results show that, just because we build them, it doesn’t mean robots and AI systems will automatically have our interests at heart.

AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case, they will reach human performance on virtually every intellectual task.

So, we humans just have to beware…
