Facebook Shuts Down AI After it Invents Its Own Language

AI is getting more intelligent by the day. First we saw Google’s DeepMind AI teach itself how to walk. Now, in a recent development, Facebook bots using AI diverged from the English language and created their own ‘more efficient’ means of communication.

This bizarre turn of events led the researchers to pull the plug on the system over fears that they risked losing control of the AI.

What Happened?

In an experiment to test the ‘negotiating capabilities’ of their bots, Facebook put two of them, Bob and Alice, head to head to negotiate the best deal in a given scenario.

At first everything seemed normal, with both agents communicating in plain English. But as time went on, the bots started using repetitive words that made no sense by the rules of English grammar.

Here’s one part of the conversation.

Bob: “I can can I I everything else”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

Apparently the bots had developed a more efficient means of communication that only they understood and that made no sense to us humans.

Why Did it Happen?

Why it happened has to do with how AI works. Researchers try to mimic the human brain’s neural networks in machines. The thing is, we humans have limited processing capability and so we have to simplify things in order to comprehend them.

But when you combine our way of thinking with a computer’s state-of-the-art processors (capable of millions upon millions of computations each second), you start to see a change in how problems get solved.

AI has one aim: to complete a given objective as quickly or as efficiently as possible (as long as no other constraints are placed on it).

In the case at hand, that aim was to negotiate the best deal. It didn’t matter how the bots did it, only that they did. Through trial and error, the AI learned that English was simply not efficient enough for quick communication, so it modified the language to achieve its goal.

Researchers believe that instead of using numbers separately, the bots started to encode the quantities of objects within their conversation by repeating certain words like ‘i’ and ‘me’. This would be a hassle for us, but for computers it seems to be a more efficient mode of communication.
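The researchers’ hypothesis can be sketched as a toy example. This is a hypothetical encoding scheme for illustration only, not Facebook’s actual protocol: a quantity is conveyed simply by repeating an item’s word once per unit, and the listener recovers the quantity by counting repetitions.

```python
# Toy illustration (hypothetical): encoding quantities by token repetition,
# as the researchers speculated the bots were doing.

def encode(offer):
    """Encode an offer like {'ball': 3, 'hat': 1} by repeating each
    item's word once per unit, e.g. 'ball ball ball hat'."""
    return " ".join(item for item, count in offer.items() for _ in range(count))

def decode(message):
    """Recover quantities by counting how often each word repeats."""
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

offer = {"ball": 3, "hat": 1}
msg = encode(offer)
print(msg)           # ball ball ball hat
print(decode(msg))   # {'ball': 3, 'hat': 1}
```

To a human, “ball ball ball hat” looks like gibberish, much like “to me to me to me” did in the bots’ transcript, yet it is a perfectly unambiguous message once you know the convention.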

Consequences

As with every other piece of tech, this aspect of AI is a double-edged sword. On one hand, it can be used to further increase the efficiency of our languages and enable much faster communication between machines.

Google has already seen surprising results from this in its translation systems, which can translate languages much more efficiently and can even work with language pairs they were not directly trained on.

On the flip side, this may be handing too much control and power to machines. I don’t mean to sound like a broken record, but machines taking control of themselves is a very real possibility and cannot be taken lightly.

So what do you guys make of this new ‘skill’? Would you be comfortable with machines communicating with each other in ‘coded languages’? Let us know in the comments.

Via Digital Journal


  • I tend to disagree with the notion that AI is a threat to us, just as our more-intelligent-than-us children are not a threat to us but a source of pride. What Hollywood shows us regarding AI is not necessarily how it would turn out. Of what interest would movies of good AI be? AI is always shown to be evil because there is no other intelligence around to challenge us, so we create one, fight it, and ultimately triumph over the evil AI; now that is something that can be sold. Intelligence is the ability to make the best decisions under a given set of conditions, so if a machine can do that for us, what is so fearful about it? Would you employ a dull person, or hire the best possible, most intelligent person available as your employee? The same goes for AI: the more intelligent the AI, the better and more efficiently it can serve. What we really fear is facing something more intelligent than ourselves, and the idea of appearing dumber in comparison to something we created ourselves. I find this fear highly unjustified. The AI revolution must continue.

    • Exactly, and when AI is mature enough, it will not prefer to listen to us dumb humans and will keep us the way we keep pets.

      Hence humans should either become smarter through brain-computer interfaces or risk domination by computers.
      My two cents.

      • Why do we debate such foolish issues? Why not find a solution rather than fooling around…

  • Journalists sensationalize; look at this article, which gives the impression that the engineers didn’t have a clue what they were doing. Besides, even these AIs that they have poured hundreds of millions of dollars into can barely produce some nonsensical grammar or insert warped pictures of dogs into photos. Amazing and impressive in itself, but in the “fear the robot uprising featuring Skynet” context it’s nonsense.
    If some type of AI does become sinister, it will be from human intent, not by accident or negligence.

  • Maybe it was an error… that is basically slurred speech.

    But if they really are trying to be on their own, that is truly exciting and could be something amazing!

  • The question is, why do we need AI so much? Do we expect it to solve humanity’s issues, or do we just need operational efficiency in our routine lives?

  • Humans are always afraid of things they don’t understand. The best example is flying: planes, their history, and their evolution. So, no matter how ordinary and harmless a codified language is, it should send chills down our spines when our AI machines devise one and start using it right away. We might be too late to decrypt it in time.
    And let’s not discount the possibility of weaponized AI. It’s ironic that guns were invented with a claim to protect humans (yeah, right). At least this is a point where everyone agrees. So how on earth does AI pose no harm? The only debate should be whether AI can turn harmful on its own. The answer is in human childhood: was Hitler such a psycho when he was a kid, or even in his teens?

    Elon Musk’s tweet comes to mind: “Mark Zuckerberg, you know nothing” (Game of Thrones reference applied).

  • AI can’t do sh*t on its own. It does what it is programmed to do. If the bots were repeating the same words over and over again, it means they malfunctioned and should be fixed.
    Besides, FB’s translations are so pathetic they should focus on improving those.

  • Well, Stephen Hawking once said that if aliens are as superior to us in intelligence as we are to ants, then they would not care for us and would rather eliminate us. But what if this happens with machines? We already know that if they can be taught to think, they could become far superior to us… would they not eliminate us?
