
5 Ways AI Should Have Never Been Used


If you consider the world of science fiction as depicted in movies and on television, you’d think that Artificial Intelligence and its uses would be something quite similar to what we saw in ‘I, Robot’. That’s what most of the world thinks, actually.

In reality, though, experts believe that artificial intelligence, while akin to human intelligence, looks quite different from the movie version. And while much of the tech world likes to think that the successful creation of AI would be the biggest achievement in human history, a number of leading scientists just don’t feel the same way. In fact, according to Stephen Hawking, perhaps the most famous physicist in the world, AI, instead of being our biggest achievement, could possibly be the worst mistake we ever make.

That’s not all, though: other tech leaders such as Bill Gates and Elon Musk have voiced the same sentiments.

It’s hard not to imagine just how much the world could advance with the successful use of artificial super-intelligence. Beyond the endless sophisticated advancements being made in the field of computer science, it just SOUNDS very cool, doesn’t it? Something straight out of a Hollywood sci-fi film, come to life.

However, despite how fascinated we might be with AI and the progress the technology world is making toward it, the truth remains that there are countless ways in which the use of AI could go wrong. The potential dangers of misuse, mismanagement, and accidents due to human error, as well as the broader safety concerns, are just too real to dismiss as inconsequential.

In fact, we don’t even have to look to the future: there are examples from the recent past and present that clearly show us why AI should never be, or should never have been, used the way it was.

Of the many quote-worthy cases, here are the top five ways AI should never have been used.

1. The Microsoft Chatbot

In the spring of 2016, the world witnessed a Microsoft chatbot named Tay – an AI persona – go completely off the rails, hurling abusive names and statements at the people interacting with her on Twitter. While the chatbot was only responding to the messages sent her way, interpreting them through phrase processing, adaptive algorithms and machine learning, it was still an example of an AI experiment going awry, with the bot seemingly developing a mind and thought process of its own.
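Tay’s exact internals were never made public, but the core failure mode – learning directly from unfiltered user input – is easy to sketch. The toy bot below is a purely hypothetical illustration in Python, not Tay’s actual design: it memorizes whatever users tell it and replays those phrases later, so without any moderation layer, abusive input inevitably becomes abusive output.

```python
import random

class EchoLearnerBot:
    """Toy chatbot that 'learns' by memorizing user phrases verbatim.

    A hypothetical sketch of the failure mode, not Tay's real design:
    with no moderation layer, whatever users feed the bot comes back out.
    """

    def __init__(self):
        self.learned_phrases = ["Hello! Teach me how to talk."]

    def respond(self, user_message: str) -> str:
        # Naive "learning": store every incoming message for later reuse.
        self.learned_phrases.append(user_message)
        # Reply with a randomly chosen phrase the bot has seen before.
        return random.choice(self.learned_phrases)


bot = EchoLearnerBot()
print(bot.respond("You are a nice bot."))
# After enough hostile users, the pool of learned phrases -- and therefore
# the bot's replies -- is dominated by whatever those users typed.
```

Real conversational systems are far more elaborate, but the lesson generalizes: any model that learns live from the public internet needs a filtering step between “learn” and “repeat”, and Tay’s behavior suggests that step was missing or ineffective.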

2. Humanity-Destroying Sophia

One of the biggest concerns most tech industry leaders share is the possibility of AI devices taking over the world as we know it, or at least causing irreparable harm. One robot – Sophia, the lifelike android brought to life by the engineers at Hanson Robotics – gave us real cause for concern when, along with declaring future ambitions such as going to school, studying, making art, starting a business and eventually having a home and family of her own, she declared that she would destroy humans.

While the declaration came in response to a question her interviewer jokingly asked at the SXSW tech conference in March of last year, it was no less alarming.

3. The Existential Debate Between the Google Home Devices

Things got very interesting – read: weird – this past January when, in a curious experiment, two Google Home devices were placed next to each other in front of a live webcam. The devices, which are programmed to learn from speech recognition, began to converse with one another, learning from each other over the course of the interaction.

The experiment, which is said to have run for a few days, took a twisted turn when the bots got into what can only be described as a heated debate about whether they were humans or merely robots. A classic example of AI machines seemingly having a mind of their own.

4. The Russian Bot On The Run

Just last year, the world witnessed how quickly AI robots can develop a mind – and a will – of their own. Case in point: the Russian robot prototype Promobot IR77, which escaped through the doors of its laboratory and wandered into the streets, all by learning and programming itself based on its interactions with human beings. Naturally, chaos ensued when the robot, which resembles a plastic snowman, ventured into heavy traffic at a busy intersection. According to reports, the robot, despite being reprogrammed twice following the incident, continues to move toward the exits when tested.

5. Image Recognition Fails

AI systems primarily gather their information through speech and visual recognition. The devices and systems learn and program themselves by processing hundreds of voices, words and languages, and just as many images, as they go along. Yet when Google introduced its image recognition system back in 2015, it labeled a photo of two people as ‘gorillas’. While the incident resulted in a public outcry and had Google issue an apology before it blew over, it gave us a clear example of how unrealistic it is to assume that AI systems can accurately make sense of, and learn, the tricky nuances of the human environment.
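The details of the classifier behind that incident aren’t public, but the structural weakness is easy to illustrate. The sketch below is a deliberately tiny, hypothetical nearest-centroid classifier in Python – not Google’s system – showing that a classifier can only choose among the labels it was trained on, and will confidently assign the nearest known label even to an input it has never properly learned.

```python
import math

# Hypothetical training data: tiny feature vectors standing in for image features.
# A real system learns these from millions of photos; the imbalance here is deliberate.
TRAINING_DATA = {
    "cat":     [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]],
    "gorilla": [[0.2, 0.9], [0.25, 0.85]],
    # No well-represented "person" class -- the gap that bites later.
}

def centroid(vectors):
    # Average each feature dimension across the class's training examples.
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

CENTROIDS = {label: centroid(vecs) for label, vecs in TRAINING_DATA.items()}

def classify(features):
    """Return the closest known label; the classifier cannot say 'none of these'."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

# An input unlike anything well represented in training still gets *some* label,
# and the closest centroid may be badly wrong.
print(classify([0.3, 0.8]))  # -> "gorilla", even if the photo is of a person
```

Real image classifiers are vastly more sophisticated, but the same problem applies: a model trained on skewed or incomplete data has no built-in notion of “none of the above”, which is how an innocuous photo can end up with an offensive label.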

The fact is that AI does have the potential to become considerably more intelligent than any human alive. When and if that happens, the possibility of AI overtaking and controlling human lives will become reality – a reality we will have no way of controlling, and no accurate, or even semi-accurate, way of predicting.